Accumulo-1.7-Integration-Tests - Build # 66 - Unstable!

2015-06-26 Thread elserj
Accumulo-1.7-Integration-Tests - Build # 66 - Unstable:

Check console output at 
https://secure.penguinsinabox.com/jenkins/job/Accumulo-1.7-Integration-Tests/66/
 to view the results.

[jira] [Resolved] (ACCUMULO-3795) flush the scan buffer if non-empty after configured timeout

2015-06-26 Thread Eric Newton (JIRA)

 [ 
https://issues.apache.org/jira/browse/ACCUMULO-3795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Newton resolved ACCUMULO-3795.
---
Resolution: Fixed

 flush the scan buffer if non-empty after configured timeout
 ---

 Key: ACCUMULO-3795
 URL: https://issues.apache.org/jira/browse/ACCUMULO-3795
 Project: Accumulo
  Issue Type: New Feature
  Components: tserver
Affects Versions: 1.6.2
Reporter: Ivan Bella
Assignee: Eric Newton
 Fix For: 1.8.0

  Time Spent: 10m
  Remaining Estimate: 0h

 For scans that may take a long time because some underlying iterator has to 
 scan many keys and only returns one every so often, it would be great if we 
 could force results to be returned to the user before the buffer 
 (table.scan.max.memory) has been completely filled.  I propose that this be 
 time based.  Perhaps we would add a configuration property called something 
 like table.scan.flush.ms.  Note that the buffer would only be flushed (i.e. 
 returned to the client) iff it is non-empty and the table.scan.flush.ms 
 threshold has been reached since the beginning of the nextBatch call in 
 Tablet.java.
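The proposal above can be sketched as a nextBatch-style loop that returns a partial batch once the time threshold passes. This is an illustrative sketch only, not the actual Tablet.java code; the class and parameter names (TimedBatchScanner, maxEntries, scanFlushMs) are hypothetical.

```java
// Hypothetical sketch of the proposed time-based flush; not Accumulo's API.
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class TimedBatchScanner {
    static List<String> nextBatch(Iterator<String> source, long maxEntries,
                                  long scanFlushMs) {
        List<String> batch = new ArrayList<>();
        long start = System.currentTimeMillis();
        while (source.hasNext() && batch.size() < maxEntries) {
            batch.add(source.next());
            // Proposed behavior: return the partial batch once the time
            // threshold has passed, but only if it is non-empty.
            if (!batch.isEmpty()
                    && System.currentTimeMillis() - start >= scanFlushMs) {
                break;
            }
        }
        return batch;
    }
}
```

With a threshold of zero the first entry is returned immediately; with a large threshold the loop fills the batch up to maxEntries as before.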



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ACCUMULO-3922) Update Documentation for proxy client use

2015-06-26 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/ACCUMULO-3922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14603060#comment-14603060
 ] 

Josh Elser commented on ACCUMULO-3922:
--

Also, I don't see you listed on our [Contributors 
page|http://accumulo.apache.org/people.html]. Let me know if you'd like to be 
added. We'd be honored to recognize your continued contributions!

 Update Documentation for proxy client use
 -

 Key: ACCUMULO-3922
 URL: https://issues.apache.org/jira/browse/ACCUMULO-3922
 Project: Accumulo
  Issue Type: Bug
  Components: docs
Affects Versions: 1.7.0
Reporter: Charles Ott
Assignee: Charles Ott
Priority: Trivial
  Labels: documentation
 Fix For: 1.7.1, 1.8.0

  Time Spent: 20m
  Remaining Estimate: 0h

 Would like to make the proxy information more verbose, as it seems to be 
 useful for more secure implementations.





[jira] [Commented] (ACCUMULO-3905) RowDeletingIterator does not work if columns are specified

2015-06-26 Thread Keith Turner (JIRA)

[ 
https://issues.apache.org/jira/browse/ACCUMULO-3905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14603093#comment-14603093
 ] 

Keith Turner commented on ACCUMULO-3905:


I have a [GH 
branch|https://github.com/keith-turner/accumulo/tree/ACCUMULO-3905] created 
from 1.6 where I have implemented allowing iterators to add column families. 
While working on this change I ran into a case where RowFilterTest was relying 
on the current screwy behavior. It took me a bit to figure out what was going 
on and to fix it. Dealing with this has made me hesitant to change the 
behavior in 1.6, because users may be relying on the current screwy behavior. 
They may be relying on it without realizing it, in that the current behavior 
just makes something work as expected for them.

I'm mulling over not changing the behavior and instead changing the javadoc in 
1.6 & 1.7 for fetchColumns to clearly document the screwy behavior w.r.t. 
iterators. Also thinking the behavior should not change in 1.8, but that it 
should just be deprecated in 1.8 with its screwy behavior intact. Could also 
add documentation to some iterators.
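The "iterators add column families" idea from the branch above can be sketched as follows: before seeking its source, an iterator unions the families it needs with whatever the client fetched. This is a hedged sketch with plain collections, not the branch's actual code; the empty-string marker family and the method name are hypothetical stand-ins.

```java
// Sketch: an iterator ensures its delete-marker column family is included in
// the seek, even when the client restricted the fetched columns. Otherwise the
// iterator never sees its markers and deleted rows leak through, as described
// in this issue. Names here are illustrative, not Accumulo internals.
import java.util.HashSet;
import java.util.Set;

public class SeekColumnsSketch {
    // Hypothetical marker family (RowDeletingIterator uses an empty family).
    static final String DELETE_MARKER_FAMILY = "";

    static Set<String> familiesForSeek(Set<String> fetchedFamilies) {
        Set<String> families = new HashSet<>(fetchedFamilies);
        families.add(DELETE_MARKER_FAMILY);
        return families;
    }
}
```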



 RowDeletingIterator does not work if columns are specified
 --

 Key: ACCUMULO-3905
 URL: https://issues.apache.org/jira/browse/ACCUMULO-3905
 Project: Accumulo
  Issue Type: Bug
  Components: tserver
Affects Versions: 1.5.0, 1.6.0
Reporter: Eric Newton
Assignee: Keith Turner
 Fix For: 1.8.0


 (from the mailing list):
 {quote}
 It seems that there might be a bug in RowDeletingIterator:
 after using RowDeletingIterator I get expected results when querying by rowId 
 and CF, e.g. 
 scan \-b myrowid \-c field/abc \-t table  doesn't return deleted rows 
 as expected
 however if I add a column qualifier to the query, I see deleted items.
 scan \-b myrowid \-c field/abc:sample_qualifier \-t table -- returns 
 deleted rows
 After major compaction the problem goes away. 
 {quote}





[jira] [Commented] (ACCUMULO-3918) Different locality groups can compact with different iterator stacks

2015-06-26 Thread Keith Turner (JIRA)

[ 
https://issues.apache.org/jira/browse/ACCUMULO-3918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14603131#comment-14603131
 ] 

Keith Turner commented on ACCUMULO-3918:


RE reusing the iterator stack vs config snapshot. I had a hunch that using a 
config snapshot would be quick to implement cleanly and that reusing iterator 
stacks would not. However, I may be totally wrong about this. I would not want 
to dissuade anyone from exploring reusing iterator stacks as a possible 
solution.
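The config-snapshot option can be sketched as capturing the iterator configuration once, before compacting any locality group, so that every group is built from the same stack. This is an illustrative sketch, not the Compactor's actual code; the class, method, and property names are hypothetical.

```java
// Sketch of the config-snapshot idea: copy the live configuration once, then
// build each locality group's iterator stack from that frozen copy, so a
// concurrent reconfiguration cannot affect a compaction in progress.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CompactorSketch {
    static List<String> compact(List<String> localityGroups,
                                Map<String, String> liveConfig) {
        // Snapshot once; later changes to liveConfig are invisible below.
        Map<String, String> snapshot = new HashMap<>(liveConfig);
        List<String> applied = new ArrayList<>();
        for (String group : localityGroups) {
            // Build the iterator stack from the snapshot, not liveConfig.
            applied.add(group + ":" + snapshot.get("table.iterator.majc"));
        }
        return applied;
    }
}
```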

 Different locality groups can compact with different iterator stacks
 

 Key: ACCUMULO-3918
 URL: https://issues.apache.org/jira/browse/ACCUMULO-3918
 Project: Accumulo
  Issue Type: Improvement
  Components: tserver
Affects Versions: 1.6.0
Reporter: John Vines

 While looking through the compactor code, I noticed that we load the iterator 
 stack for each locality group written and drop it when we're done. This means 
 if a user reconfigures iterators while a locality group is being written, the 
 following locality groups will be compacted inconsistently with the rest of 
 the file.
 We should really read the stack once and be consistent for the entire file 
 written.





[jira] [Commented] (ACCUMULO-3922) Update Documentation for proxy client use

2015-06-26 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/ACCUMULO-3922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14603057#comment-14603057
 ] 

Josh Elser commented on ACCUMULO-3922:
--

Thanks for the patch [~charlescva]!

This patch looks good, but in the future it would be easier to attach it as a 
file to the JIRA issue instead of posting it in a comment.

I'm in the process of committing this. I've removed some extra whitespace and 
added one extra line to the example code to be entirely explicit. I hope you 
don't mind.

{code}
+cellsToUpdate.put(rowid, updates);
{code}

Thanks again for the contribution.
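For context, the added line sits at the end of a map-building sequence like the following sketch. The ColumnUpdate class below is a local stand-in that only mirrors the shape of the Thrift-generated proxy struct, and all names and literals are illustrative, not taken from the committed documentation.

```java
// Sketch of building the proxy's cellsToUpdate map. With the real proxy
// client, the map would then be sent via something like
// client.updateAndFlush(login, table, cellsToUpdate).
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ProxyUpdateSketch {
    // Stand-in for the Thrift struct; only the fields used here.
    static class ColumnUpdate {
        final ByteBuffer colFamily, colQualifier;
        ByteBuffer value;
        ColumnUpdate(ByteBuffer cf, ByteBuffer cq) {
            colFamily = cf;
            colQualifier = cq;
        }
    }

    static Map<ByteBuffer, List<ColumnUpdate>> buildCellsToUpdate(ByteBuffer rowid) {
        ColumnUpdate update = new ColumnUpdate(
                ByteBuffer.wrap("family".getBytes()),
                ByteBuffer.wrap("qualifier".getBytes()));
        update.value = ByteBuffer.wrap("value".getBytes());
        List<ColumnUpdate> updates = new ArrayList<>();
        updates.add(update);
        Map<ByteBuffer, List<ColumnUpdate>> cellsToUpdate = new HashMap<>();
        cellsToUpdate.put(rowid, updates);   // the line added in this commit
        return cellsToUpdate;
    }
}
```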



[jira] [Updated] (ACCUMULO-3922) Update Documentation for proxy client use

2015-06-26 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/ACCUMULO-3922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated ACCUMULO-3922:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)



[jira] [Commented] (ACCUMULO-3783) Unexpected Filesystem Closed exceptions

2015-06-26 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/ACCUMULO-3783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14603015#comment-14603015
 ] 

Josh Elser commented on ACCUMULO-3783:
--

[~dlmar...@comcast.net], I'm pulling down 2.1-SNAPSHOT of commons-vfs now and 
will throw that up against 1.8 to see if I can get _something_ to run.

Can you confirm for me what {{general.vfs.classpaths}} should look like? The 
blog you wrote said a directory in HDFS would work, but reading the code it 
seemed like it was looking for a Java regex. I couldn't get local FS or HDFS 
to work at all yesterday. Thanks.

 Unexpected Filesystem Closed exceptions
 ---

 Key: ACCUMULO-3783
 URL: https://issues.apache.org/jira/browse/ACCUMULO-3783
 Project: Accumulo
  Issue Type: Bug
  Components: master, start, tserver
Affects Versions: 1.7.0
Reporter: Josh Elser
Assignee: Josh Elser
 Fix For: 1.7.1, 1.8.0

 Attachments: ACCUMULO-3783.patch


 Noticed this in testing 1.7.0 on my laptop tonight. Started two tservers, one 
 continuous ingest client and would kill/restart one of the tservers 
 occasionally. 
 {noformat}
 Failed to close map file
   java.io.IOException: Filesystem closed
   at 
 org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:795)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.close(DFSInputStream.java:629)
   at java.io.FilterInputStream.close(FilterInputStream.java:181)
   at 
 org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader.close(CachableBlockFile.java:409)
   at 
 org.apache.accumulo.core.file.rfile.RFile$Reader.close(RFile.java:921)
   at 
 org.apache.accumulo.tserver.tablet.Compactor.compactLocalityGroup(Compactor.java:391)
   at 
 org.apache.accumulo.tserver.tablet.Compactor.call(Compactor.java:214)
   at 
 org.apache.accumulo.tserver.tablet.Tablet._majorCompact(Tablet.java:1981)
   at 
 org.apache.accumulo.tserver.tablet.Tablet.majorCompact(Tablet.java:2098)
   at 
 org.apache.accumulo.tserver.tablet.CompactionRunner.run(CompactionRunner.java:44)
   at 
 org.apache.htrace.wrappers.TraceRunnable.run(TraceRunnable.java:57)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at 
 org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
   at java.lang.Thread.run(Thread.java:745)
 null
   java.nio.channels.ClosedChannelException
   at 
 org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:1622)
   at 
 org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:104)
   at 
 org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
   at java.io.DataOutputStream.write(DataOutputStream.java:107)
   at 
 org.apache.accumulo.core.file.rfile.bcfile.SimpleBufferedOutputStream.flushBuffer(SimpleBufferedOutputStream.java:39)
   at 
 org.apache.accumulo.core.file.rfile.bcfile.SimpleBufferedOutputStream.flush(SimpleBufferedOutputStream.java:68)
   at 
 org.apache.hadoop.io.compress.CompressionOutputStream.flush(CompressionOutputStream.java:69)
   at 
 org.apache.accumulo.core.file.rfile.bcfile.Compression$FinishOnFlushCompressionStream.flush(Compression.java:66)
   at 
 java.io.BufferedOutputStream.flush(BufferedOutputStream.java:141)
   at 
 org.apache.accumulo.core.file.rfile.bcfile.BCFile$Writer$WBlockState.finish(BCFile.java:233)
   at 
 org.apache.accumulo.core.file.rfile.bcfile.BCFile$Writer$BlockAppender.close(BCFile.java:320)
   at 
 org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$BlockWrite.close(CachableBlockFile.java:121)
   at 
 org.apache.accumulo.core.file.rfile.RFile$Writer.closeBlock(RFile.java:398)
   at 
 org.apache.accumulo.core.file.rfile.RFile$Writer.append(RFile.java:382)
   at 
 org.apache.accumulo.tserver.tablet.Compactor.compactLocalityGroup(Compactor.java:356)
   at 
 org.apache.accumulo.tserver.tablet.Compactor.call(Compactor.java:214)
   at 
 org.apache.accumulo.tserver.tablet.Tablet._majorCompact(Tablet.java:1981)
   at 
 org.apache.accumulo.tserver.tablet.Tablet.majorCompact(Tablet.java:2098)
   at 
 org.apache.accumulo.tserver.tablet.CompactionRunner.run(CompactionRunner.java:44)
   at 
 org.apache.htrace.wrappers.TraceRunnable.run(TraceRunnable.java:57)
   at 
 

[jira] [Updated] (ACCUMULO-3922) Update Documentation for proxy client use

2015-06-26 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/ACCUMULO-3922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated ACCUMULO-3922:
-
Fix Version/s: 1.8.0
               1.7.1
                   (was: 1.7.0)



Accumulo-Master - Build # 1644 - Fixed

2015-06-26 Thread Apache Jenkins Server
The Apache Jenkins build system has built Accumulo-Master (build #1644)

Status: Fixed

Check console output at https://builds.apache.org/job/Accumulo-Master/1644/ to 
view the results.

[jira] [Commented] (ACCUMULO-3783) Unexpected Filesystem Closed exceptions

2015-06-26 Thread Dave Marion (JIRA)

[ 
https://issues.apache.org/jira/browse/ACCUMULO-3783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14603254#comment-14603254
 ] 

Dave Marion commented on ACCUMULO-3783:
---

[~elserj] Take a look at the example config[1]. Basically, set 
general.classpaths and general.vfs.classpaths. You need 'location/.*.jar' if 
you only want to pick up jar files in that location. I'm short on time right 
now, but I can help solve issues if you have them over the weekend or later 
tonight. FYI, I have only deployed this with 1.6; not sure if changes are 
necessary for 1.7 or 1.8, but happy to help.

[1] 
https://github.com/apache/accumulo/blob/8c1d2d0c147220ca375006a8a7e7e481241651a7/assemble/conf/examples/vfs-classloader/accumulo-site.xml
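The shape of the setting Dave describes, as it would appear in accumulo-site.xml, is sketched below. The HDFS URL and path are placeholders, not values taken from the linked example config; note the '.*.jar' pattern he mentions for matching only jar files.

```xml
<!-- Sketch only; adjust the namenode address and path for your cluster. -->
<property>
  <name>general.vfs.classpaths</name>
  <value>hdfs://localhost:8020/accumulo/classpath/.*.jar</value>
  <description>VFS classloader sources; the regex-style pattern limits
    loading to jar files under the given directory.</description>
</property>
```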


[jira] [Commented] (ACCUMULO-3783) Unexpected Filesystem Closed exceptions

2015-06-26 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/ACCUMULO-3783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14603261#comment-14603261
 ] 

Josh Elser commented on ACCUMULO-3783:
--

[~dlmarion], I'll cross-reference against the example configs. I had looked at 
the blog post you wrote, and that didn't have the {{.*.jar}}, which sent me 
down a rabbit hole. Knowing what is supposed to work is helpful. Thanks for 
taking a moment to write back.

I'll report back how things go.


[jira] [Resolved] (ACCUMULO-3327) tablet server re-reads the bulk loaded flags with every bulk import request

2015-06-26 Thread Eric Newton (JIRA)

 [ 
https://issues.apache.org/jira/browse/ACCUMULO-3327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Newton resolved ACCUMULO-3327.
---
Resolution: Fixed

 tablet server re-reads the bulk loaded flags with every bulk import request
 ---

 Key: ACCUMULO-3327
 URL: https://issues.apache.org/jira/browse/ACCUMULO-3327
 Project: Accumulo
  Issue Type: Sub-task
  Components: tserver
Affects Versions: 1.5.1, 1.6.0, 1.6.1
Reporter: Eric Newton
Assignee: Eric Newton
 Fix For: 1.8.0

  Time Spent: 20m
  Remaining Estimate: 0h

 On a very large cluster, which bulk loads many thousands of files every few 
 minutes, I noticed the servers would reload the bulk imported flags with 
 every request.  This put a lot of pressure on the accumulo.metadata table, 
 and it just isn't necessary: the tablet should be tracking which bulk import 
 files it has loaded, except when it is first loaded.
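The caching described above can be sketched as a tablet-side set of loaded files that is populated from the metadata table only once, on first use. This is an illustrative sketch, not the Tablet implementation; the class name and the Supplier-based metadata read are hypothetical.

```java
// Sketch: remember which bulk-imported files this tablet has already seen so
// repeated bulk import requests do not each trigger a metadata-table read.
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.function.Supplier;

public class BulkLoadedCache {
    private Set<String> loadedFiles;                 // null until first load
    private final Supplier<Set<String>> metadataRead;

    BulkLoadedCache(Supplier<Set<String>> metadataRead) {
        this.metadataRead = metadataRead;
    }

    // Only the first call touches the metadata table; later calls hit the cache.
    synchronized boolean isLoaded(String file) {
        if (loadedFiles == null) {
            loadedFiles = new HashSet<>(metadataRead.get());
        }
        return loadedFiles.contains(file);
    }

    // New bulk imports update the cache directly instead of forcing a re-read.
    synchronized void markLoaded(String file) {
        if (loadedFiles == null) {
            loadedFiles = new HashSet<>(metadataRead.get());
        }
        loadedFiles.add(file);
    }
}
```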





[jira] [Commented] (ACCUMULO-3783) Unexpected Filesystem Closed exceptions

2015-06-26 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/ACCUMULO-3783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14603298#comment-14603298
 ] 

Josh Elser commented on ACCUMULO-3783:
--

One more quick question for you, Dave. Is it expected that I can run with only 
the accumulo-start and slf4j/log jars in general.classpaths, with everything 
else loaded via VFS? I had thought that was the point, but the example configs 
still list all of the accumulo-*.jar files in general.classpaths.


[jira] [Commented] (ACCUMULO-3905) RowDeletingIterator does not work if columns are specified

2015-06-26 Thread Keith Turner (JIRA)

[ 
https://issues.apache.org/jira/browse/ACCUMULO-3905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14603334#comment-14603334
 ] 

Keith Turner commented on ACCUMULO-3905:


bq. Given that this changes behavior, and that the best solution we have still 
has a pretty big limitation

I agree.  After the change the behavior is still strange and confusing.  



[jira] [Commented] (ACCUMULO-3905) RowDeletingIterator does not work if columns are specified

2015-06-26 Thread Christopher Tubbs (JIRA)

[ 
https://issues.apache.org/jira/browse/ACCUMULO-3905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14603278#comment-14603278
 ] 

Christopher Tubbs commented on ACCUMULO-3905:
-

Given that this changes behavior, and that the best solution we have still has 
a pretty big limitation, I don't think it should be applied to 1.6 or 1.7. For 
1.8, I'm -0 on making the change. By that, I mean that I won't object to it 
being committed, but I'm leaning away from it.



[jira] [Commented] (ACCUMULO-3918) Different locality groups can compact with different iterator stacks

2015-06-26 Thread John Vines (JIRA)

[ 
https://issues.apache.org/jira/browse/ACCUMULO-3918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14603235#comment-14603235
 ] 

John Vines commented on ACCUMULO-3918:
--

I mean, ultimately it's just a reseek with a new column family filter, I would 
think...

 Different locality groups can compact with different iterator stacks
 

 Key: ACCUMULO-3918
 URL: https://issues.apache.org/jira/browse/ACCUMULO-3918
 Project: Accumulo
  Issue Type: Improvement
  Components: tserver
Affects Versions: 1.6.0
Reporter: John Vines

 While looking through the compactor code, I noticed that we load the iterator 
 stack for each locality group written and drop it when we're done. This means 
 if a user reconfigures iterators while a locality group is being written, the 
 following locality groups will be compacted inconsistently with the rest of 
 the file.
 We should really read the stack once and be consistent for the entire file 
 written.
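 A minimal sketch of the consistency the ticket asks for (names like
 compactAllGroups and the property key are illustrative, not the actual
 Compactor internals): snapshot the iterator configuration once, before any
 locality group is written, so a concurrent reconfiguration cannot make
 groups within one file inconsistent.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class IteratorSnapshotSketch {
    // Hypothetical stand-in for loading the compaction iterator stack: the
    // key point is that the snapshot is taken once, before the per-group loop.
    public static List<String> compactAllGroups(Map<String, String> liveConf,
                                                List<String> groups) {
        Map<String, String> snapshot = Map.copyOf(liveConf); // immutable, read once
        List<String> written = new ArrayList<>();
        for (String group : groups) {
            // Every locality group sees the same iterator stack, even if
            // liveConf is mutated concurrently by a reconfiguration.
            written.add(group + " -> " + snapshot.get("table.iterator.majc.stack"));
        }
        return written;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("table.iterator.majc.stack", "vers,filter");
        System.out.println(compactAllGroups(conf, List.of("lg1", "lg2")));
    }
}
```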



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ACCUMULO-3783) Unexpected Filesystem Closed exceptions

2015-06-26 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/ACCUMULO-3783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14603604#comment-14603604
 ] 

Josh Elser commented on ACCUMULO-3783:
--

Cool, thanks again for the details, Dave. It seems like commons-vfs2 
2.1-SNAPSHOT is working a lot better than 2.0 did -- as in, at all. We should 
do a better job of advertising that detail. I've at least got the classloader 
loading from a local file via general.vfs.classpaths, and will continue to 
poke around.

 Unexpected Filesystem Closed exceptions
 ---

 Key: ACCUMULO-3783
 URL: https://issues.apache.org/jira/browse/ACCUMULO-3783
 Project: Accumulo
  Issue Type: Bug
  Components: master, start, tserver
Affects Versions: 1.7.0
Reporter: Josh Elser
Assignee: Josh Elser
 Fix For: 1.7.1, 1.8.0

 Attachments: ACCUMULO-3783.patch


 Noticed this in testing 1.7.0 on my laptop tonight. Started two tservers, one 
 continuous ingest client and would kill/restart one of the tservers 
 occasionally. 
 {noformat}
 Failed to close map file
   java.io.IOException: Filesystem closed
   at 
 org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:795)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.close(DFSInputStream.java:629)
   at java.io.FilterInputStream.close(FilterInputStream.java:181)
   at 
 org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader.close(CachableBlockFile.java:409)
   at 
 org.apache.accumulo.core.file.rfile.RFile$Reader.close(RFile.java:921)
   at 
 org.apache.accumulo.tserver.tablet.Compactor.compactLocalityGroup(Compactor.java:391)
   at 
 org.apache.accumulo.tserver.tablet.Compactor.call(Compactor.java:214)
   at 
 org.apache.accumulo.tserver.tablet.Tablet._majorCompact(Tablet.java:1981)
   at 
 org.apache.accumulo.tserver.tablet.Tablet.majorCompact(Tablet.java:2098)
   at 
 org.apache.accumulo.tserver.tablet.CompactionRunner.run(CompactionRunner.java:44)
   at 
 org.apache.htrace.wrappers.TraceRunnable.run(TraceRunnable.java:57)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at 
 org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
   at java.lang.Thread.run(Thread.java:745)
 null
   java.nio.channels.ClosedChannelException
   at 
 org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:1622)
   at 
 org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:104)
   at 
 org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
   at java.io.DataOutputStream.write(DataOutputStream.java:107)
   at 
 org.apache.accumulo.core.file.rfile.bcfile.SimpleBufferedOutputStream.flushBuffer(SimpleBufferedOutputStream.java:39)
   at 
 org.apache.accumulo.core.file.rfile.bcfile.SimpleBufferedOutputStream.flush(SimpleBufferedOutputStream.java:68)
   at 
 org.apache.hadoop.io.compress.CompressionOutputStream.flush(CompressionOutputStream.java:69)
   at 
 org.apache.accumulo.core.file.rfile.bcfile.Compression$FinishOnFlushCompressionStream.flush(Compression.java:66)
   at 
 java.io.BufferedOutputStream.flush(BufferedOutputStream.java:141)
   at 
 org.apache.accumulo.core.file.rfile.bcfile.BCFile$Writer$WBlockState.finish(BCFile.java:233)
   at 
 org.apache.accumulo.core.file.rfile.bcfile.BCFile$Writer$BlockAppender.close(BCFile.java:320)
   at 
 org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$BlockWrite.close(CachableBlockFile.java:121)
   at 
 org.apache.accumulo.core.file.rfile.RFile$Writer.closeBlock(RFile.java:398)
   at 
 org.apache.accumulo.core.file.rfile.RFile$Writer.append(RFile.java:382)
   at 
 org.apache.accumulo.tserver.tablet.Compactor.compactLocalityGroup(Compactor.java:356)
   at 
 org.apache.accumulo.tserver.tablet.Compactor.call(Compactor.java:214)
   at 
 org.apache.accumulo.tserver.tablet.Tablet._majorCompact(Tablet.java:1981)
   at 
 org.apache.accumulo.tserver.tablet.Tablet.majorCompact(Tablet.java:2098)
   at 
 org.apache.accumulo.tserver.tablet.CompactionRunner.run(CompactionRunner.java:44)
   at 
 org.apache.htrace.wrappers.TraceRunnable.run(TraceRunnable.java:57)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 

[jira] [Commented] (ACCUMULO-3783) Unexpected Filesystem Closed exceptions

2015-06-26 Thread Dave Marion (JIRA)

[ 
https://issues.apache.org/jira/browse/ACCUMULO-3783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14603621#comment-14603621
 ] 

Dave Marion commented on ACCUMULO-3783:
---

Yes, VFS-487 is fixed also. I just wish they would release it. To be fair, VFS 
2.0 does work and I have been using it in production for quite a while. 
Hopefully 2.1 will remove some of the annoyances.


[jira] [Comment Edited] (ACCUMULO-3783) Unexpected Filesystem Closed exceptions

2015-06-26 Thread Dave Marion (JIRA)

[ 
https://issues.apache.org/jira/browse/ACCUMULO-3783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14603621#comment-14603621
 ] 

Dave Marion edited comment on ACCUMULO-3783 at 6/26/15 9:29 PM:


Yes, VFS-487 is fixed also. I just wish they would release it. To be fair, VFS 
2.0 does work and I have been using it in production for quite a while. 
Hopefully 2.1 will remove some of the annoyances.

 Edit: VFS-487 should resolve ACCUMULO-1507


was (Author: dlmarion):
Yes, VFS-487 is fixed also. I just wish they would release it. To be fair, VFS 
2.0 does work and I have been using it in production for quite a while. 
Hopefully 2.1 will remove some of the annoyances.


[jira] [Resolved] (ACCUMULO-3843) Release 1.5.3

2015-06-26 Thread Christopher Tubbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ACCUMULO-3843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christopher Tubbs resolved ACCUMULO-3843.
-
Resolution: Done

 Release 1.5.3
 -

 Key: ACCUMULO-3843
 URL: https://issues.apache.org/jira/browse/ACCUMULO-3843
 Project: Accumulo
  Issue Type: Task
Reporter: Christopher Tubbs
Assignee: Christopher Tubbs
Priority: Minor
  Time Spent: 2h 20m
  Remaining Estimate: 0h





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ACCUMULO-3783) Unexpected Filesystem Closed exceptions

2015-06-26 Thread Dave Marion (JIRA)

[ 
https://issues.apache.org/jira/browse/ACCUMULO-3783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14603626#comment-14603626
 ] 

Dave Marion commented on ACCUMULO-3783:
---

That could be. We are using it with 1.6.


[jira] [Commented] (ACCUMULO-925) Launch scripts should use a PIDfile

2015-06-26 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/ACCUMULO-925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14603638#comment-14603638
 ] 

Billie Rinaldi commented on ACCUMULO-925:
-

I have a feeling we need to add a command line parameter that will allow us to 
specify which conf dir to use.
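A rough sketch of what that could look like in the launch scripts (the flag 
name --config-dir and the helper are purely illustrative, not a proposed 
patch):

```shell
# Hypothetical argument parsing for the launch scripts: let the caller pick
# the conf dir explicitly, falling back to ACCUMULO_CONF_DIR or ./conf.
pick_conf_dir() {
  conf_dir="${ACCUMULO_CONF_DIR:-./conf}"
  while [ $# -gt 0 ]; do
    case "$1" in
      --config-dir) conf_dir="$2"; shift 2 ;;
      *) shift ;;
    esac
  done
  echo "$conf_dir"
}

pick_conf_dir --config-dir /etc/accumulo/conf start tserver
```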

 Launch scripts should use a PIDfile
 ---

 Key: ACCUMULO-925
 URL: https://issues.apache.org/jira/browse/ACCUMULO-925
 Project: Accumulo
  Issue Type: Improvement
  Components: scripts
Reporter: Christopher Tubbs
Assignee: Billie Rinaldi
 Fix For: 1.8.0

 Attachments: ACCUMULO-925.1.patch, ACCUMULO-925.2.patch


 Start scripts should create PIDfiles to store the PID of running processes in 
 a well known location (example: /var/run/accumulo/tserver.pid or 
 $ACCUMULO_HOME/tserver.pid), for the following benefits:
 # Identify running services on a machine without executing and parsing the 
 system process list, so stop scripts can kill them when they are unresponsive.
 # Prevent multiple instances of the same application from starting up (an 
 environment variable for the location of the PIDfile can be used to allow 
 multiple instances if it is desirable to do so).
 # Potentially provide an alternate mechanism for terminating a process by 
 deleting its PIDfile rather than its lock in Zookeeper.
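 As a rough illustration of benefits 1 and 2 above (the paths and the `sleep`
 placeholder are hypothetical, not the attached patch):

```shell
# Minimal PIDfile start/stop sketch; "sleep 300" stands in for the real
# tserver launch command, and /tmp is used instead of /var/run for the demo.
PID_DIR="${ACCUMULO_PID_DIR:-/tmp/accumulo-run}"
PIDFILE="$PID_DIR/tserver.pid"
mkdir -p "$PID_DIR"

# Benefit 2: refuse to start a second instance if a live pid is recorded.
if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
  echo "tserver already running (pid $(cat "$PIDFILE"))"
  exit 1
fi

sleep 300 &
echo $! > "$PIDFILE"
echo "started pid $(cat "$PIDFILE")"

# Benefit 1: stop the process from the recorded pid, without parsing the
# system process list.
kill "$(cat "$PIDFILE")" && rm -f "$PIDFILE"
echo "stopped"
```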



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (ACCUMULO-2917) Accumulo shell remote debugger settings.

2015-06-26 Thread Christopher Tubbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ACCUMULO-2917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christopher Tubbs updated ACCUMULO-2917:

Fix Version/s: 1.8.0

 Accumulo shell remote debugger settings.
 

 Key: ACCUMULO-2917
 URL: https://issues.apache.org/jira/browse/ACCUMULO-2917
 Project: Accumulo
  Issue Type: Improvement
  Components: scripts
Reporter: Vicky Kak
Assignee: Vicky Kak
Priority: Trivial
 Fix For: 1.8.0

 Attachments: ACCUMULO-2917-2.patch, ACCUMULO-2917.patch


 While trying to get the remote debugger running with Accumulo, I found that 
 the accumulo shell command needs the following changes:
 1) In accumulo-env.sh:
 test -z "$ACCUMULO_SHELL_OPTS" && export ACCUMULO_SHELL_OPTS="-Xmx128m 
 -Xms64m -Xrunjdwp:server=y,transport=dt_socket,address=4002,suspend=n"
 2) In accumulo.sh, include an additional case:
 shell) export ACCUMULO_OPTS="${ACCUMULO_GENERAL_OPTS} 
 ${ACCUMULO_SHELL_OPTS}" ;;
 We can't define the debugger port in $ACCUMULO_OTHER_OPTS in accumulo-env.sh, 
 as that port would already be bound when start-all.sh is called.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (ACCUMULO-3924) NPE running randomwalk

2015-06-26 Thread Josh Elser (JIRA)
Josh Elser created ACCUMULO-3924:


 Summary: NPE running randomwalk
 Key: ACCUMULO-3924
 URL: https://issues.apache.org/jira/browse/ACCUMULO-3924
 Project: Accumulo
  Issue Type: Bug
  Components: test
Reporter: Josh Elser
 Fix For: 1.8.0


{noformat}
java.lang.NullPointerException
at javax.xml.validation.SchemaFactory.newSchema(SchemaFactory.java:670)
at 
org.apache.accumulo.test.randomwalk.Module.loadFromXml(Module.java:496)
at org.apache.accumulo.test.randomwalk.Module.init(Module.java:179)
at 
org.apache.accumulo.test.randomwalk.Framework.getNode(Framework.java:84)
at org.apache.accumulo.test.randomwalk.Framework.run(Framework.java:58)
at 
org.apache.accumulo.test.randomwalk.Framework.main(Framework.java:119)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.accumulo.start.Main$2.run(Main.java:130)
at java.lang.Thread.run(Thread.java:745)
{noformat}
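The trace points at SchemaFactory.newSchema, which throws a bare NPE when 
handed a null URL. One plausible cause (an assumption, not confirmed from the 
code) is Module.loadFromXml passing along a getResource lookup that returned 
null because the .xsd was not on the classpath. A standalone reproduction of 
that failure mode:

```java
import javax.xml.XMLConstants;
import javax.xml.validation.SchemaFactory;
import java.net.URL;

public class SchemaNpeDemo {
    // Returns true if newSchema(null URL) throws the NPE seen in the trace.
    public static boolean reproducesNpe() {
        SchemaFactory factory =
            SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        // "/randomwalk/module.xsd" is an illustrative path that is not on
        // the classpath, so getResource returns null.
        URL missing = SchemaNpeDemo.class.getResource("/randomwalk/module.xsd");
        try {
            factory.newSchema(missing);
            return false;
        } catch (NullPointerException expected) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("NPE reproduced: " + reproducesNpe());
    }
}
```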



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ACCUMULO-3783) Unexpected Filesystem Closed exceptions

2015-06-26 Thread Dave Marion (JIRA)

[ 
https://issues.apache.org/jira/browse/ACCUMULO-3783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14603585#comment-14603585
 ] 

Dave Marion commented on ACCUMULO-3783:
---

[~elserj] I think you need accumulo-start and the commons-vfs jar local on 
each node. Take a look at the bootstrap_hdfs.sh script; it sets this up for 
you. I was trying to get to the point of versioning accumulo-start separately 
from the rest of Accumulo, so that you could just drop a different version of 
Accumulo into HDFS and run with it. You would likely need to restart the 
TServers and other components, though, as they would not pick up the changes 
in the parent classloader.

Having said that, you can also keep all of the Accumulo jars local to each 
node and only put your application jars in HDFS. Not sure if you followed 
along during the development of this, but you can also have per-table 
classpaths, which is pretty cool. You define a named context, let's call it 
'foo'. Then you configure your tables to use the 'foo' classpath context. 
'foo' points to some directory in HDFS that holds all of your application 
jars. Now, let's say you want to test a new version of your iterators at 
scale: define 'foo2', put the new jars in HDFS, clone your tables, and change 
the clones' classpath context to 'foo2'. Now you can test at scale without 
taking up any extra space (assuming you are not ingesting; it would also be 
really nice to disable compactions on a clone).

Let me know if you need any help...
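For anyone following along, the per-table context setup Dave describes looks 
roughly like this in the Accumulo shell (the context name 'foo', the table 
name, and the HDFS path are placeholders):

```
config -s general.vfs.context.classpath.foo=hdfs://namenode:8020/apps/foo/.*.jar
config -t mytable -s table.classpath.context=foo
```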


[jira] [Commented] (ACCUMULO-3783) Unexpected Filesystem Closed exceptions

2015-06-26 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/ACCUMULO-3783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14603624#comment-14603624
 ] 

Josh Elser commented on ACCUMULO-3783:
--

bq. To be fair, VFS 2.0 does work and I have been using it in production for 
quite a while

Hrm, well, at least 2.0 didn't work for me yesterday. Maybe that's just in 
1.8.0-SNAPSHOT. Who knows.


[jira] [Commented] (ACCUMULO-3783) Unexpected Filesystem Closed exceptions

2015-06-26 Thread Dave Marion (JIRA)

[ 
https://issues.apache.org/jira/browse/ACCUMULO-3783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14603633#comment-14603633
 ] 

Dave Marion commented on ACCUMULO-3783:
---

[~elserj] I just noticed VFS-570. Might be worth a look to make sure nothing 
gets broken.

 Unexpected Filesystem Closed exceptions
 ---

 Key: ACCUMULO-3783
 URL: https://issues.apache.org/jira/browse/ACCUMULO-3783
 Project: Accumulo
  Issue Type: Bug
  Components: master, start, tserver
Affects Versions: 1.7.0
Reporter: Josh Elser
Assignee: Josh Elser
 Fix For: 1.7.1, 1.8.0

 Attachments: ACCUMULO-3783.patch


 Noticed this in testing 1.7.0 on my laptop tonight. Started two tservers, one 
 continuous ingest client and would kill/restart one of the tservers 
 occasionally. 
 {noformat}
 Failed to close map file
   java.io.IOException: Filesystem closed
   at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:795)
   at org.apache.hadoop.hdfs.DFSInputStream.close(DFSInputStream.java:629)
   at java.io.FilterInputStream.close(FilterInputStream.java:181)
   at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader.close(CachableBlockFile.java:409)
   at org.apache.accumulo.core.file.rfile.RFile$Reader.close(RFile.java:921)
   at org.apache.accumulo.tserver.tablet.Compactor.compactLocalityGroup(Compactor.java:391)
   at org.apache.accumulo.tserver.tablet.Compactor.call(Compactor.java:214)
   at org.apache.accumulo.tserver.tablet.Tablet._majorCompact(Tablet.java:1981)
   at org.apache.accumulo.tserver.tablet.Tablet.majorCompact(Tablet.java:2098)
   at org.apache.accumulo.tserver.tablet.CompactionRunner.run(CompactionRunner.java:44)
   at org.apache.htrace.wrappers.TraceRunnable.run(TraceRunnable.java:57)
   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
   at java.lang.Thread.run(Thread.java:745)
 null
   java.nio.channels.ClosedChannelException
   at org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:1622)
   at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:104)
   at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
   at java.io.DataOutputStream.write(DataOutputStream.java:107)
   at org.apache.accumulo.core.file.rfile.bcfile.SimpleBufferedOutputStream.flushBuffer(SimpleBufferedOutputStream.java:39)
   at org.apache.accumulo.core.file.rfile.bcfile.SimpleBufferedOutputStream.flush(SimpleBufferedOutputStream.java:68)
   at org.apache.hadoop.io.compress.CompressionOutputStream.flush(CompressionOutputStream.java:69)
   at org.apache.accumulo.core.file.rfile.bcfile.Compression$FinishOnFlushCompressionStream.flush(Compression.java:66)
   at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:141)
   at org.apache.accumulo.core.file.rfile.bcfile.BCFile$Writer$WBlockState.finish(BCFile.java:233)
   at org.apache.accumulo.core.file.rfile.bcfile.BCFile$Writer$BlockAppender.close(BCFile.java:320)
   at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$BlockWrite.close(CachableBlockFile.java:121)
   at org.apache.accumulo.core.file.rfile.RFile$Writer.closeBlock(RFile.java:398)
   at org.apache.accumulo.core.file.rfile.RFile$Writer.append(RFile.java:382)
   at org.apache.accumulo.tserver.tablet.Compactor.compactLocalityGroup(Compactor.java:356)
   at org.apache.accumulo.tserver.tablet.Compactor.call(Compactor.java:214)
   at org.apache.accumulo.tserver.tablet.Tablet._majorCompact(Tablet.java:1981)
   at org.apache.accumulo.tserver.tablet.Tablet.majorCompact(Tablet.java:2098)
   at org.apache.accumulo.tserver.tablet.CompactionRunner.run(CompactionRunner.java:44)
   at org.apache.htrace.wrappers.TraceRunnable.run(TraceRunnable.java:57)
   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
   at java.lang.Thread.run(Thread.java:745)
 Filesystem closed
   

[jira] [Commented] (ACCUMULO-3924) NPE running randomwalk

2015-06-26 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/ACCUMULO-3924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14603652#comment-14603652
 ] 

Josh Elser commented on ACCUMULO-3924:
--

ACCUMULO-3871 broke it. module.xsd was moved back to src/test/resources.
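As a sketch of the failure mode (the resource path below is hypothetical, not the actual randomwalk lookup): when an XSD is loaded as a classpath resource and the file has been moved off the classpath, `Class.getResource` returns null, and passing that null `URL` to `SchemaFactory.newSchema(URL)` throws the NullPointerException seen in the stack trace.

```java
import java.net.URL;
import javax.xml.XMLConstants;
import javax.xml.validation.SchemaFactory;

public class SchemaLoad {
  public static void main(String[] args) throws Exception {
    // Hypothetical resource path; getResource returns null if the XSD
    // is not on the classpath (the situation the relocation created).
    URL xsd = SchemaLoad.class.getResource("/randomwalk/module.xsd");
    if (xsd == null) {
      // Without this guard, the next line would throw the NPE from
      // inside SchemaFactory.newSchema, as in the report above.
      System.out.println("module.xsd missing from classpath");
      return;
    }
    SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI).newSchema(xsd);
    System.out.println("schema loaded");
  }
}
```

Guarding the lookup (or keeping the XSD under src/test/resources so it ships in the test jar) avoids the opaque NPE.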

 NPE running randomwalk
 --

 Key: ACCUMULO-3924
 URL: https://issues.apache.org/jira/browse/ACCUMULO-3924
 Project: Accumulo
  Issue Type: Bug
  Components: test
Reporter: Josh Elser
 Fix For: 1.8.0


 {noformat}
 java.lang.NullPointerException
 at javax.xml.validation.SchemaFactory.newSchema(SchemaFactory.java:670)
 at org.apache.accumulo.test.randomwalk.Module.loadFromXml(Module.java:496)
 at org.apache.accumulo.test.randomwalk.Module.init(Module.java:179)
 at org.apache.accumulo.test.randomwalk.Framework.getNode(Framework.java:84)
 at org.apache.accumulo.test.randomwalk.Framework.run(Framework.java:58)
 at org.apache.accumulo.test.randomwalk.Framework.main(Framework.java:119)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:497)
 at org.apache.accumulo.start.Main$2.run(Main.java:130)
 at java.lang.Thread.run(Thread.java:745)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Accumulo-1.6-Integration-Tests - Build # 427 - Unstable!

2015-06-26 Thread elserj
Accumulo-1.6-Integration-Tests - Build # 427 - Unstable:

Check console output at 
https://secure.penguinsinabox.com/jenkins/job/Accumulo-1.6-Integration-Tests/427/
 to view the results.

Accumulo-Master-Integration-Tests - Build # 331 - Aborted!

2015-06-26 Thread elserj
Accumulo-Master-Integration-Tests - Build # 331 - Aborted:

Check console output at 
https://secure.penguinsinabox.com/jenkins/job/Accumulo-Master-Integration-Tests/331/
 to view the results.