[jira] [Commented] (YARN-6214) NullPointer Exception while querying timeline server API

2020-03-10 Thread Benjamin Kim (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-6214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17056315#comment-17056315
 ] 

Benjamin Kim commented on YARN-6214:


The root cause is that if one of the apps is in INIT state, some of its properties, 
such as the application type, are set to null. So if you make the API call with the 
`state=FINISHED` http parameter, you won't hit this issue. 

However, we probably need better error-handling logic.
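
A minimal sketch of the kind of null guard that would help; the class and 
method names below are assumptions based on the stack trace, not the actual 
Hadoop source:
{code:java}
// Hypothetical sketch only: a null-safe application-type check of the kind
// WebServices#getApps would need. Names are illustrative, not Hadoop code.
import java.util.Locale;
import java.util.Set;

public final class AppTypeMatcher {
  /** Returns true if an app should pass the applicationTypes filter. */
  static boolean matchesType(Set<String> requestedTypes, String appType) {
    if (requestedTypes == null || requestedTypes.isEmpty()) {
      return true;   // no type filter requested
    }
    if (appType == null) {
      return false;  // an app still in INIT state has no type yet; skip it
    }
    return requestedTypes.contains(appType.trim().toLowerCase(Locale.ROOT));
  }
}
{code}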

 

> NullPointer Exception while querying timeline server API
> 
>
> Key: YARN-6214
> URL: https://issues.apache.org/jira/browse/YARN-6214
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 2.7.1
>Reporter: Ravi Teja Chilukuri
>Priority: Major
>
> The apps API works fine and gives all applications, including MapReduce and Tez:
> http://<timeline-server>:8188/ws/v1/applicationhistory/apps
> But when queried with application types via these APIs, it fails with a 
> NullPointerException:
> http://<timeline-server>:8188/ws/v1/applicationhistory/apps?applicationTypes=TEZ
> http://<timeline-server>:8188/ws/v1/applicationhistory/apps?applicationTypes=MAPREDUCE
> NullPointerException: java.lang.NullPointerException
> We are blocked on this issue, as we are not able to run analytics on the Tez job 
> counters for the prod jobs. 
> Timeline Logs:
> 2017-02-22 11:47:57,183 WARN  webapp.GenericExceptionHandler 
> (GenericExceptionHandler.java:toResponse(98)) - INTERNAL_SERVER_ERROR
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.webapp.WebServices.getApps(WebServices.java:195)
>   at 
> org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.AHSWebServices.getApps(AHSWebServices.java:96)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:483)
>   at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
>   at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
>   at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
>   at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
> Complete stacktrace:
> http://pastebin.com/bRgxVabf






[jira] [Commented] (YARN-6214) NullPointer Exception while querying timeline server API

2020-02-27 Thread Benjamin Kim (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-6214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17047125#comment-17047125
 ] 

Benjamin Kim commented on YARN-6214:


It happened to me:

 
{code:java}
{"exception": "NullPointerException","javaClassName": 
"java.lang.NullPointerException"}{code}
I'm on 2.8.4, and as Jason noted, it happens while checking app types.
{code:java}
2020-02-28 09:52:20,041 WARN org.apache.hadoop.yarn.webapp.GenericExceptionHandler (2070044461@qtp-1305004711-22): INTERNAL_SERVER_ERROR
java.lang.NullPointerException
    at org.apache.hadoop.yarn.server.webapp.WebServices.getApps(WebServices.java:199)
    at org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.AHSWebServices.getApps(AHSWebServices.java:96)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
    at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
    at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
    at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
    at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
    at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
    at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
    at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
    at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
    at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
    at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
    at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
    at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
    at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
    at com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:886)
    at com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:834)
    at com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:795)
    at com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:163)
    at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58)
    at com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:118)
    at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:113)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:644)
    at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.doFilter(DelegationTokenAuthenticationFilter.java:294)
    at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:592)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.apache.hadoop.security.http.CrossOriginFilter.doFilter(CrossOriginFilter.java:95)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1353)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
    at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
{code}

[jira] [Resolved] (RANGER-345) enable-agent.sh isn't a file

2015-04-05 Thread Benjamin Kim (JIRA)

 [ 
https://issues.apache.org/jira/browse/RANGER-345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Kim resolved RANGER-345.
-
Resolution: Not A Problem

I tried building Ranger on a different network, and it works.

It was a network issue, so I'm closing this one.

 enable-agent.sh isn't a file
 

 Key: RANGER-345
 URL: https://issues.apache.org/jira/browse/RANGER-345
 Project: Ranger
  Issue Type: Bug
  Components: admin
Affects Versions: 0.4.0
 Environment: centos 6.6 
 jdk1.7.0_71
 maven 3.3.1
Reporter: Benjamin Kim

 I downloaded the tagged version of Ranger 0.4 from GitHub.
 I ran this command:
 mvn -DskipTests clean compile package install assembly:assembly -e
 I get this result when building the Security Admin Web Application module:
 [ERROR] Failed to execute goal 
 org.apache.maven.plugins:maven-assembly-plugin:2.2-beta-5:assembly 
 (default-cli) on project security-admin-web: Failed to create assembly: Error 
 adding file to archive: 
 /root/incubator-ranger-ranger-0.4/security-admin/agents-common/scripts/enable-agent.sh
  isn't a file. - [Help 1]
 org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
 goal org.apache.maven.plugins:maven-assembly-plugin:2.2-beta-5:assembly 
 (default-cli) on project security-admin-web: Failed to create assembly: Error 
 adding file to archive: 
 /root/incubator-ranger-ranger-0.4/security-admin/agents-common/scripts/enable-agent.sh
  isn't a file.
   at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:216)
   at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
   at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
   at 
 org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
   at 
 org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
   at 
 org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
   at 
 org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
   at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:307)
   at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:193)
   at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:106)
   at org.apache.maven.cli.MavenCli.execute(MavenCli.java:862)
   at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:286)
   at org.apache.maven.cli.MavenCli.main(MavenCli.java:197)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 
 org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
   at 
 org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
   at 
 org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
   at 
 org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
 Caused by: org.apache.maven.plugin.MojoExecutionException: Failed to create 
 assembly: Error adding file to archive: 
 /root/incubator-ranger-ranger-0.4/security-admin/agents-common/scripts/enable-agent.sh
  isn't a file.
   at 
 org.apache.maven.plugin.assembly.mojos.AbstractAssemblyMojo.execute(AbstractAssemblyMojo.java:429)
   at 
 org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134)
   at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
   ... 20 more
 Caused by: org.apache.maven.plugin.assembly.archive.ArchiveCreationException: 
 Error adding file to archive: 
 /root/incubator-ranger-ranger-0.4/security-admin/agents-common/scripts/enable-agent.sh
  isn't a file.
   at 
 org.apache.maven.plugin.assembly.archive.phase.FileItemAssemblyPhase.execute(FileItemAssemblyPhase.java:126)
   at 
 org.apache.maven.plugin.assembly.archive.DefaultAssemblyArchiver.createArchive(DefaultAssemblyArchiver.java:190)
   at 
 org.apache.maven.plugin.assembly.mojos.AbstractAssemblyMojo.execute(AbstractAssemblyMojo.java:378)
   ... 22 more
 Caused by: org.codehaus.plexus.archiver.ArchiverException: 
 /root/incubator-ranger-ranger-0.4/security-admin/agents-common/scripts/enable-agent.sh
  isn't a file.
   at 
 org.codehaus.plexus.archiver.AbstractArchiver.addFile(AbstractArchiver.java:348)
   at 
 org.apache.maven.plugin.assembly.archive.archiver.AssemblyProxyArchiver.addFile(AssemblyProxyArchiver.java:448)
   at 
 

[jira] [Created] (RANGER-345) enable-agent.sh isn't a file

2015-03-27 Thread Benjamin Kim (JIRA)
Benjamin Kim created RANGER-345:
---

 Summary: enable-agent.sh isn't a file
 Key: RANGER-345
 URL: https://issues.apache.org/jira/browse/RANGER-345
 Project: Ranger
  Issue Type: Bug
  Components: admin
Affects Versions: 0.4.0
 Environment: centos 6.6 
jdk1.7.0_71
maven 3.3.1
Reporter: Benjamin Kim


I downloaded the tagged version of Ranger 0.4 from GitHub.

I ran this command:
mvn -DskipTests clean compile package install assembly:assembly -e

I get this result when building the Security Admin Web Application module:

[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-assembly-plugin:2.2-beta-5:assembly 
(default-cli) on project security-admin-web: Failed to create assembly: Error 
adding file to archive: 
/root/incubator-ranger-ranger-0.4/security-admin/agents-common/scripts/enable-agent.sh
 isn't a file. - [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal 
org.apache.maven.plugins:maven-assembly-plugin:2.2-beta-5:assembly 
(default-cli) on project security-admin-web: Failed to create assembly: Error 
adding file to archive: 
/root/incubator-ranger-ranger-0.4/security-admin/agents-common/scripts/enable-agent.sh
 isn't a file.
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:216)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
at 
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
at 
org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:307)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:193)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:106)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:862)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:286)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:197)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
Caused by: org.apache.maven.plugin.MojoExecutionException: Failed to create 
assembly: Error adding file to archive: 
/root/incubator-ranger-ranger-0.4/security-admin/agents-common/scripts/enable-agent.sh
 isn't a file.
at 
org.apache.maven.plugin.assembly.mojos.AbstractAssemblyMojo.execute(AbstractAssemblyMojo.java:429)
at 
org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
... 20 more
Caused by: org.apache.maven.plugin.assembly.archive.ArchiveCreationException: 
Error adding file to archive: 
/root/incubator-ranger-ranger-0.4/security-admin/agents-common/scripts/enable-agent.sh
 isn't a file.
at 
org.apache.maven.plugin.assembly.archive.phase.FileItemAssemblyPhase.execute(FileItemAssemblyPhase.java:126)
at 
org.apache.maven.plugin.assembly.archive.DefaultAssemblyArchiver.createArchive(DefaultAssemblyArchiver.java:190)
at 
org.apache.maven.plugin.assembly.mojos.AbstractAssemblyMojo.execute(AbstractAssemblyMojo.java:378)
... 22 more
Caused by: org.codehaus.plexus.archiver.ArchiverException: 
/root/incubator-ranger-ranger-0.4/security-admin/agents-common/scripts/enable-agent.sh
 isn't a file.
at 
org.codehaus.plexus.archiver.AbstractArchiver.addFile(AbstractArchiver.java:348)
at 
org.apache.maven.plugin.assembly.archive.archiver.AssemblyProxyArchiver.addFile(AssemblyProxyArchiver.java:448)
at 
org.apache.maven.plugin.assembly.archive.phase.FileItemAssemblyPhase.execute(FileItemAssemblyPhase.java:122)
... 24 more
[ERROR] 
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read 

[jira] [Updated] (MAPREDUCE-4718) MapReduce fails If I pass a parameter as a S3 folder

2014-04-21 Thread Benjamin Kim (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Kim updated MAPREDUCE-4718:


Target Version/s: 0.23.3, 1.0.3  (was: 1.0.3, 0.23.3, 2.0.0-alpha, 
2.0.1-alpha)

 MapReduce fails If I pass a parameter as a S3 folder
 

 Key: MAPREDUCE-4718
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4718
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: job submission
Affects Versions: 1.0.0, 1.0.3
 Environment: Hadoop with default configurations
Reporter: Benjamin Kim

 I'm running a wordcount MR as follows:
 hadoop jar WordCount.jar wordcount.WordCountDriver 
 s3n://bucket/wordcount/input s3n://bucket/wordcount/output
  
 s3n://bucket/wordcount/input is an S3 object that contains other input files.
 However, I get the following NPE error:
 12/10/02 18:56:23 INFO mapred.JobClient:  map 0% reduce 0%
 12/10/02 18:56:54 INFO mapred.JobClient:  map 50% reduce 0%
 12/10/02 18:56:56 INFO mapred.JobClient: Task Id : 
 attempt_201210021853_0001_m_01_0, Status : FAILED
 java.lang.NullPointerException
 at 
 org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.close(NativeS3FileSystem.java:106)
 at java.io.BufferedInputStream.close(BufferedInputStream.java:451)
 at java.io.FilterInputStream.close(FilterInputStream.java:155)
 at org.apache.hadoop.util.LineReader.close(LineReader.java:83)
 at 
 org.apache.hadoop.mapreduce.lib.input.LineRecordReader.close(LineRecordReader.java:144)
 at 
 org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.close(MapTask.java:497)
 at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:765)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
 at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
 at org.apache.hadoop.mapred.Child.main(Child.java:249)
 MR runs fine if I specify a more specific input path such as 
 s3n://bucket/wordcount/input/file.txt
 MR fails if I pass an S3 folder as a parameter.
 In summary,
 This works
  hadoop jar ./hadoop-examples-1.0.3.jar wordcount 
 /user/hadoop/wordcount/input/ s3n://bucket/wordcount/output/
 This doesn't work
  hadoop jar ./hadoop-examples-1.0.3.jar wordcount 
 s3n://bucket/wordcount/input/ s3n://bucket/wordcount/output/
 (both input paths are directories)





[jira] [Commented] (MAPREDUCE-4718) MapReduce fails If I pass a parameter as a S3 folder

2014-04-20 Thread Benjamin Kim (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13975383#comment-13975383
 ] 

Benjamin Kim commented on MAPREDUCE-4718:
-

Hi Chen,
I tested it with CDH 4.5.0 (hadoop-2.0.0+1518) and it doesn't seem to have the same 
problem. I'm able to successfully run a wordcount MRv1 job with the s3n protocol.
So is it safe to say this issue is fixed in the other 2.x.x versions?

 MapReduce fails If I pass a parameter as a S3 folder
 

 Key: MAPREDUCE-4718
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4718
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: job submission
Affects Versions: 1.0.0, 1.0.3
 Environment: Hadoop with default configurations
Reporter: Benjamin Kim

 I'm running a wordcount MR as follows:
 hadoop jar WordCount.jar wordcount.WordCountDriver 
 s3n://bucket/wordcount/input s3n://bucket/wordcount/output
  
 s3n://bucket/wordcount/input is an S3 object that contains other input files.
 However, I get the following NPE error:
 12/10/02 18:56:23 INFO mapred.JobClient:  map 0% reduce 0%
 12/10/02 18:56:54 INFO mapred.JobClient:  map 50% reduce 0%
 12/10/02 18:56:56 INFO mapred.JobClient: Task Id : 
 attempt_201210021853_0001_m_01_0, Status : FAILED
 java.lang.NullPointerException
 at 
 org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.close(NativeS3FileSystem.java:106)
 at java.io.BufferedInputStream.close(BufferedInputStream.java:451)
 at java.io.FilterInputStream.close(FilterInputStream.java:155)
 at org.apache.hadoop.util.LineReader.close(LineReader.java:83)
 at 
 org.apache.hadoop.mapreduce.lib.input.LineRecordReader.close(LineRecordReader.java:144)
 at 
 org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.close(MapTask.java:497)
 at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:765)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
 at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
 at org.apache.hadoop.mapred.Child.main(Child.java:249)
 MR runs fine if I specify a more specific input path such as 
 s3n://bucket/wordcount/input/file.txt
 MR fails if I pass an S3 folder as a parameter.
 In summary,
 This works
  hadoop jar ./hadoop-examples-1.0.3.jar wordcount 
 /user/hadoop/wordcount/input/ s3n://bucket/wordcount/output/
 This doesn't work
  hadoop jar ./hadoop-examples-1.0.3.jar wordcount 
 s3n://bucket/wordcount/input/ s3n://bucket/wordcount/output/
 (both input paths are directories)





[jira] [Created] (HIVE-6230) Hive UDAF with subquery runs all logic on reducers

2014-01-19 Thread Benjamin Kim (JIRA)
Benjamin Kim created HIVE-6230:
--

 Summary: Hive UDAF with subquery runs all logic on reducers
 Key: HIVE-6230
 URL: https://issues.apache.org/jira/browse/HIVE-6230
 Project: Hive
  Issue Type: Bug
  Components: UDF
Affects Versions: 0.10.0
Reporter: Benjamin Kim


When I have a subquery in a query using my custom-built UDAF, iterate, 
terminatePartial, merge, and terminate all run on the reducers only, whereas iterate 
and terminatePartial should run on the mappers.

Now I don't know if this is by design, but this behavior leads to very long 
execution times on the reducers and creates large temporary files from them.

This happened to me with SimpleUDAF. I haven't tested it with GenericUDAF.

Here is an example:
SELECT MyUDAF(col1) FROM (
  SELECT * FROM test) t
GROUP BY col2
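
One thing worth checking (an assumption on my part, not a verified fix) is 
whether map-side partial aggregation is enabled for this plan. hive.map.aggr is 
a standard Hive setting; whether it moves iterate/terminatePartial back to the 
mappers for this particular query is unverified:
{code}
-- Sketch of a possible workaround, under the assumption that the subquery
-- plan is running without map-side partial aggregation.
SET hive.map.aggr=true;

SELECT MyUDAF(col1)
FROM (SELECT * FROM test) t
GROUP BY col2;
{code}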






[jira] [Updated] (HBASE-6470) SingleColumnValueFilter with private fields and methods

2012-11-13 Thread Benjamin Kim (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Kim updated HBASE-6470:


Attachment: SingleColumnValueFilter_HBASE_6470-trunk.patch

 SingleColumnValueFilter with private fields and methods
 ---

 Key: HBASE-6470
 URL: https://issues.apache.org/jira/browse/HBASE-6470
 Project: HBase
  Issue Type: Improvement
  Components: Filters
Affects Versions: 0.94.0
Reporter: Benjamin Kim
Assignee: Benjamin Kim
  Labels: patch
 Fix For: 0.96.0

 Attachments: SingleColumnValueFilter_HBASE_6470-trunk.patch


 Why are most fields and methods declared private in SingleColumnValueFilter?
 I'm trying to extend the functions of the SingleColumnValueFilter to support 
 complex column types such as JSON, Array, CSV, etc.
 But inheriting from SingleColumnValueFilter doesn't give any benefit, since I 
 have to rewrite the code. 
 I think all private fields and methods could be made protected.



[jira] [Updated] (HBASE-6470) SingleColumnValueFilter with private fields and methods

2012-11-13 Thread Benjamin Kim (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Kim updated HBASE-6470:


Status: Open  (was: Patch Available)

 SingleColumnValueFilter with private fields and methods
 ---

 Key: HBASE-6470
 URL: https://issues.apache.org/jira/browse/HBASE-6470
 Project: HBase
  Issue Type: Improvement
  Components: Filters
Affects Versions: 0.94.0
Reporter: Benjamin Kim
Assignee: Benjamin Kim
  Labels: patch
 Fix For: 0.96.0

 Attachments: SingleColumnValueFilter_HBASE_6470-trunk.patch


 Why are most fields and methods declared private in SingleColumnValueFilter?
 I'm trying to extend the functions of the SingleColumnValueFilter to support 
 complex column types such as JSON, Array, CSV, etc.
 But inheriting from SingleColumnValueFilter doesn't give any benefit, since I 
 have to rewrite the code. 
 I think all private fields and methods could be made protected.



[jira] [Commented] (HBASE-6470) SingleColumnValueFilter with private fields and methods

2012-11-13 Thread Benjamin Kim (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13496036#comment-13496036
 ] 

Benjamin Kim commented on HBASE-6470:
-

Oops, I just did.

 SingleColumnValueFilter with private fields and methods
 ---

 Key: HBASE-6470
 URL: https://issues.apache.org/jira/browse/HBASE-6470
 Project: HBase
  Issue Type: Improvement
  Components: Filters
Affects Versions: 0.94.0
Reporter: Benjamin Kim
Assignee: Benjamin Kim
  Labels: patch
 Fix For: 0.96.0

 Attachments: SingleColumnValueFilter_HBASE_6470-trunk.patch


 Why are most fields and methods declared private in SingleColumnValueFilter?
 I'm trying to extend the functions of the SingleColumnValueFilter to support 
 complex column types such as JSON, Array, CSV, etc.
 But inheriting from SingleColumnValueFilter doesn't give any benefit, since I 
 have to rewrite the code. 
 I think all private fields and methods could be made protected.



[jira] [Updated] (HBASE-6470) SingleColumnValueFilter with private fields and methods

2012-11-12 Thread Benjamin Kim (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Kim updated HBASE-6470:


Fix Version/s: 0.96.0
 Assignee: Benjamin Kim
 Release Note: Changes private fields of SingleColumnValueFilter to 
protected for more subtle filtering of column values
   Status: Patch Available  (was: Open)

Changes private fields of SingleColumnValueFilter to protected for more subtle 
filtering of column values

 SingleColumnValueFilter with private fields and methods
 ---

 Key: HBASE-6470
 URL: https://issues.apache.org/jira/browse/HBASE-6470
 Project: HBase
  Issue Type: Improvement
  Components: Filters
Affects Versions: 0.94.0
Reporter: Benjamin Kim
Assignee: Benjamin Kim
  Labels: patch
 Fix For: 0.96.0


 Why are most fields and methods declared private in SingleColumnValueFilter?
 I'm trying to extend the functions of the SingleColumnValueFilter to support 
 complex column types such as JSON, Array, CSV, etc.
 But inheriting from SingleColumnValueFilter doesn't give any benefit, since I 
 have to rewrite the code. 
 I think all private fields and methods could be made protected.



[jira] [Commented] (HBASE-6470) SingleColumnValueFilter with private fields and methods

2012-11-12 Thread Benjamin Kim (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495291#comment-13495291
 ] 

Benjamin Kim commented on HBASE-6470:
-

Submitted a patch; just changed all private fields to protected.
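
To illustrate what this enables, here is a hypothetical subclass sketch against 
the 0.94-era API. JsonColumnValueFilter and extractJsonField are illustrations 
only, and the sketch assumes the patch makes columnFamily, columnQualifier, and 
comparator protected:
{code:java}
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.filter.CompareFilter;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;

// Hypothetical subclass; illustration only, not part of the patch.
public class JsonColumnValueFilter extends SingleColumnValueFilter {

  public JsonColumnValueFilter(byte[] family, byte[] qualifier,
      CompareFilter.CompareOp op, byte[] value) {
    super(family, qualifier, op, value);
  }

  @Override
  public ReturnCode filterKeyValue(KeyValue kv) {
    // With protected access, the subclass reuses the inherited column
    // coordinates instead of re-declaring private copies of them.
    if (!kv.matchingColumn(this.columnFamily, this.columnQualifier)) {
      return ReturnCode.INCLUDE;
    }
    // Pre-process the cell value (e.g. pull one field out of a JSON blob),
    // then reuse the inherited comparator (EQUAL case shown only).
    byte[] field = extractJsonField(kv.getValue());
    return this.comparator.compareTo(field) == 0
        ? ReturnCode.INCLUDE : ReturnCode.NEXT_ROW;
  }

  // Placeholder for real JSON extraction; illustration only.
  private static byte[] extractJsonField(byte[] json) {
    return json;
  }
}
{code}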

 SingleColumnValueFilter with private fields and methods
 ---

 Key: HBASE-6470
 URL: https://issues.apache.org/jira/browse/HBASE-6470
 Project: HBase
  Issue Type: Improvement
  Components: Filters
Affects Versions: 0.94.0
Reporter: Benjamin Kim
Assignee: Benjamin Kim
  Labels: patch
 Fix For: 0.96.0


 Why are most fields and methods declared private in SingleColumnValueFilter?
 I'm trying to extend the functions of the SingleColumnValueFilter to support 
 complex column types such as JSON, Array, CSV, etc.
 But inheriting from SingleColumnValueFilter doesn't give any benefit, since I 
 have to rewrite the code. 
 I think all private fields and methods could be made protected.



[jira] [Commented] (HBASE-6470) SingleColumnValueFilter with private fields and methods

2012-10-29 Thread Benjamin Kim (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13486073#comment-13486073
 ] 

Benjamin Kim commented on HBASE-6470:
-

I'll come back to this first thing tomorrow and create a patch.

 SingleColumnValueFilter with private fields and methods
 ---

 Key: HBASE-6470
 URL: https://issues.apache.org/jira/browse/HBASE-6470
 Project: HBase
  Issue Type: Improvement
  Components: Filters
Affects Versions: 0.94.0
Reporter: Benjamin Kim
  Labels: patch

 Why are most fields and methods declared private in SingleColumnValueFilter?
 I'm trying to extend the functions of the SingleColumnValueFilter to support 
 complex column types such as JSON, Array, CSV, etc.
 But inheriting from SingleColumnValueFilter doesn't give any benefit, since I 
 have to rewrite the code. 
 I think all private fields and methods could be made protected.



[jira] [Created] (MAPREDUCE-4718) MapReduce fails If I pass a parameter as a S3 folder

2012-10-10 Thread Benjamin Kim (JIRA)
Benjamin Kim created MAPREDUCE-4718:
---

 Summary: MapReduce fails If I pass a parameter as a S3 folder
 Key: MAPREDUCE-4718
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4718
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: job submission
Affects Versions: 1.0.3, 1.0.0
 Environment: Hadoop with default configurations
Reporter: Benjamin Kim


I'm running a wordcount MR as follows:

hadoop jar WordCount.jar wordcount.WordCountDriver s3n://bucket/wordcount/input 
s3n://bucket/wordcount/output
 
s3n://bucket/wordcount/input is an S3 object that contains other input files.

However, I get the following NPE error:

12/10/02 18:56:23 INFO mapred.JobClient:  map 0% reduce 0%
12/10/02 18:56:54 INFO mapred.JobClient:  map 50% reduce 0%
12/10/02 18:56:56 INFO mapred.JobClient: Task Id : 
attempt_201210021853_0001_m_01_0, Status : FAILED
java.lang.NullPointerException
at 
org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.close(NativeS3FileSystem.java:106)
at java.io.BufferedInputStream.close(BufferedInputStream.java:451)
at java.io.FilterInputStream.close(FilterInputStream.java:155)
at org.apache.hadoop.util.LineReader.close(LineReader.java:83)
at 
org.apache.hadoop.mapreduce.lib.input.LineRecordReader.close(LineRecordReader.java:144)
at 
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.close(MapTask.java:497)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:765)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.mapred.Child.main(Child.java:249)

MR runs fine if I specify a more specific input path such as 
s3n://bucket/wordcount/input/file.txt

MR fails if I pass an S3 folder as a parameter.


In summary,
This works
 hadoop jar ./hadoop-examples-1.0.3.jar wordcount /user/hadoop/wordcount/input/ 
s3n://bucket/wordcount/output/

This doesn't work
 hadoop jar ./hadoop-examples-1.0.3.jar wordcount s3n://bucket/wordcount/input/ 
s3n://bucket/wordcount/output/

(both input paths are directories)
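
A defensive close() along these lines would avoid the NPE. This is a sketch 
only, based on the 1.0.x stack trace; the wrapped-stream field is an 
assumption, and this is not the actual fix:
{code:java}
// Hypothetical sketch, not the Hadoop patch: a stream wrapper whose close()
// tolerates a null underlying stream, which is what the trace suggests
// happens when the input path is an S3 "folder" key.
import java.io.IOException;
import java.io.InputStream;

class NullSafeS3InputStream extends InputStream {
  private InputStream in;  // may be null for folder keys (assumption)

  NullSafeS3InputStream(InputStream in) { this.in = in; }

  @Override
  public int read() throws IOException {
    return in == null ? -1 : in.read();  // treat a missing stream as EOF
  }

  @Override
  public synchronized void close() throws IOException {
    if (in != null) {  // guard against the NPE seen in close()
      in.close();
      in = null;       // repeated close() calls stay safe
    }
  }
}
{code}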





[jira] [Created] (HBASE-6470) SingleColumnValueFilter with private fields and methods

2012-07-28 Thread Benjamin Kim (JIRA)
Benjamin Kim created HBASE-6470:
---

 Summary: SingleColumnValueFilter with private fields and methods
 Key: HBASE-6470
 URL: https://issues.apache.org/jira/browse/HBASE-6470
 Project: HBase
  Issue Type: Improvement
  Components: filters
Affects Versions: 0.94.0
Reporter: Benjamin Kim
 Fix For: 0.94.0


Why are most fields and methods declared private in SingleColumnValueFilter?

I'm trying to extend the functions of the SingleColumnValueFilter to support 
complex column types such as JSON, Array, CSV, etc.

But inheriting from SingleColumnValueFilter doesn't give any benefit, since I have 
to rewrite the code. 

I think all private fields and methods could be made protected.






[jira] [Updated] (HBASE-6288) In hbase-daemons.sh, description of the default backup-master file path is wrong

2012-07-13 Thread Benjamin Kim (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Kim updated HBASE-6288:


Attachment: HBASE-6288-trunk.patch
HBASE-6288-94.patch
HBASE-6288-92-1.patch
HBASE-6288-92.patch

 In hbase-daemons.sh, description of the default backup-master file path is 
 wrong
 

 Key: HBASE-6288
 URL: https://issues.apache.org/jira/browse/HBASE-6288
 Project: HBase
  Issue Type: Task
  Components: master, scripts, shell
Affects Versions: 0.92.0, 0.92.1, 0.94.0
Reporter: Benjamin Kim
 Attachments: HBASE-6288-92-1.patch, HBASE-6288-92.patch, 
 HBASE-6288-94.patch, HBASE-6288-trunk.patch


 In hbase-daemons.sh, description of the default backup-master file path is 
 wrong
 {code}
 #   HBASE_BACKUP_MASTERS File naming remote hosts.
 # Default is ${HADOOP_CONF_DIR}/backup-masters
 {code}
 It says the default backup-masters file path is under HADOOP_CONF_DIR, but 
 shouldn't this be HBASE_CONF_DIR?
 Also, adding the following lines to conf/hbase-env.sh would be helpful:
 {code}
 # File naming hosts on which backup HMaster will run.  
 $HBASE_HOME/conf/backup-masters by default.
 export HBASE_BACKUP_MASTERS=${HBASE_HOME}/conf/backup-masters
 {code}





[jira] [Commented] (HBASE-6288) In hbase-daemons.sh, description of the default backup-master file path is wrong

2012-07-13 Thread Benjamin Kim (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13413921#comment-13413921
 ] 

Benjamin Kim commented on HBASE-6288:
-

It took a while because I was away on vacation. Here are the patches =)

 In hbase-daemons.sh, description of the default backup-master file path is 
 wrong
 

 Key: HBASE-6288
 URL: https://issues.apache.org/jira/browse/HBASE-6288
 Project: HBase
  Issue Type: Task
  Components: master, scripts, shell
Affects Versions: 0.92.0, 0.92.1, 0.94.0
Reporter: Benjamin Kim
 Attachments: HBASE-6288-92-1.patch, HBASE-6288-92.patch, 
 HBASE-6288-94.patch, HBASE-6288-trunk.patch


 In hbase-daemons.sh, description of the default backup-master file path is 
 wrong
 {code}
 #   HBASE_BACKUP_MASTERS File naming remote hosts.
 # Default is ${HADOOP_CONF_DIR}/backup-masters
 {code}
 It says the default backup-masters file path is under HADOOP_CONF_DIR, but 
 shouldn't this be HBASE_CONF_DIR?
 Also, adding the following lines to conf/hbase-env.sh would be helpful:
 {code}
 # File naming hosts on which backup HMaster will run.  
 $HBASE_HOME/conf/backup-masters by default.
 export HBASE_BACKUP_MASTERS=${HBASE_HOME}/conf/backup-masters
 {code}





[jira] [Created] (HBASE-6288) In hbase-daemons.sh, description of the default backup-master file path is wrong

2012-06-27 Thread Benjamin Kim (JIRA)
Benjamin Kim created HBASE-6288:
---

 Summary: In hbase-daemons.sh, description of the default 
backup-master file path is wrong
 Key: HBASE-6288
 URL: https://issues.apache.org/jira/browse/HBASE-6288
 Project: HBase
  Issue Type: Task
  Components: master, scripts, shell
Affects Versions: 0.94.0, 0.92.1, 0.92.0
Reporter: Benjamin Kim


In hbase-daemons.sh, description of the default backup-master file path is wrong

{code}
#   HBASE_BACKUP_MASTERS File naming remote hosts.
# Default is $\{HADOOP_CONF_DIR\}/backup-masters
{code}

It says the default backup-masters file path is under HADOOP_CONF_DIR, but 
shouldn't this be HBASE_CONF_DIR?

Also, adding the following lines to conf/hbase-env.sh would be helpful:
{code}
# File naming hosts on which backup HMaster will run.  
$HBASE_HOME/conf/backup-masters by default.
export HBASE_BACKUP_MASTERS=$\{HBASE_HOME\}/conf/backup-masters
{code}
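
For context, the backup-masters file itself is just a list of hostnames, one 
backup HMaster host per line. A small example (the hostnames are placeholders):
{code}
# ${HBASE_CONF_DIR}/backup-masters, one backup HMaster host per line
# (example hostnames only)
master2.example.com
master3.example.com
{code}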







[jira] [Updated] (HBASE-6288) In hbase-daemons.sh, description of the default backup-master file path is wrong

2012-06-27 Thread Benjamin Kim (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Kim updated HBASE-6288:


Description: 
In hbase-daemons.sh, description of the default backup-master file path is wrong

{code}
#   HBASE_BACKUP_MASTERS File naming remote hosts.
# Default is ${HADOOP_CONF_DIR}/backup-masters
{code}

It says the default backup-masters file path is under HADOOP_CONF_DIR, but 
shouldn't this be HBASE_CONF_DIR?

Also, adding the following lines to conf/hbase-env.sh would be helpful:
{code}
# File naming hosts on which backup HMaster will run.  
$HBASE_HOME/conf/backup-masters by default.
export HBASE_BACKUP_MASTERS=${HBASE_HOME}/conf/backup-masters
{code}



  was:
In hbase-daemons.sh, description of the default backup-master file path is wrong

{code}
#   HBASE_BACKUP_MASTERS File naming remote hosts.
# Default is $\{HADOOP_CONF_DIR\}/backup-masters
{code}

It says the default backup-masters file path is under HADOOP_CONF_DIR, but 
shouldn't this be HBASE_CONF_DIR?

Also, adding the following lines to conf/hbase-env.sh would be helpful:
{code}
# File naming hosts on which backup HMaster will run.  
$HBASE_HOME/conf/backup-masters by default.
export HBASE_BACKUP_MASTERS=$\{HBASE_HOME\}/conf/backup-masters
{code}




 In hbase-daemons.sh, description of the default backup-master file path is 
 wrong
 

 Key: HBASE-6288
 URL: https://issues.apache.org/jira/browse/HBASE-6288
 Project: HBase
  Issue Type: Task
  Components: master, scripts, shell
Affects Versions: 0.92.0, 0.92.1, 0.94.0
Reporter: Benjamin Kim

 In hbase-daemons.sh, description of the default backup-master file path is 
 wrong
 {code}
 #   HBASE_BACKUP_MASTERS File naming remote hosts.
 # Default is ${HADOOP_CONF_DIR}/backup-masters
 {code}
 It says the default backup-masters file path is under HADOOP_CONF_DIR, but 
 shouldn't this be HBASE_CONF_DIR?
 Also, adding the following lines to conf/hbase-env.sh would be helpful:
 {code}
 # File naming hosts on which backup HMaster will run.  
 $HBASE_HOME/conf/backup-masters by default.
 export HBASE_BACKUP_MASTERS=${HBASE_HOME}/conf/backup-masters
 {code}





[jira] [Updated] (HBASE-6132) ColumnCountGetFilter & PageFilter not working with FilterList

2012-05-30 Thread Benjamin Kim (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Kim updated HBASE-6132:


Description: 
Thanks to Anoop and Ramkrishna, here's what we found with FilterList

If I use FilterList to include ColumnCountGetFilter among other filters, the 
returned Result has no KeyValues.

This problem seems to occur when the specified column count is less than the 
actual number of existing columns.

The same problem also arises with PageFilter.

Following is code that reproduces the problem:

{code}
Configuration conf = HBaseConfiguration.create();
HTable table = new HTable(conf, "test");
Get get = new Get(Bytes.toBytes("test1"));
FilterList filterList = new FilterList();
filterList.addFilter(new ColumnCountGetFilter(100));
get.setFilter(filterList);
Result r = table.get(get);
System.out.println(r.size()); // prints zero
{code}

  was:
Thanks to Anoop and Ramkrishna, here's what we found with FilterList

If I use FilterList to include ColumnCountGetFilter among other filters, the 
returned Result has no KeyValues.

This problem seems to occur when the specified column count is less than the 
actual number of existing columns.

The same problem also arises with PageFilter.

Following is code that reproduces the problem:

Configuration conf = HBaseConfiguration.create();
HTable table = new HTable(conf, "test");
Get get = new Get(Bytes.toBytes("test1"));
FilterList filterList = new FilterList();
filterList.addFilter(new ColumnCountGetFilter(100));
get.setFilter(filterList);
Result r = table.get(get);
System.out.println(r.size()); // prints zero


 ColumnCountGetFilter & PageFilter not working with FilterList
 -

 Key: HBASE-6132
 URL: https://issues.apache.org/jira/browse/HBASE-6132
 Project: HBase
  Issue Type: Bug
  Components: filters
Affects Versions: 0.92.0, 0.92.1, 0.94.0
 Environment: Cent OS 5.5 distributed hbase cluster. Hadoop 1.0.0, 
 zookeeper 3.4.3
Reporter: Benjamin Kim

 Thanks to Anoop and Ramkrishna, here's what we found with FilterList
 If I use FilterList to include ColumnCountGetFilter among other filters, the 
 returned Result has no KeyValues.
 This problem seems to occur when the specified column count is less than the 
 actual number of existing columns.
 The same problem also arises with PageFilter.
 Following is code that reproduces the problem:
 {code}
 Configuration conf = HBaseConfiguration.create();
 HTable table = new HTable(conf, "test");
 Get get = new Get(Bytes.toBytes("test1"));
 FilterList filterList = new FilterList();
 filterList.addFilter(new ColumnCountGetFilter(100));
 get.setFilter(filterList);
 Result r = table.get(get);
 System.out.println(r.size()); // prints zero
 {code}





[jira] [Created] (HBASE-6132) ColumnCountGetFilter & PageFilter not working with FilterList

2012-05-29 Thread Benjamin Kim (JIRA)
Benjamin Kim created HBASE-6132:
---

 Summary: ColumnCountGetFilter & PageFilter not working with 
FilterList
 Key: HBASE-6132
 URL: https://issues.apache.org/jira/browse/HBASE-6132
 Project: HBase
  Issue Type: Bug
  Components: filters
Affects Versions: 0.94.0, 0.92.1, 0.92.0
 Environment: Cent OS 5.5 distributed hbase cluster. Hadoop 1.0.0, 
zookeeper 3.4.3
Reporter: Benjamin Kim


Thanks to Anoop and Ramkrishna, here's what we found with FilterList

If I use FilterList to include ColumnCountGetFilter among other filters, the 
returned Result has no KeyValues.

This problem seems to occur when the specified column count is less than the 
actual number of existing columns.

The same problem also arises with PageFilter.

Following is code that reproduces the problem:

Configuration conf = HBaseConfiguration.create();
HTable table = new HTable(conf, "test");
Get get = new Get(Bytes.toBytes("test1"));
FilterList filterList = new FilterList();
filterList.addFilter(new ColumnCountGetFilter(100));
get.setFilter(filterList);
Result r = table.get(get);
System.out.println(r.size()); // prints zero
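
For contrast (this is inferred from the issue title rather than retested), the 
same filter applied directly, without the FilterList wrapper, returns columns 
as expected. Continuing from the snippet above:

Get get2 = new Get(Bytes.toBytes("test1"));
get2.setFilter(new ColumnCountGetFilter(100)); // no FilterList wrapper
Result r2 = table.get(get2);
System.out.println(r2.size()); // expected: the row's columns, up to 100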
