Re: Problem Updating Stats

2016-03-19 Thread Benjamin Kim
, Ben > On Mar 15, 2016, at 11:59 PM, Ankit Singhal <ankitsingha...@gmail.com> wrote: > > Yes it seems to. > Did you get any error related to SYSTEM.STATS when the client connected > for the first time? > > Can you please describe your system.stats table and paste the output

Re: Problem Updating Stats

2016-03-19 Thread Benjamin Kim
> and now you are using old client to connect with it. > "Update statistics" command and guideposts will not work with old client > after upgrading to 4.7, you need to use the new client for such operations. > > On Wed, Mar 16, 2016 at 10:55 PM, Benjamin Kim <bbuil

Re: Problem Updating Stats

2016-03-18 Thread Benjamin Kim
t. > "Update statistics" command and guideposts will not work with old client > after upgradation to 4.7, you need to use the new client for such operations. > > On Wed, Mar 16, 2016 at 10:55 PM, Benjamin Kim <bbuil...@gmail.com > <mailto:bbuil...@gmail.com>> w

Re: HBase Interpreter

2016-03-15 Thread Benjamin Kim
have some time > next week. > > > _________ > From: Benjamin Kim <bbuil...@gmail.com <mailto:bbuil...@gmail.com>> > Sent: Tuesday, February 23, 2016 6:19 PM > Subject: Re: HBase Interpreter > To: <users@zeppelin.incubator.apache.org

Re: Data Export

2016-03-15 Thread Benjamin Kim
uld be reached. You can also take a look and express your > opinion. Community may let us know if I'm missing something. > > Best, > Khalid > > On Sat, Feb 27, 2016 at 2:23 AM, Benjamin Kim <bbuil...@gmail.com > <mailto:bbuil...@gmail.com>> wrote: > I don’t kno

Re: S3 Zip File Loading Advice

2016-03-15 Thread Benjamin Kim
il.com> wrote: > > Could you wrap the ZipInputStream in a List, since a subtype of > TraversableOnce[?] is required? > > case (name, content) => List(new ZipInputStream(content.open)) > > Xinh > > On Wed, Mar 9, 2016 at 7:07 AM, Benjamin Kim <bbuil...@gmail.com
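The suggestion quoted above, wrapping each archive's stream in a ZipInputStream and flattening its entries, can be sketched locally without a Spark cluster. A hypothetical plain-Java illustration (the class, the `readZipLines` name, and the sample data are made up for this sketch; in Spark the same loop would run inside a flatMap over the (name, content) pairs from binaryFiles):

```java
// Hypothetical sketch: open a zipped CSV as a ZipInputStream and flatten
// every entry's lines into one list, mimicking the flatMap step on the list.
import java.io.*;
import java.util.*;
import java.util.zip.*;

public class ZipCsvSketch {
    // Read every line of every entry in a zip archive.
    static List<String> readZipLines(InputStream raw) throws IOException {
        List<String> lines = new ArrayList<>();
        try (ZipInputStream zis = new ZipInputStream(raw)) {
            ZipEntry entry;
            while ((entry = zis.getNextEntry()) != null) {
                // ZipInputStream signals end-of-entry as end-of-stream,
                // so a reader over it stops at the entry boundary.
                BufferedReader r = new BufferedReader(new InputStreamReader(zis));
                String line;
                while ((line = r.readLine()) != null) lines.add(line);
            }
        }
        return lines;
    }

    public static void main(String[] args) throws IOException {
        // Build a tiny zip in memory to stand in for one S3 object.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ZipOutputStream zos = new ZipOutputStream(buf)) {
            zos.putNextEntry(new ZipEntry("events.csv"));
            zos.write("id,value\n1,a\n2,b\n".getBytes());
            zos.closeEntry();
        }
        List<String> lines = readZipLines(new ByteArrayInputStream(buf.toByteArray()));
        System.out.println(lines.size());   // 3 (header plus two rows)
        System.out.println(lines.get(1));   // 1,a
    }
}
```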

Problem Updating Stats

2016-03-15 Thread Benjamin Kim
When trying to run update statistics on an existing table in hbase, I get this error: Update stats: UPDATE STATISTICS "ops_csv" ALL error: ERROR 504 (42703): Undefined column. columnName=REGION_NAME Looks like the metadata information is messed up, i.e. there is no column with name REGION_NAME in this

Re: Spark Job on YARN accessing Hbase Table

2016-03-13 Thread Benjamin Kim
is not in branch-1. > > compressionByName() resides in class with @InterfaceAudience.Private which > got moved in master branch. > > So looks like there is some work to be done for backporting to branch-1 :-) > > On Sun, Mar 13, 2016 at 1:35 PM, Benjamin Kim <bbuil...@gmail

Re: Spark Job on YARN accessing Hbase Table

2016-03-13 Thread Benjamin Kim
th hbase 1.0 > > Cheers > > On Sun, Mar 13, 2016 at 11:39 AM, Benjamin Kim <bbuil...@gmail.com > <mailto:bbuil...@gmail.com>> wrote: > Hi Ted, > > I see that you’re working on the hbase-spark module for hbase. I recently > packaged the SparkOnHBase project and

Re: Spark Job on YARN accessing Hbase Table

2016-03-13 Thread Benjamin Kim
1.0 root dir and add the following to root pom.xml: > hbase-spark > > Then you would be able to build the module yourself. > > hbase-spark module uses APIs which are compatible with hbase 1.0 > > Cheers > > On Sun, Mar 13, 2016 at 11:39 AM, Benjamin Kim <bbuil...@gmail.

Re: Spark Job on YARN accessing Hbase Table

2016-03-13 Thread Benjamin Kim
Hi Ted, I see that you’re working on the hbase-spark module for hbase. I recently packaged the SparkOnHBase project and gave it a test run. It works like a charm on CDH 5.4 and 5.5. All I had to do was add /opt/cloudera/parcels/CDH/jars/htrace-core-3.1.0-incubating.jar to the classpath.txt

Re: S3 Zip File Loading Advice

2016-03-09 Thread Benjamin Kim
ple files in each zip? Single file archives are processed just > like text as long as it is one of the supported compression formats. > > Regards > Sab > > On Wed, Mar 9, 2016 at 10:33 AM, Benjamin Kim <bbuil...@gmail.com > <mailto:bbuil...@gmail.com>> wrote: >

S3 Zip File Loading Advice

2016-03-08 Thread Benjamin Kim
I am wondering if anyone can help. Our company stores zipped CSV files in S3, which has been a big headache from the start. I was wondering if anyone has created a way to iterate through several subdirectories (s3n://events/2016/03/01/00, s3n://events/2016/03/01/01, etc.) in S3 to find the newest
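The "find the newest file across subdirectories" part of the question can be sketched independently of S3. A hypothetical illustration in plain Java: real code would list keys via the AWS SDK (e.g. an S3 list-objects call), so here the listing is stubbed as (key, lastModified) pairs and only the selection logic is shown; `newestKey` is a made-up name.

```java
// Hypothetical sketch: given object keys and their modification times,
// pick the most recently modified key. The S3 listing itself is stubbed.
import java.util.*;

public class NewestKeySketch {
    static String newestKey(Map<String, Long> keysByModified) {
        return keysByModified.entrySet().stream()
                .max(Map.Entry.comparingByValue())  // newest = largest timestamp
                .map(Map.Entry::getKey)
                .orElse(null);
    }

    public static void main(String[] args) {
        Map<String, Long> listing = new HashMap<>();
        listing.put("events/2016/03/01/00/a.csv.zip", 1456790400L);
        listing.put("events/2016/03/01/01/b.csv.zip", 1456794000L);
        System.out.println(newestKey(listing)); // events/2016/03/01/01/b.csv.zip
    }
}
```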

Re: Steps to Run Spark Scala job from Oozie on EC2 Hadoop clsuter

2016-03-07 Thread Benjamin Kim
To comment… At my company, we have not gotten it to work in any other mode than local. If we try any of the yarn modes, it fails with a “file does not exist” error when trying to locate the executable jar. I mentioned this to the Hue users group, which we used for this, and they replied that

Hadoop 2.8 Release Date

2016-03-04 Thread Benjamin Kim
I have a general question about Hadoop 2.8. Is it being prepped for release anytime soon? I am awaiting HADOOP-5732, which brings SFTP support natively. Thanks, Ben

Re: SFTP Compressed CSV into Dataframe

2016-03-03 Thread Benjamin Kim
<sw...@snappydata.io> wrote: > > (-user) > > On Thursday 03 March 2016 10:09 PM, Benjamin Kim wrote: >> I forgot to mention that we will be scheduling this job using Oozie. So, we >> will not be able to know which worker node is going to being running this. >> If we try

Re: Building a REST Service with Spark back-end

2016-03-02 Thread Benjamin Kim
I want to ask about something related to this. Does anyone know if there is, or will be, a command line equivalent of the spark-shell client for Livy Spark Server or any other Spark Job Server? The reason that I am asking is that spark-shell does not handle multiple users on the same server well. Since a

SFTP Compressed CSV into Dataframe

2016-03-02 Thread Benjamin Kim
I wonder if anyone has opened a SFTP connection to open a remote GZIP CSV file? I am able to download the file first locally using the SFTP Client in the spark-sftp package. Then, I load the file into a dataframe using the spark-csv package, which automatically decompresses the file. I just
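The decompression half of the workflow described above can be sketched without Spark or an SFTP server. A minimal plain-Java illustration, assuming the remote GZIP CSV has already been fetched (here it is stubbed with an in-memory byte array; `readGzipCsv` is a made-up name, and this stands in for what spark-csv does automatically):

```java
// Hypothetical sketch: decompress a GZIP stream and read it as CSV text.
import java.io.*;
import java.util.*;
import java.util.zip.*;

public class GzipCsvSketch {
    static List<String> readGzipCsv(InputStream raw) throws IOException {
        List<String> rows = new ArrayList<>();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(new GZIPInputStream(raw)))) {
            String line;
            while ((line = r.readLine()) != null) rows.add(line);
        }
        return rows;
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for the downloaded file: gzip a tiny CSV in memory.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (OutputStream gz = new GZIPOutputStream(buf)) {
            gz.write("id,value\n1,a\n".getBytes());
        }
        System.out.println(readGzipCsv(new ByteArrayInputStream(buf.toByteArray())));
        // prints [id,value, 1,a]
    }
}
```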

Re: Spark on Kudu

2016-03-01 Thread Benjamin Kim
Hi J-D, Quick question… Is there an ETA for KUDU-1214? I want to target a version of Kudu to begin real testing of Spark against it for our devs. At least, I can tell them what timeframe to anticipate. Just curious, Benjamin Kim Data Solutions Architect [a•mo•bee] (n.) the company defining

Re: [DISCUSS] Update Roadmap

2016-03-01 Thread Benjamin Kim
I see in the Enterprise section that multi-tenancy will be included, will this have user impersonation too? In this way, the user executing will be the user owning the process. > On Mar 1, 2016, at 12:51 AM, Shabeel Syed wrote: > > +1 > > Hi Tamas, >Pluggable

Re: [DISCUSS] Update Roadmap

2016-02-29 Thread Benjamin Kim
I concur with this suggestion. In the enterprise, management would like to see scheduled runs tracked, monitored, and given SLA constraints for the mission-critical ones. Alerts and notifications are crucial for DevOps to respond quickly with clear error information. If the Zeppelin notebooks

Re: zeppelin multi user mode?

2016-02-26 Thread Benjamin Kim
You can use only the second one that Hyung Sung pointed out with any >> spark/zeppelin version. >> >> If you have further questions, please do not hesitate to ask at >> z-mana...@googlegroups.com <mailto:z-mana...@googlegroups.com> >> https://groups.google.com/f

Data Export

2016-02-26 Thread Benjamin Kim
I don’t know if I’m missing something, but is there a way to export the result data into a CSV, Excel, etc. from a SQL statement? Thanks, Ben

Re: zeppelin multi user mode?

2016-02-26 Thread Benjamin Kim
roups.com> > https://groups.google.com/forum/#!forum/z-manager > <https://groups.google.com/forum/#!forum/z-manager> > On Thu, Feb 4, 2016, 15:13 Benjamin Kim <bbuil...@gmail.com > <mailto:bbuil...@gmail.com>> wrote: > I forgot to mention that I don’t see Spark

Re: Spark on Kudu

2016-02-24 Thread Benjamin Kim
UDU-1321 > <https://issues.cloudera.org/browse/KUDU-1321> > > It's a really simple wrapper, and yes you can use SparkSQL on Kudu, but it > will require a lot more work to make it fast/useful. > > Hope this helps, > > J-D > > On Wed, Feb 24, 2016 at 3:08

Spark on Kudu

2016-02-24 Thread Benjamin Kim
I see this KUDU-1214 targeted for 0.8.0, but I see no progress on it. When this is complete, will this mean that Spark will be able to work with Kudu both programmatically and as a client via Spark SQL? Or is there more work that needs to be done

Re: HBase Interpreter

2016-02-23 Thread Benjamin Kim
> On Wed, Feb 10, 2016 at 6:44 AM Felix Cheung <felixcheun...@hotmail.com > <mailto:felixcheun...@hotmail.com>> wrote: > It looks like hbase-site.xml is not picked up somehow. > > Rajat would you know of a way to get that set with the ruby code? > > >

Re: Kudu Release

2016-02-23 Thread Benjamin Kim
KzHfL2xcmKTScU-rhLcQFSns1UVSbrXhw%40mail.gmail.com%3E> > > Thanks, > > J-D > > On Tue, Feb 23, 2016 at 8:23 AM, Benjamin Kim <bbuil...@gmail.com > <mailto:bbuil...@gmail.com>> wrote: > Any word as to the release roadmap? > > Thanks, > Ben >

Re: Cloudera and Phoenix

2016-02-21 Thread Benjamin Kim
t; <mailto:dor.ben-...@amdocs.com>> wrote: > > Hi All, > > > > Do we have Phoenix release officially in Cloudera ? any plan to if not ? > > > > Regards, > > > > Dor ben Dov > > > > From: Benjamin Kim [mailto:bbuil...

Re: Spark Phoenix Plugin

2016-02-20 Thread Benjamin Kim
html> > [2] https://issues.apache.org/jira/browse/SPARK-1867 > <https://issues.apache.org/jira/browse/SPARK-1867> > > On Fri, Feb 19, 2016 at 2:18 PM, Benjamin Kim <bbuil...@gmail.com > <mailto:bbuil...@gmail.com>> wrote: > Hi Josh, > > Wh

Re: Spark Phoenix Plugin

2016-02-19 Thread Benjamin Kim
ing on it with and > haven't run into any problems: > https://github.com/jmahonin/docker-phoenix/tree/phoenix_spark > <https://github.com/jmahonin/docker-phoenix/tree/phoenix_spark> > > On Fri, Feb 19, 2016 at 12:40 PM, Benjamin Kim <bbuil...@gmail.com > &l

Re: Spark Phoenix Plugin

2016-02-19 Thread Benjamin Kim
4.7.0 using Spark 1.6? Thanks, Ben > On Feb 12, 2016, at 6:33 AM, Benjamin Kim <bbuil...@gmail.com> wrote: > > Anyone know when Phoenix 4.7 will be officially released? And what Cloudera > distribution versions will it be compatible with? > > Thanks, > Ben > &

Re: SparkOnHBase : Which version of Spark its available

2016-02-17 Thread Benjamin Kim
Ted, Any idea as to when this will be released? Thanks, Ben > On Feb 17, 2016, at 2:53 PM, Ted Yu wrote: > > The HBASE JIRA below is for HBase 2.0 > > HBase Spark module would be back ported to hbase 1.3.0 > > FYI > > On Feb 17, 2016, at 1:13 PM, Chandeep Singh

Re: Spark Phoenix Plugin

2016-02-12 Thread Benjamin Kim
Anyone know when Phoenix 4.7 will be officially released? And what Cloudera distribution versions will it be compatible with? Thanks, Ben > On Feb 10, 2016, at 11:03 AM, Benjamin Kim <bbuil...@gmail.com> wrote: > > Hi Pierre, > > I am getting this

Re: Spark Phoenix Plugin

2016-02-10 Thread Benjamin Kim
> Hi Pierre, > > Try your luck for building the artifacts from > https://github.com/chiastic-security/phoenix-for-cloudera > <https://github.com/chiastic-security/phoenix-for-cloudera>. Hopefully it > helps. > > Regards > Ravi . > > On Tue, Feb 9,

Re: spark 1.6.0 connect to hive metastore

2016-02-09 Thread Benjamin Kim
I got the same problem when I added the Phoenix plugin jar in the driver and executor extra classpaths. Do you have those set too? > On Feb 9, 2016, at 1:12 PM, Koert Kuipers wrote: > > yes its not using derby i think: i can see the tables in my actual hive > metastore. >

Re: Spark Phoenix Plugin

2016-02-09 Thread Benjamin Kim
gards > Ravi . > > On Tue, Feb 9, 2016 at 10:04 AM, Benjamin Kim <bbuil...@gmail.com > <mailto:bbuil...@gmail.com>> wrote: > Hi Pierre, > > I found this article about how Cloudera’s version of HBase is very different > than Apache HBase so it mus

Re: HBase Interpreter

2016-02-09 Thread Benjamin Kim
one option is to > 1. run & capture o/p of 'bin/hbase classpath' > 2. create a classloader > 3. load all the classes from 1 > > Then it will work with any version of HBase theoretically. > > > On Fri, Feb 5, 2016 at 8:14 AM Benjamin Kim <bbuil...@gmail.com > <mailto:bbu

Re: Spark Phoenix Plugin

2016-02-09 Thread Benjamin Kim
plugins/servlet/mobile#issue/SPARK-1867> > On Tue, 9 Feb 2016, 04:58 Benjamin Kim <bbuil...@gmail.com > <mailto:bbuil...@gmail.com>> wrote: > Pierre, > > I got it to work using phoenix-4.7.0-HBase-1.0-client-spark.jar. But, now, I > get this error: >

Re: Spark Phoenix Plugin

2016-02-08 Thread Benjamin Kim
1-client-spark.jar > > > On Mon, 8 Feb 2016, 22:29 Benjamin Kim <bbuil...@gmail.com > <mailto:bbuil...@gmail.com>> wrote: > Hi Josh, > > I tried again by putting the settings within the spark-default.conf. > > spark.driver.extraClassP

Re: Spark Phoenix Plugin

2016-02-08 Thread Benjamin Kim
> > Pierre Lacave > 171 Skellig House, Custom House, Lower Mayor street, Dublin 1, Ireland > Phone : +353879128708 > > On Fri, Feb 5, 2016 at 9:28 PM, Benjamin Kim <bbuil...@gmail.com > <mailto:bbuil...@gmail.com>> wrote: > Hi Pierre, > > When will I be able

Re: Spark Phoenix Plugin

2016-02-08 Thread Benjamin Kim
ou as well [2]. > > Good luck, > > Josh > > [1] https://phoenix.apache.org/phoenix_spark.html > <https://phoenix.apache.org/phoenix_spark.html> > [2] https://github.com/jmahonin/docker-phoenix/tree/phoenix_spark > <https://github.com/jmahonin/docker-phoenix/t

Re: Spark Phoenix Plugin

2016-02-05 Thread Benjamin Kim
Lacave* > 171 Skellig House, Custom House, Lower Mayor street, Dublin 1, Ireland > Phone : +353879128708 > > On Fri, Feb 5, 2016 at 6:17 PM, Benjamin Kim <bbuil...@gmail.com > <mailto:bbuil...@gmail.com>> wrote: > >> I cannot get this p

Re: HBase Interpreter

2016-02-04 Thread Benjamin Kim
he long run, one option is to > 1. run & capture o/p of 'bin/hbase classpath' > 2. create a classloader > 3. load all the classes from 1 > > Then it will work with any version of HBase theoretically. > > > On Fri, Feb 5, 2016 at 8:14 AM Benjamin Kim <bbuil...@gmail.c

Re: HBase Interpreter

2016-02-04 Thread Benjamin Kim
- will loop you in for validating for sure if you'd like. > > > > _________ > From: Benjamin Kim <bbuil...@gmail.com > <mailto:bbuil...@gmail.com>> > Sent: Thursday, February 4, 2016 9:39 PM > Subject: Re: HBase Interpreter >

Re: zeppelin multi user mode?

2016-02-03 Thread Benjamin Kim
ubator-zeppelin#spark-interpreter > <https://github.com/apache/incubator-zeppelin#spark-interpreter> > > On Wed, Feb 3, 2016 at 9:47 PM, Benjamin Kim <bbuil...@gmail.com > <mailto:bbuil...@gmail.com>> wrote: > I see that the latest version of Spark supported is 1.4.1.

Re: Is there a any plan to develop SPARK with c++??

2016-02-03 Thread Benjamin Kim
Hi DaeJin, The closest thing I can think of is this. https://databricks.com/blog/2015/04/28/project-tungsten-bringing-spark-closer-to-bare-metal.html Cheers, Ben > On Feb 3, 2016, at 9:49 PM, DaeJin Jung wrote: > > hello everyone, > I have a short question. > > I would

Re: csv dependencies loaded in %spark but not %sql in spark 1.6/zeppelin 0.5.6

2016-02-03 Thread Benjamin Kim
Same here. I want to know the answer too. > On Feb 2, 2016, at 12:32 PM, Jonathan Kelly wrote: > > Hey, I just ran into that same exact issue yesterday and wasn't sure if I was > doing something wrong or what. Glad to know it's not just me! Unfortunately I > have not

Re: Spark with SAS

2016-02-03 Thread Benjamin Kim
You can download the Spark ODBC Driver. https://databricks.com/spark/odbc-driver-download > On Feb 3, 2016, at 10:09 AM, Jörn Franke wrote: > > This could be done through odbc. Keep in mind that you can run SaS jobs > directly on a Hadoop cluster using the SaS embedded

Re: csv dependencies loaded in %spark but not %sql in spark 1.6/zeppelin 0.5.6

2016-02-02 Thread Benjamin Kim
Same here. I want to know the answer too. > On Feb 2, 2016, at 12:32 PM, Jonathan Kelly wrote: > > Hey, I just ran into that same exact issue yesterday and wasn't sure if I was > doing something wrong or what. Glad to know it's not just me! Unfortunately I > have not

Re: [ANNOUNCE] New SAMBA Package = Spark + AWS Lambda

2016-02-02 Thread Benjamin Kim
Hi David, My company uses Lambda to do simple data moving and processing using python scripts. I can see that using Spark instead for the data processing would make it into a real production-level platform. Does this pave the way into replacing the need of a pre-instantiated cluster in AWS or bought

Re: Upgrade spark to 1.6.0

2016-02-01 Thread Benjamin Kim
Hi Felix, After installing Spark 1.6, I built Zeppelin using: mvn clean package -Pspark-1.6 -Dspark.version=1.6.0 -Dhadoop.version=2.6.0-cdh5.4.8 -Phadoop-2.6 -Pyarn -Ppyspark -Pvendor-repo -DskipTests This worked for me. Cheers, Ben > On Feb 1, 2016, at 7:44 PM, Felix Cheung

Re: Spark SQL 1.5.2 missing JDBC driver for PostgreSQL?

2015-12-26 Thread Benjamin Kim
SPATH for this purpose, > but I couldn't get this to work for whatever reason, so i'm sticking to the > --jars approach used in my examples. > > On Tue, Dec 22, 2015 at 9:51 PM, Benjamin Kim <bbuil...@gmail.com > <mailto:bbuil...@gmail.com>> wrote: > Stephen, > >

Re: Spark SQL 1.5.2 missing JDBC driver for PostgreSQL?

2015-12-25 Thread Benjamin Kim
Spark Standalone per the spark.worker.cleanup.appDataTtl config param. > > The Spark SQL programming guide says to use SPARK_CLASSPATH for this purpose, > but I couldn't get this to work for whatever reason, so i'm sticking to the > --jars approach used in my examples. >

Re: Spark SQL 1.5.2 missing JDBC driver for PostgreSQL?

2015-12-22 Thread Benjamin Kim
Hi Stephen, I forgot to mention that I added these lines below to the spark-default.conf on the node running the Spark SQL Thrift JDBC/ODBC Server. Then, I restarted it. spark.driver.extraClassPath=/usr/share/java/postgresql-9.3-1104.jdbc41.jar
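For context, the spark-default.conf additions the message describes would look like the sketch below. The snippet truncates after the driver line, so the executor line here is an assumption (JDBC drivers generally need to be on both driver and executor classpaths), not a quote from the thread:

```properties
# Sketch, assuming both sides need the driver jar; only the first line
# is quoted in the message above.
spark.driver.extraClassPath=/usr/share/java/postgresql-9.3-1104.jdbc41.jar
spark.executor.extraClassPath=/usr/share/java/postgresql-9.3-1104.jdbc41.jar
```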

Re: Spark SQL 1.5.2 missing JDBC driver for PostgreSQL?

2015-12-22 Thread Benjamin Kim
rk. > > 2015-12-22 18:35 GMT-08:00 Benjamin Kim <bbuil...@gmail.com > <mailto:bbuil...@gmail.com>>: > >> Hi Stephen, >> >> I forgot to mention that I added these lines below to the >> spark-default.conf on the node with Spark

[jira] [Resolved] (RANGER-345) enable-agent.sh isn't a file

2015-04-05 Thread Benjamin Kim (JIRA)
[ https://issues.apache.org/jira/browse/RANGER-345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benjamin Kim resolved RANGER-345. - Resolution: Not A Problem I tried building Ranger on a different network, and it works

[jira] [Created] (RANGER-345) enable-agent.sh isn't a file

2015-03-27 Thread Benjamin Kim (JIRA)
Benjamin Kim created RANGER-345: --- Summary: enable-agent.sh isn't a file Key: RANGER-345 URL: https://issues.apache.org/jira/browse/RANGER-345 Project: Ranger Issue Type: Bug

[jira] [Updated] (MAPREDUCE-4718) MapReduce fails If I pass a parameter as a S3 folder

2014-04-21 Thread Benjamin Kim (JIRA)
[ https://issues.apache.org/jira/browse/MAPREDUCE-4718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benjamin Kim updated MAPREDUCE-4718: Target Version/s: 0.23.3, 1.0.3 (was: 1.0.3, 0.23.3, 2.0.0-alpha, 2.0.1-alpha

[jira] [Commented] (MAPREDUCE-4718) MapReduce fails If I pass a parameter as a S3 folder

2014-04-20 Thread Benjamin Kim (JIRA)
[ https://issues.apache.org/jira/browse/MAPREDUCE-4718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13975383#comment-13975383 ] Benjamin Kim commented on MAPREDUCE-4718: - Hi Chen, I tested it with CDH4.5.0

[jira] [Created] (HIVE-6230) Hive UDAF with subquery runs all logic on reducers

2014-01-19 Thread Benjamin Kim (JIRA)
Benjamin Kim created HIVE-6230: -- Summary: Hive UDAF with subquery runs all logic on reducers Key: HIVE-6230 URL: https://issues.apache.org/jira/browse/HIVE-6230 Project: Hive Issue Type: Bug

Re: [b2g] ZTE Open constantly restarts.

2014-01-17 Thread benjamin . kim . nguyen
On Friday, November 22, 2013 4:41:21 PM UTC+1, Ernesto Acosta wrote: After installing the ROM that ZTE offers for Spain's TME, my phone started to act weird. With version 1.1 of FirefoxOS, the touch did not work well at all; at times I even had to lock/unlock the screen using the power key to

Re: [b2g] ZTE Open-- bricked when updating OS

2014-01-17 Thread benjamin . kim . nguyen
On Saturday, January 11, 2014 3:02:22 PM UTC+1, lgg2...@gmail.com wrote: Hello, Finally I have a working phone. All thanks to [paziusss] (http://forum.xda-developers.com/showpost.php?p=49329491postcount=40) or [pazos] (in spanish

Re: [b2g] ZTE Open-- bricked when updating OS

2014-01-17 Thread benjamin . kim . nguyen
Thank you for the answer :) I finally made it!

RE: Writing to HBase

2013-12-12 Thread Benjamin Kim
Subject: Re: Writing to HBase Here's a good place to start: http://mail-archives.apache.org/mod_mbox/incubator-spark-user/201311.mbox/%3ccacyzca3askwd-tujhqi1805bn7sctguaoruhd5xtxcsul1a...@mail.gmail.com%3E On 12/5/2013 10:18 AM, Benjamin Kim wrote

RE: write data into HBase via spark

2013-12-06 Thread Benjamin Kim
Hi Phillip/Hao, I was wondering if there is a simple working example out there that I can just run and see it work. Then, I can customize it for our needs. Unfortunately, this explanation still confuses me a little. Here is a little about the environment we are working with. We have Cloudera's

Writing to HBase

2013-12-05 Thread Benjamin Kim
Does anyone have an example or some sort of starting point code when writing from Spark Streaming into HBase? We currently stream ad server event log data using Flume-NG to tail log entries, collect them, and put them directly into a HBase table. We would like to do the same with Spark

Re: Decommissioning Nodes in Production Cluster.

2013-02-12 Thread Benjamin Kim
Hi, I would like to add another scenario. What are the steps for removing a dead node when the server had a hard failure that is unrecoverable. Thanks, Ben On Tuesday, February 12, 2013 7:30:57 AM UTC-8, sudhakara st wrote: The decommissioning process is controlled by an exclude file, which

[jira] [Updated] (HBASE-6470) SingleColumnValueFilter with private fields and methods

2012-11-13 Thread Benjamin Kim (JIRA)
[ https://issues.apache.org/jira/browse/HBASE-6470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benjamin Kim updated HBASE-6470: Attachment: SingleColumnValueFilter_HBASE_6470-trunk.patch SingleColumnValueFilter

[jira] [Updated] (HBASE-6470) SingleColumnValueFilter with private fields and methods

2012-11-13 Thread Benjamin Kim (JIRA)
[ https://issues.apache.org/jira/browse/HBASE-6470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benjamin Kim updated HBASE-6470: Status: Open (was: Patch Available) SingleColumnValueFilter with private fields and methods

[jira] [Commented] (HBASE-6470) SingleColumnValueFilter with private fields and methods

2012-11-13 Thread Benjamin Kim (JIRA)
[ https://issues.apache.org/jira/browse/HBASE-6470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13496036#comment-13496036 ] Benjamin Kim commented on HBASE-6470: - oops I just did

[jira] [Updated] (HBASE-6470) SingleColumnValueFilter with private fields and methods

2012-11-12 Thread Benjamin Kim (JIRA)
[ https://issues.apache.org/jira/browse/HBASE-6470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benjamin Kim updated HBASE-6470: Fix Version/s: 0.96.0 Assignee: Benjamin Kim Release Note: Changes private fields

[jira] [Commented] (HBASE-6470) SingleColumnValueFilter with private fields and methods

2012-11-12 Thread Benjamin Kim (JIRA)
[ https://issues.apache.org/jira/browse/HBASE-6470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495291#comment-13495291 ] Benjamin Kim commented on HBASE-6470: - Submitted a patch; just changed all private

[jira] [Commented] (HBASE-6470) SingleColumnValueFilter with private fields and methods

2012-10-29 Thread Benjamin Kim (JIRA)
[ https://issues.apache.org/jira/browse/HBASE-6470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13486073#comment-13486073 ] Benjamin Kim commented on HBASE-6470: - I'll come back to this first thing tomorrow

[jira] [Created] (MAPREDUCE-4718) MapReduce fails If I pass a parameter as a S3 folder

2012-10-10 Thread Benjamin Kim (JIRA)
Benjamin Kim created MAPREDUCE-4718: --- Summary: MapReduce fails If I pass a parameter as a S3 folder Key: MAPREDUCE-4718 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4718 Project: Hadoop Map

[jira] [Created] (HBASE-6470) SingleColumnValueFilter with private fields and methods

2012-07-28 Thread Benjamin Kim (JIRA)
Benjamin Kim created HBASE-6470: --- Summary: SingleColumnValueFilter with private fields and methods Key: HBASE-6470 URL: https://issues.apache.org/jira/browse/HBASE-6470 Project: HBase Issue

[jira] [Updated] (HBASE-6288) In hbase-daemons.sh, description of the default backup-master file path is wrong

2012-07-13 Thread Benjamin Kim (JIRA)
[ https://issues.apache.org/jira/browse/HBASE-6288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benjamin Kim updated HBASE-6288: Attachment: HBASE-6288-trunk.patch HBASE-6288-94.patch HBASE-6288

[jira] [Commented] (HBASE-6288) In hbase-daemons.sh, description of the default backup-master file path is wrong

2012-07-13 Thread Benjamin Kim (JIRA)
[ https://issues.apache.org/jira/browse/HBASE-6288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13413921#comment-13413921 ] Benjamin Kim commented on HBASE-6288: - It took a while, as I was gone on vacation

[jira] [Created] (HBASE-6288) In hbase-daemons.sh, description of the default backup-master file path is wrong

2012-06-27 Thread Benjamin Kim (JIRA)
Benjamin Kim created HBASE-6288: --- Summary: In hbase-daemons.sh, description of the default backup-master file path is wrong Key: HBASE-6288 URL: https://issues.apache.org/jira/browse/HBASE-6288 Project

[jira] [Updated] (HBASE-6288) In hbase-daemons.sh, description of the default backup-master file path is wrong

2012-06-27 Thread Benjamin Kim (JIRA)
[ https://issues.apache.org/jira/browse/HBASE-6288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benjamin Kim updated HBASE-6288: Description: In hbase-daemons.sh, description of the default backup-master file path is wrong

[jira] [Updated] (HBASE-6132) ColumnCountGetFilter & PageFilter not working with FilterList

2012-05-30 Thread Benjamin Kim (JIRA)
[ https://issues.apache.org/jira/browse/HBASE-6132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benjamin Kim updated HBASE-6132: Description: Thanks to Anoop and Ramkrishna, here's what we found with FilterList. If I use

[jira] [Created] (HBASE-6132) ColumnCountGetFilter & PageFilter not working with FilterList

2012-05-29 Thread Benjamin Kim (JIRA)
Benjamin Kim created HBASE-6132: --- Summary: ColumnCountGetFilter & PageFilter not working with FilterList Key: HBASE-6132 URL: https://issues.apache.org/jira/browse/HBASE-6132 Project: HBase
