[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-08-22 Thread saintstack
Github user saintstack commented on a diff in the pull request:

https://github.com/apache/spark/pull/1608#discussion_r16606170
  
--- Diff: external/hbase/pom.xml ---
@@ -0,0 +1,140 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+ ~ Licensed to the Apache Software Foundation (ASF) under one or more
+ ~ contributor license agreements. See the NOTICE file distributed with
+ ~ this work for additional information regarding copyright ownership.
+ ~ The ASF licenses this file to You under the Apache License, Version 2.0
+ ~ (the "License"); you may not use this file except in compliance with
+ ~ the License. You may obtain a copy of the License at
+ ~
+ ~  http://www.apache.org/licenses/LICENSE-2.0
+ ~
+ ~ Unless required by applicable law or agreed to in writing, software
+ ~ distributed under the License is distributed on an "AS IS" BASIS,
+ ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ ~ See the License for the specific language governing permissions and
+ ~ limitations under the License.
+ -->
+
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <modelVersion>4.0.0</modelVersion>
+  <parent>
+    <groupId>org.apache.spark</groupId>
+    <artifactId>spark-parent</artifactId>
+    <version>1.1.0-SNAPSHOT</version>
+    <relativePath>../../pom.xml</relativePath>
+  </parent>
+
+  <groupId>org.apache.spark</groupId>
+  <artifactId>spark-hbase_2.10</artifactId>
--- End diff --

Pardon my ignorance, but what is the _2.10 about?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-08-22 Thread saintstack
Github user saintstack commented on a diff in the pull request:

https://github.com/apache/spark/pull/1608#discussion_r16606202
  
--- Diff: external/hbase/pom.xml ---
@@ -0,0 +1,140 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+ ~ Licensed to the Apache Software Foundation (ASF) under one or more
+ ~ contributor license agreements. See the NOTICE file distributed with
+ ~ this work for additional information regarding copyright ownership.
+ ~ The ASF licenses this file to You under the Apache License, Version 2.0
+ ~ (the "License"); you may not use this file except in compliance with
+ ~ the License. You may obtain a copy of the License at
+ ~
+ ~  http://www.apache.org/licenses/LICENSE-2.0
+ ~
+ ~ Unless required by applicable law or agreed to in writing, software
+ ~ distributed under the License is distributed on an "AS IS" BASIS,
+ ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ ~ See the License for the specific language governing permissions and
+ ~ limitations under the License.
+ -->
+
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <modelVersion>4.0.0</modelVersion>
+  <parent>
+    <groupId>org.apache.spark</groupId>
+    <artifactId>spark-parent</artifactId>
+    <version>1.1.0-SNAPSHOT</version>
+    <relativePath>../../pom.xml</relativePath>
+  </parent>
+
+  <groupId>org.apache.spark</groupId>
+  <artifactId>spark-hbase_2.10</artifactId>
+  <properties>
+    <sbt.project.name>spark-hbase</sbt.project.name>
+  </properties>
+  <packaging>jar</packaging>
+  <name>Spark Project External Flume</name>
--- End diff --

Is the mention of flume here intentional?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-08-22 Thread saintstack
Github user saintstack commented on a diff in the pull request:

https://github.com/apache/spark/pull/1608#discussion_r16606346
  
--- Diff: external/hbase/pom.xml ---
@@ -0,0 +1,140 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+ ~ Licensed to the Apache Software Foundation (ASF) under one or more
+ ~ contributor license agreements. See the NOTICE file distributed with
+ ~ this work for additional information regarding copyright ownership.
+ ~ The ASF licenses this file to You under the Apache License, Version 2.0
+ ~ (the "License"); you may not use this file except in compliance with
+ ~ the License. You may obtain a copy of the License at
+ ~
+ ~  http://www.apache.org/licenses/LICENSE-2.0
+ ~
+ ~ Unless required by applicable law or agreed to in writing, software
+ ~ distributed under the License is distributed on an "AS IS" BASIS,
+ ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ ~ See the License for the specific language governing permissions and
+ ~ limitations under the License.
+ -->
+
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <modelVersion>4.0.0</modelVersion>
+  <parent>
+    <groupId>org.apache.spark</groupId>
+    <artifactId>spark-parent</artifactId>
+    <version>1.1.0-SNAPSHOT</version>
+    <relativePath>../../pom.xml</relativePath>
+  </parent>
+
+  <groupId>org.apache.spark</groupId>
+  <artifactId>spark-hbase_2.10</artifactId>
+  <properties>
+    <sbt.project.name>spark-hbase</sbt.project.name>
+  </properties>
+  <packaging>jar</packaging>
+  <name>Spark Project External Flume</name>
+  <url>http://spark.apache.org/</url>
+
+  <dependencies>
+    <dependency>
+      <groupId>org.scalatest</groupId>
+      <artifactId>scalatest_${scala.binary.version}</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.spark</groupId>
+      <artifactId>spark-core_${scala.binary.version}</artifactId>
+      <version>${project.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.spark</groupId>
+      <artifactId>spark-streaming_${scala.binary.version}</artifactId>
+      <version>${project.version}</version>
+      <type>test-jar</type>
+      <classifier>tests</classifier>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.spark</groupId>
+      <artifactId>spark-streaming_${scala.binary.version}</artifactId>
+      <version>${project.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hbase</groupId>
+      <artifactId>hbase-client</artifactId>
+      <version>${hbase-new.version}</version>
--- End diff --

What does the '-new' refer to?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-08-22 Thread saintstack
Github user saintstack commented on a diff in the pull request:

https://github.com/apache/spark/pull/1608#discussion_r16606704
  
--- Diff: 
external/hbase/src/main/scala/org/apache/spark/hbase/HBaseContext.scala ---
@@ -0,0 +1,538 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the License); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.hbase
+
+import org.apache.hadoop.hbase.HBaseConfiguration
+import org.apache.spark.rdd.RDD
+import java.io.ByteArrayOutputStream
+import java.io.DataOutputStream
+import java.io.ByteArrayInputStream
+import java.io.DataInputStream
+import org.apache.hadoop.conf.Configuration
+import org.apache.hadoop.hbase.client.HConnectionManager
+import org.apache.spark.api.java.JavaPairRDD
+import java.io.OutputStream
+import org.apache.hadoop.hbase.client.HTable
+import org.apache.hadoop.hbase.client.Scan
+import org.apache.hadoop.hbase.client.Get
+import java.util.ArrayList
+import org.apache.hadoop.hbase.client.Result
+import scala.reflect.ClassTag
+import org.apache.hadoop.hbase.client.HConnection
+import org.apache.hadoop.hbase.client.Put
+import org.apache.hadoop.hbase.client.Increment
+import org.apache.hadoop.hbase.client.Delete
+import org.apache.spark.SparkContext
+import org.apache.hadoop.hbase.mapreduce.TableInputFormat
+import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable
+import org.apache.hadoop.hbase.util.Bytes
+import org.apache.hadoop.mapreduce.Job
+import org.apache.hadoop.hbase.mapreduce.TableMapper
+import org.apache.hadoop.hbase.mapreduce.IdentityTableMapper
+import org.apache.hadoop.hbase.util.Base64
+import org.apache.spark.rdd.HadoopRDD
+import org.apache.spark.broadcast.Broadcast
+import org.apache.spark.SerializableWritable
+import java.util.HashMap
+import java.util.concurrent.atomic.AtomicInteger
+import org.apache.hadoop.hbase.HConstants
+import java.util.concurrent.atomic.AtomicLong
+import java.util.Timer
+import java.util.TimerTask
+import org.apache.hadoop.hbase.client.Mutation
+import scala.collection.mutable.MutableList
+import org.apache.spark.streaming.dstream.DStream
+
+/**
+ * HBaseContext is a façade of simple and complex HBase operations
+ * like bulk put, get, increment, delete, and scan
+ *
+ * HBase Context will take the responsibilities to happen to
+ * complexity of disseminating the configuration information
+ * to the working and managing the life cycle of HConnections.
--- End diff --

This paragraph needs an edit
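One possible rewording of that paragraph (a suggestion only, not wording taken from the patch):

```
/**
 * HBaseContext is a facade for simple and complex HBase operations
 * such as bulk put, get, increment, delete, and scan.
 *
 * It takes on the complexity of disseminating configuration
 * information to the workers and of managing the life cycle of
 * HConnections on their behalf.
 */
```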


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-08-22 Thread sryza
Github user sryza commented on a diff in the pull request:

https://github.com/apache/spark/pull/1608#discussion_r16606710
  
--- Diff: external/hbase/pom.xml ---
@@ -0,0 +1,140 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+ ~ Licensed to the Apache Software Foundation (ASF) under one or more
+ ~ contributor license agreements. See the NOTICE file distributed with
+ ~ this work for additional information regarding copyright ownership.
+ ~ The ASF licenses this file to You under the Apache License, Version 2.0
+ ~ (the "License"); you may not use this file except in compliance with
+ ~ the License. You may obtain a copy of the License at
+ ~
+ ~  http://www.apache.org/licenses/LICENSE-2.0
+ ~
+ ~ Unless required by applicable law or agreed to in writing, software
+ ~ distributed under the License is distributed on an "AS IS" BASIS,
+ ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ ~ See the License for the specific language governing permissions and
+ ~ limitations under the License.
+ -->
+
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <modelVersion>4.0.0</modelVersion>
+  <parent>
+    <groupId>org.apache.spark</groupId>
+    <artifactId>spark-parent</artifactId>
+    <version>1.1.0-SNAPSHOT</version>
+    <relativePath>../../pom.xml</relativePath>
+  </parent>
+
+  <groupId>org.apache.spark</groupId>
+  <artifactId>spark-hbase_2.10</artifactId>
--- End diff --

That's the Scala version
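In sbt terms, the same convention looks like this (the spark-hbase coordinates here are hypothetical, since this module was never published):

```
// In sbt, "%%" appends the Scala binary version to the artifact id,
// so this would resolve to spark-hbase_2.10 when built with Scala 2.10.
libraryDependencies += "org.apache.spark" %% "spark-hbase" % "1.1.0-SNAPSHOT"
```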


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-08-22 Thread saintstack
Github user saintstack commented on a diff in the pull request:

https://github.com/apache/spark/pull/1608#discussion_r16606905
  
--- Diff: 
external/hbase/src/main/scala/org/apache/spark/hbase/HBaseContext.scala ---
@@ -0,0 +1,538 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the License); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.hbase
+
+import org.apache.hadoop.hbase.HBaseConfiguration
+import org.apache.spark.rdd.RDD
+import java.io.ByteArrayOutputStream
+import java.io.DataOutputStream
+import java.io.ByteArrayInputStream
+import java.io.DataInputStream
+import org.apache.hadoop.conf.Configuration
+import org.apache.hadoop.hbase.client.HConnectionManager
+import org.apache.spark.api.java.JavaPairRDD
+import java.io.OutputStream
+import org.apache.hadoop.hbase.client.HTable
+import org.apache.hadoop.hbase.client.Scan
+import org.apache.hadoop.hbase.client.Get
+import java.util.ArrayList
+import org.apache.hadoop.hbase.client.Result
+import scala.reflect.ClassTag
+import org.apache.hadoop.hbase.client.HConnection
+import org.apache.hadoop.hbase.client.Put
+import org.apache.hadoop.hbase.client.Increment
+import org.apache.hadoop.hbase.client.Delete
+import org.apache.spark.SparkContext
+import org.apache.hadoop.hbase.mapreduce.TableInputFormat
+import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable
+import org.apache.hadoop.hbase.util.Bytes
+import org.apache.hadoop.mapreduce.Job
+import org.apache.hadoop.hbase.mapreduce.TableMapper
+import org.apache.hadoop.hbase.mapreduce.IdentityTableMapper
+import org.apache.hadoop.hbase.util.Base64
+import org.apache.spark.rdd.HadoopRDD
+import org.apache.spark.broadcast.Broadcast
+import org.apache.spark.SerializableWritable
+import java.util.HashMap
+import java.util.concurrent.atomic.AtomicInteger
+import org.apache.hadoop.hbase.HConstants
+import java.util.concurrent.atomic.AtomicLong
+import java.util.Timer
+import java.util.TimerTask
+import org.apache.hadoop.hbase.client.Mutation
+import scala.collection.mutable.MutableList
+import org.apache.spark.streaming.dstream.DStream
+
+/**
+ * HBaseContext is a façade of simple and complex HBase operations
+ * like bulk put, get, increment, delete, and scan
+ *
+ * HBase Context will take the responsibilities to happen to
+ * complexity of disseminating the configuration information
+ * to the working and managing the life cycle of HConnections.
+ *
+ * First constructor:
+ *  @param sc  Active SparkContext
+ *  @param broadcastedConf This is a Broadcast object that holds a
+ * serializable Configuration object
+ *
+ */
+@serializable class HBaseContext(@transient sc: SparkContext,
+  broadcastedConf: Broadcast[SerializableWritable[Configuration]]) {
+
+  /**
+   * Second constructor option:
+   *  @param sc Active SparkContext
+   *  @param config Configuration object to make connection to HBase
+   */
+  def this(@transient sc: SparkContext, @transient config: Configuration) {
+this(sc, sc.broadcast(new SerializableWritable(config)))
+  }
+
+  /**
+   * A simple enrichment of the traditional Spark RDD foreachPartition.
+   * This function differs from the original in that it offers the 
+   * developer access to a already connected HConnection object
+   * 
+   * Note: Do not close the HConnection object.  All HConnection
+   * management is handled outside this method
+   * 
+   * @param RDD  Original RDD with data to iterate over
+   * @param fFunction to be given a iterator to iterate through
+   * the RDD values and a HConnection object to interact 
+   * with HBase 
+   */
+  def foreachPartition[T](rdd: RDD[T],
+    f: (Iterator[T], HConnection) => Unit) = {
+    rdd.foreachPartition(
+      it => hbaseForeachPartition(broadcastedConf, it, f))
+  

[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-08-22 Thread saintstack
Github user saintstack commented on a diff in the pull request:

https://github.com/apache/spark/pull/1608#discussion_r16607037
  
--- Diff: 
external/hbase/src/main/scala/org/apache/spark/hbase/HBaseContext.scala ---
@@ -0,0 +1,538 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the License); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.hbase
+
+import org.apache.hadoop.hbase.HBaseConfiguration
+import org.apache.spark.rdd.RDD
+import java.io.ByteArrayOutputStream
+import java.io.DataOutputStream
+import java.io.ByteArrayInputStream
+import java.io.DataInputStream
+import org.apache.hadoop.conf.Configuration
+import org.apache.hadoop.hbase.client.HConnectionManager
+import org.apache.spark.api.java.JavaPairRDD
+import java.io.OutputStream
+import org.apache.hadoop.hbase.client.HTable
+import org.apache.hadoop.hbase.client.Scan
+import org.apache.hadoop.hbase.client.Get
+import java.util.ArrayList
+import org.apache.hadoop.hbase.client.Result
+import scala.reflect.ClassTag
+import org.apache.hadoop.hbase.client.HConnection
+import org.apache.hadoop.hbase.client.Put
+import org.apache.hadoop.hbase.client.Increment
+import org.apache.hadoop.hbase.client.Delete
+import org.apache.spark.SparkContext
+import org.apache.hadoop.hbase.mapreduce.TableInputFormat
+import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable
+import org.apache.hadoop.hbase.util.Bytes
+import org.apache.hadoop.mapreduce.Job
+import org.apache.hadoop.hbase.mapreduce.TableMapper
+import org.apache.hadoop.hbase.mapreduce.IdentityTableMapper
+import org.apache.hadoop.hbase.util.Base64
+import org.apache.spark.rdd.HadoopRDD
+import org.apache.spark.broadcast.Broadcast
+import org.apache.spark.SerializableWritable
+import java.util.HashMap
+import java.util.concurrent.atomic.AtomicInteger
+import org.apache.hadoop.hbase.HConstants
+import java.util.concurrent.atomic.AtomicLong
+import java.util.Timer
+import java.util.TimerTask
+import org.apache.hadoop.hbase.client.Mutation
+import scala.collection.mutable.MutableList
+import org.apache.spark.streaming.dstream.DStream
+
+/**
+ * HBaseContext is a façade of simple and complex HBase operations
+ * like bulk put, get, increment, delete, and scan
+ *
+ * HBase Context will take the responsibilities to happen to
+ * complexity of disseminating the configuration information
+ * to the working and managing the life cycle of HConnections.
+ *
+ * First constructor:
+ *  @param sc  Active SparkContext
+ *  @param broadcastedConf This is a Broadcast object that holds a
+ * serializable Configuration object
+ *
+ */
+@serializable class HBaseContext(@transient sc: SparkContext,
+  broadcastedConf: Broadcast[SerializableWritable[Configuration]]) {
+
+  /**
+   * Second constructor option:
+   *  @param sc Active SparkContext
+   *  @param config Configuration object to make connection to HBase
+   */
+  def this(@transient sc: SparkContext, @transient config: Configuration) {
+this(sc, sc.broadcast(new SerializableWritable(config)))
+  }
+
+  /**
+   * A simple enrichment of the traditional Spark RDD foreachPartition.
+   * This function differs from the original in that it offers the 
+   * developer access to a already connected HConnection object
+   * 
+   * Note: Do not close the HConnection object.  All HConnection
+   * management is handled outside this method
+   * 
+   * @param RDD  Original RDD with data to iterate over
+   * @param fFunction to be given a iterator to iterate through
+   * the RDD values and a HConnection object to interact 
+   * with HBase 
+   */
+  def foreachPartition[T](rdd: RDD[T],
+    f: (Iterator[T], HConnection) => Unit) = {
+    rdd.foreachPartition(
+      it => hbaseForeachPartition(broadcastedConf, it, f))
+  

[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-08-22 Thread saintstack
Github user saintstack commented on a diff in the pull request:

https://github.com/apache/spark/pull/1608#discussion_r16607208
  
--- Diff: 
external/hbase/src/main/scala/org/apache/spark/hbase/HBaseContext.scala ---
@@ -0,0 +1,538 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the License); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.hbase
+
+import org.apache.hadoop.hbase.HBaseConfiguration
+import org.apache.spark.rdd.RDD
+import java.io.ByteArrayOutputStream
+import java.io.DataOutputStream
+import java.io.ByteArrayInputStream
+import java.io.DataInputStream
+import org.apache.hadoop.conf.Configuration
+import org.apache.hadoop.hbase.client.HConnectionManager
+import org.apache.spark.api.java.JavaPairRDD
+import java.io.OutputStream
+import org.apache.hadoop.hbase.client.HTable
+import org.apache.hadoop.hbase.client.Scan
+import org.apache.hadoop.hbase.client.Get
+import java.util.ArrayList
+import org.apache.hadoop.hbase.client.Result
+import scala.reflect.ClassTag
+import org.apache.hadoop.hbase.client.HConnection
+import org.apache.hadoop.hbase.client.Put
+import org.apache.hadoop.hbase.client.Increment
+import org.apache.hadoop.hbase.client.Delete
+import org.apache.spark.SparkContext
+import org.apache.hadoop.hbase.mapreduce.TableInputFormat
+import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable
+import org.apache.hadoop.hbase.util.Bytes
+import org.apache.hadoop.mapreduce.Job
+import org.apache.hadoop.hbase.mapreduce.TableMapper
+import org.apache.hadoop.hbase.mapreduce.IdentityTableMapper
+import org.apache.hadoop.hbase.util.Base64
+import org.apache.spark.rdd.HadoopRDD
+import org.apache.spark.broadcast.Broadcast
+import org.apache.spark.SerializableWritable
+import java.util.HashMap
+import java.util.concurrent.atomic.AtomicInteger
+import org.apache.hadoop.hbase.HConstants
+import java.util.concurrent.atomic.AtomicLong
+import java.util.Timer
+import java.util.TimerTask
+import org.apache.hadoop.hbase.client.Mutation
+import scala.collection.mutable.MutableList
+import org.apache.spark.streaming.dstream.DStream
+
+/**
+ * HBaseContext is a façade of simple and complex HBase operations
+ * like bulk put, get, increment, delete, and scan
+ *
+ * HBase Context will take the responsibilities to happen to
+ * complexity of disseminating the configuration information
+ * to the working and managing the life cycle of HConnections.
+ *
+ * First constructor:
+ *  @param sc  Active SparkContext
+ *  @param broadcastedConf This is a Broadcast object that holds a
+ * serializable Configuration object
+ *
+ */
+@serializable class HBaseContext(@transient sc: SparkContext,
+  broadcastedConf: Broadcast[SerializableWritable[Configuration]]) {
+
+  /**
+   * Second constructor option:
+   *  @param sc Active SparkContext
+   *  @param config Configuration object to make connection to HBase
+   */
+  def this(@transient sc: SparkContext, @transient config: Configuration) {
+this(sc, sc.broadcast(new SerializableWritable(config)))
+  }
+
+  /**
+   * A simple enrichment of the traditional Spark RDD foreachPartition.
+   * This function differs from the original in that it offers the 
+   * developer access to a already connected HConnection object
+   * 
+   * Note: Do not close the HConnection object.  All HConnection
+   * management is handled outside this method
+   * 
+   * @param RDD  Original RDD with data to iterate over
+   * @param fFunction to be given a iterator to iterate through
+   * the RDD values and a HConnection object to interact 
+   * with HBase 
+   */
+  def foreachPartition[T](rdd: RDD[T],
+    f: (Iterator[T], HConnection) => Unit) = {
+    rdd.foreachPartition(
+      it => hbaseForeachPartition(broadcastedConf, it, f))
+  

[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-08-22 Thread saintstack
Github user saintstack commented on a diff in the pull request:

https://github.com/apache/spark/pull/1608#discussion_r16607538
  
--- Diff: 
external/hbase/src/main/scala/org/apache/spark/hbase/HBaseContext.scala ---
@@ -0,0 +1,538 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the License); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.hbase
+
+import org.apache.hadoop.hbase.HBaseConfiguration
+import org.apache.spark.rdd.RDD
+import java.io.ByteArrayOutputStream
+import java.io.DataOutputStream
+import java.io.ByteArrayInputStream
+import java.io.DataInputStream
+import org.apache.hadoop.conf.Configuration
+import org.apache.hadoop.hbase.client.HConnectionManager
+import org.apache.spark.api.java.JavaPairRDD
+import java.io.OutputStream
+import org.apache.hadoop.hbase.client.HTable
+import org.apache.hadoop.hbase.client.Scan
+import org.apache.hadoop.hbase.client.Get
+import java.util.ArrayList
+import org.apache.hadoop.hbase.client.Result
+import scala.reflect.ClassTag
+import org.apache.hadoop.hbase.client.HConnection
+import org.apache.hadoop.hbase.client.Put
+import org.apache.hadoop.hbase.client.Increment
+import org.apache.hadoop.hbase.client.Delete
+import org.apache.spark.SparkContext
+import org.apache.hadoop.hbase.mapreduce.TableInputFormat
+import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable
+import org.apache.hadoop.hbase.util.Bytes
+import org.apache.hadoop.mapreduce.Job
+import org.apache.hadoop.hbase.mapreduce.TableMapper
+import org.apache.hadoop.hbase.mapreduce.IdentityTableMapper
+import org.apache.hadoop.hbase.util.Base64
+import org.apache.spark.rdd.HadoopRDD
+import org.apache.spark.broadcast.Broadcast
+import org.apache.spark.SerializableWritable
+import java.util.HashMap
+import java.util.concurrent.atomic.AtomicInteger
+import org.apache.hadoop.hbase.HConstants
+import java.util.concurrent.atomic.AtomicLong
+import java.util.Timer
+import java.util.TimerTask
+import org.apache.hadoop.hbase.client.Mutation
+import scala.collection.mutable.MutableList
+import org.apache.spark.streaming.dstream.DStream
+
+/**
+ * HBaseContext is a façade of simple and complex HBase operations
+ * like bulk put, get, increment, delete, and scan
+ *
+ * HBase Context will take the responsibilities to happen to
+ * complexity of disseminating the configuration information
+ * to the working and managing the life cycle of HConnections.
+ *
+ * First constructor:
+ *  @param sc  Active SparkContext
+ *  @param broadcastedConf This is a Broadcast object that holds a
+ * serializable Configuration object
+ *
+ */
+@serializable class HBaseContext(@transient sc: SparkContext,
+  broadcastedConf: Broadcast[SerializableWritable[Configuration]]) {
+
+  /**
+   * Second constructor option:
+   *  @param sc Active SparkContext
+   *  @param config Configuration object to make connection to HBase
+   */
+  def this(@transient sc: SparkContext, @transient config: Configuration) {
+this(sc, sc.broadcast(new SerializableWritable(config)))
+  }
+
+  /**
+   * A simple enrichment of the traditional Spark RDD foreachPartition.
+   * This function differs from the original in that it offers the 
+   * developer access to a already connected HConnection object
+   * 
+   * Note: Do not close the HConnection object.  All HConnection
+   * management is handled outside this method
+   * 
+   * @param RDD  Original RDD with data to iterate over
+   * @param fFunction to be given a iterator to iterate through
+   * the RDD values and a HConnection object to interact 
+   * with HBase 
+   */
+  def foreachPartition[T](rdd: RDD[T],
+    f: (Iterator[T], HConnection) => Unit) = {
+    rdd.foreachPartition(
+      it => hbaseForeachPartition(broadcastedConf, it, f))
+  

[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-08-22 Thread saintstack
Github user saintstack commented on a diff in the pull request:

https://github.com/apache/spark/pull/1608#discussion_r16607576
  
--- Diff: 
external/hbase/src/main/scala/org/apache/spark/hbase/HConnectionStaticCache.scala
 ---
@@ -0,0 +1,149 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the License); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.hbase
+
+import org.apache.hadoop.hbase.client.HConnection
+import java.util.concurrent.atomic.AtomicInteger
+import java.util.concurrent.atomic.AtomicLong
+import java.util.Timer
+import org.apache.hadoop.conf.Configuration
+import org.apache.hadoop.hbase.HConstants
+import org.apache.hadoop.hbase.client.HConnectionManager
+import java.util.TimerTask
+import scala.collection.mutable.MutableList
+import org.apache.spark.Logging
+import scala.collection.mutable.SynchronizedMap
+import scala.collection.mutable.HashMap
+
+/**
+ * A static caching class that will manage all HConnection in a worker
--- End diff --

When you say 'caching', is it caching data fetched from HBase or caching 
HConnections?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-08-22 Thread saintstack
Github user saintstack commented on a diff in the pull request:

https://github.com/apache/spark/pull/1608#discussion_r16607705
  
--- Diff: external/hbase/pom.xml ---
@@ -0,0 +1,140 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+ ~ Licensed to the Apache Software Foundation (ASF) under one or more
+ ~ contributor license agreements. See the NOTICE file distributed with
+ ~ this work for additional information regarding copyright ownership.
+ ~ The ASF licenses this file to You under the Apache License, Version 2.0
+ ~ (the "License"); you may not use this file except in compliance with
+ ~ the License. You may obtain a copy of the License at
+ ~
+ ~  http://www.apache.org/licenses/LICENSE-2.0
+ ~
+ ~ Unless required by applicable law or agreed to in writing, software
+ ~ distributed under the License is distributed on an "AS IS" BASIS,
+ ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ ~ See the License for the specific language governing permissions and
+ ~ limitations under the License.
+ -->
+
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <modelVersion>4.0.0</modelVersion>
+  <parent>
+    <groupId>org.apache.spark</groupId>
+    <artifactId>spark-parent</artifactId>
+    <version>1.1.0-SNAPSHOT</version>
+    <relativePath>../../pom.xml</relativePath>
+  </parent>
+
+  <groupId>org.apache.spark</groupId>
+  <artifactId>spark-hbase_2.10</artifactId>
--- End diff --

Thanks


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-08-22 Thread saintstack
Github user saintstack commented on a diff in the pull request:

https://github.com/apache/spark/pull/1608#discussion_r16608036
  
--- Diff: 
external/hbase/src/main/scala/org/apache/spark/hbase/example/HBaseBulkDeleteExample.scala
 ---
@@ -0,0 +1,60 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the License); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.hbase.example
+
+import org.apache.spark.SparkContext
+import org.apache.hadoop.hbase.HBaseConfiguration
+import org.apache.hadoop.fs.Path
+import org.apache.hadoop.hbase.util.Bytes
+import org.apache.spark.hbase.HBaseContext
+import org.apache.hadoop.hbase.client.Delete
+
+object HBaseBulkDeleteExample {
--- End diff --

Class comment to state what this example does?
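A sketch of what such a comment and the call it describes might look like; the bulkDelete signature below is assumed for illustration only (it is not shown in this diff), and `sc`, `hbaseContext`, table name and row keys are placeholders:

```
import org.apache.hadoop.hbase.client.Delete
import org.apache.hadoop.hbase.util.Bytes

// Deletes a small set of row keys from an HBase table in bulk, using an
// assumed bulkDelete(rdd, tableName, record => Delete, batchSize) helper.
val rowKeys = sc.parallelize(Seq("rowKey1", "rowKey2", "rowKey3"))
hbaseContext.bulkDelete[String](
  rowKeys,
  "exampleTable",
  rowKey => new Delete(Bytes.toBytes(rowKey)),
  4)
```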


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-08-22 Thread saintstack
Github user saintstack commented on a diff in the pull request:

https://github.com/apache/spark/pull/1608#discussion_r16608329
  
--- Diff: 
external/hbase/src/main/scala/org/apache/spark/hbase/HConnectionStaticCache.scala
 ---
@@ -0,0 +1,149 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the License); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.hbase
+
+import org.apache.hadoop.hbase.client.HConnection
+import java.util.concurrent.atomic.AtomicInteger
+import java.util.concurrent.atomic.AtomicLong
+import java.util.Timer
+import org.apache.hadoop.conf.Configuration
+import org.apache.hadoop.hbase.HConstants
+import org.apache.hadoop.hbase.client.HConnectionManager
+import java.util.TimerTask
+import scala.collection.mutable.MutableList
+import org.apache.spark.Logging
+import scala.collection.mutable.SynchronizedMap
+import scala.collection.mutable.HashMap
+
+/**
+ * A static caching class that will manage all HConnection in a worker
+ *
+ * The main idea is there is a hashMap with
+ * HConstants.HBASE_CLIENT_INSTANCE_ID which is 
(hbase.client.instance.id)
+ *
+ * In that HashMap there is three things
+ *   - HConnection
+ *   - Number of checked out users of the HConnection
+ *   - Time since the HConnection was last used
+ *
+ * There is also a Timer thread that will start up every 2 minutes
+ * When the Timer thread starts up it will look for HConnection with no
+ * checked out users and a last used time that is older then 1 minute.
+ *
+ * This class is not intended to be used by Users
+ */
+object HConnectionStaticCache extends Logging {
+  @transient private val hconnectionMap =
+new HashMap[String, (HConnection, AtomicInteger, AtomicLong)] with 
SynchronizedMap[String, (HConnection, AtomicInteger, AtomicLong)]
+
+  @transient private val hconnectionTimeout = 6
+
+  @transient private val hconnectionCleaner = new Timer
+
+  hconnectionCleaner.schedule(new hconnectionCleanerTask, 
hconnectionTimeout * 2)
+
+  /**
+   * Gets or starts a HConnection based on a config object
+   */
+  def getHConnection(config: Configuration): HConnection = {
+val instanceId = config.get(HConstants.HBASE_CLIENT_INSTANCE_ID)
+var hconnectionAndCounter = 
hconnectionMap.get(instanceId).getOrElse(null)
+if (hconnectionAndCounter == null) {
+  hconnectionMap.synchronized { 
+hconnectionAndCounter = 
hconnectionMap.get(instanceId).getOrElse(null)
+if (hconnectionAndCounter == null) {
+
+  val hConnection = HConnectionManager.createConnection(config)
+  hconnectionAndCounter = (hConnection, new AtomicInteger, new 
AtomicLong)
+  hconnectionMap.put(instanceId, hconnectionAndCounter)
+
+}
+  }
+  logDebug("Created hConnection '" + instanceId + "'");
+} else {
+  logDebug("Get hConnection from cache '" + instanceId + "'");
+}
+hconnectionAndCounter._2.incrementAndGet()
+return hconnectionAndCounter._1
+  }
+
+  /**
+   * tell us a thread is no longer using a HConnection
+   */
+  def finishWithHConnection(config: Configuration, hconnection: 
HConnection) {
+val instanceId = config.get(HConstants.HBASE_CLIENT_INSTANCE_ID)
+
+var hconnectionAndCounter = 
hconnectionMap.get(instanceId).getOrElse(null)
+if (hconnectionAndCounter != null) {
+  var usesLeft = hconnectionAndCounter._2.decrementAndGet()
+  if (usesLeft < 0) {
+hconnectionAndCounter._2.set(0)
+usesLeft = 0
+  }
+  if (usesLeft == 0) {
+hconnectionAndCounter._3.set(System.currentTimeMillis())
+logDebug("Finished last use of hconnection '" + instanceId + "'");
+  } else {
+logDebug("Finished a use of hconnection '" + instanceId + "' with " + usesLeft + " uses left");
+  }
+} else {
+  logWarning("Tried to remove use of '" + instanceId + "' but nothing 
was 
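The quoted diff is cut off above, but the check-out/check-in protocol the class comment describes can be sketched as follows, using the getHConnection and finishWithHConnection methods shown; the table and row key are placeholders:

```
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Get
import org.apache.hadoop.hbase.util.Bytes

val config = HBaseConfiguration.create()
// Check a shared HConnection out of the static cache...
val hconnection = HConnectionStaticCache.getHConnection(config)
try {
  val table = hconnection.getTable("exampleTable")
  try {
    table.get(new Get(Bytes.toBytes("rowKey1")))
  } finally {
    table.close()
  }
} finally {
  // ...and always check it back in; the cache, not the caller,
  // is responsible for eventually closing the connection.
  HConnectionStaticCache.finishWithHConnection(config, hconnection)
}
```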

[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-08-01 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/1608#issuecomment-50910326
  
QA tests have started for PR 1608. This patch merges cleanly. View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17682/consoleFull


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-08-01 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/1608#issuecomment-50914144
  
QA tests have started for PR 1608. This patch merges cleanly. View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17684/consoleFull


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-08-01 Thread tmalaska
Github user tmalaska closed the pull request at:

https://github.com/apache/spark/pull/1608


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-08-01 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/1608#issuecomment-50915104
  
QA results for PR 1608:
- This patch FAILED unit tests.
- This patch merges cleanly
- This patch adds the following public classes (experimental):
  @serializable class HBaseContext(@transient sc: SparkContext,
  protected class hconnectionCleanerTask extends TimerTask {

For more information see test output:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17682/consoleFull


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-08-01 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/1608#issuecomment-50919337
  
QA results for PR 1608:
- This patch FAILED unit tests.
- This patch merges cleanly
- This patch adds the following public classes (experimental):
  @serializable class HBaseContext(@transient sc: SparkContext,
  protected class hconnectionCleanerTask extends TimerTask {

For more information see test output:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17684/consoleFull


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-07-30 Thread pwendell
Github user pwendell commented on the pull request:

https://github.com/apache/spark/pull/1608#issuecomment-50693942
  
Just FYI - there is already an outstanding patch and JIRA for HBase support 
on Spark:
https://github.com/apache/spark/pull/194
https://issues.apache.org/jira/browse/SPARK-1127


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-07-29 Thread rxin
Github user rxin commented on the pull request:

https://github.com/apache/spark/pull/1608#issuecomment-50513421
  
Does HBase have some sort of schema information? If yes, maybe we can add 
it as a data source in SchemaRDD?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-07-29 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/1608#issuecomment-50541560
  
QA tests have started for PR 1608. This patch merges cleanly. View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17382/consoleFull


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-07-29 Thread tdas
Github user tdas commented on a diff in the pull request:

https://github.com/apache/spark/pull/1608#discussion_r1354
  
--- Diff: 
external/hbase/src/main/scala/org/apache/spark/hbase/HBaseContext.scala ---
@@ -0,0 +1,544 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the License); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.hbase
+
+import org.apache.hadoop.hbase.HBaseConfiguration
--- End diff --

And also group them together like 
`import java.io.{ByteArrayInputStream, ByteArrayOutputStream, DataOutputStream}`


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-07-29 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/1608#issuecomment-50546377
  
QA results for PR 1608:
- This patch FAILED unit tests.
- This patch merges cleanly
- This patch adds the following public classes (experimental):
  @serializable class HBaseContext(@transient sc: SparkContext,
  protected class hconnectionCleanerTask extends TimerTask {

For more information see test output:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17382/consoleFull


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-07-29 Thread tdas
Github user tdas commented on a diff in the pull request:

https://github.com/apache/spark/pull/1608#discussion_r15557786
  
--- Diff: 
external/hbase/src/main/scala/org/apache/spark/hbase/HBaseContext.scala ---
@@ -0,0 +1,538 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the License); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.hbase
+
+import org.apache.hadoop.hbase.HBaseConfiguration
+import org.apache.spark.rdd.RDD
+import java.io.ByteArrayOutputStream
+import java.io.DataOutputStream
+import java.io.ByteArrayInputStream
+import java.io.DataInputStream
+import org.apache.hadoop.conf.Configuration
+import org.apache.hadoop.hbase.client.HConnectionManager
+import org.apache.spark.api.java.JavaPairRDD
+import java.io.OutputStream
+import org.apache.hadoop.hbase.client.HTable
+import org.apache.hadoop.hbase.client.Scan
+import org.apache.hadoop.hbase.client.Get
+import java.util.ArrayList
+import org.apache.hadoop.hbase.client.Result
+import scala.reflect.ClassTag
+import org.apache.hadoop.hbase.client.HConnection
+import org.apache.hadoop.hbase.client.Put
+import org.apache.hadoop.hbase.client.Increment
+import org.apache.hadoop.hbase.client.Delete
+import org.apache.spark.SparkContext
+import org.apache.hadoop.hbase.mapreduce.TableInputFormat
+import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable
+import org.apache.hadoop.hbase.util.Bytes
+import org.apache.hadoop.mapreduce.Job
+import org.apache.hadoop.hbase.mapreduce.TableMapper
+import org.apache.hadoop.hbase.mapreduce.IdentityTableMapper
+import org.apache.hadoop.hbase.util.Base64
+import org.apache.spark.rdd.HadoopRDD
+import org.apache.spark.broadcast.Broadcast
+import org.apache.spark.SerializableWritable
+import java.util.HashMap
+import java.util.concurrent.atomic.AtomicInteger
+import org.apache.hadoop.hbase.HConstants
+import java.util.concurrent.atomic.AtomicLong
+import java.util.Timer
+import java.util.TimerTask
+import org.apache.hadoop.hbase.client.Mutation
+import scala.collection.mutable.MutableList
+import org.apache.spark.streaming.dstream.DStream
+
+/**
+ * HBaseContext is a façade of simple and complex HBase operations
+ * like bulk put, get, increment, delete, and scan
+ *
+ * HBase Context will take the responsibilities to happen to
+ * complexity of disseminating the configuration information
+ * to the working and managing the life cycle of HConnections.
+ *
+ * First constructor:
+ *  @param sc  Active SparkContext
+ *  @param broadcastedConf This is a Broadcast object that holds a
+ * serializable Configuration object
+ *
+ */
+@serializable class HBaseContext(@transient sc: SparkContext,
+  broadcastedConf: Broadcast[SerializableWritable[Configuration]]) {
+
--- End diff --

Is there a need for this constructor? Better to do the following.

```
class HBaseContext(@transient sc: SparkContext, @transient config: Configuration)
  extends Serializable {
  val broadcastConf = sc.broadcast(new SerializableWritable(config))
}
```
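Either way, driver-side construction stays a one-liner. A minimal usage sketch of the (sc, config) form quoted above (assumed, not taken from the patch; the app name is a placeholder):

```
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("HBaseContextExample"))
val hbaseConf = HBaseConfiguration.create()
// The Configuration is broadcast once here, so executor-side closures only
// capture the lightweight Broadcast handle instead of the full config.
val hbaseContext = new HBaseContext(sc, hbaseConf)
```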


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-07-29 Thread tdas
Github user tdas commented on a diff in the pull request:

https://github.com/apache/spark/pull/1608#discussion_r15563085
  
--- Diff: 
external/hbase/src/main/scala/org/apache/spark/hbase/HBaseContext.scala ---
@@ -0,0 +1,538 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the License); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.hbase
+
+import org.apache.hadoop.hbase.HBaseConfiguration
+import org.apache.spark.rdd.RDD
+import java.io.ByteArrayOutputStream
+import java.io.DataOutputStream
+import java.io.ByteArrayInputStream
+import java.io.DataInputStream
+import org.apache.hadoop.conf.Configuration
+import org.apache.hadoop.hbase.client.HConnectionManager
+import org.apache.spark.api.java.JavaPairRDD
+import java.io.OutputStream
+import org.apache.hadoop.hbase.client.HTable
+import org.apache.hadoop.hbase.client.Scan
+import org.apache.hadoop.hbase.client.Get
+import java.util.ArrayList
+import org.apache.hadoop.hbase.client.Result
+import scala.reflect.ClassTag
+import org.apache.hadoop.hbase.client.HConnection
+import org.apache.hadoop.hbase.client.Put
+import org.apache.hadoop.hbase.client.Increment
+import org.apache.hadoop.hbase.client.Delete
+import org.apache.spark.SparkContext
+import org.apache.hadoop.hbase.mapreduce.TableInputFormat
+import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable
+import org.apache.hadoop.hbase.util.Bytes
+import org.apache.hadoop.mapreduce.Job
+import org.apache.hadoop.hbase.mapreduce.TableMapper
+import org.apache.hadoop.hbase.mapreduce.IdentityTableMapper
+import org.apache.hadoop.hbase.util.Base64
+import org.apache.spark.rdd.HadoopRDD
+import org.apache.spark.broadcast.Broadcast
+import org.apache.spark.SerializableWritable
+import java.util.HashMap
+import java.util.concurrent.atomic.AtomicInteger
+import org.apache.hadoop.hbase.HConstants
+import java.util.concurrent.atomic.AtomicLong
+import java.util.Timer
+import java.util.TimerTask
+import org.apache.hadoop.hbase.client.Mutation
+import scala.collection.mutable.MutableList
+import org.apache.spark.streaming.dstream.DStream
+
+/**
+ * HBaseContext is a façade for simple and complex HBase operations
+ * like bulk put, get, increment, delete, and scan.
+ *
+ * HBaseContext takes on the complexity of disseminating the
+ * configuration information to the workers and managing the
+ * life cycle of HConnections.
+ *
+ * First constructor:
+ *  @param sc  Active SparkContext
+ *  @param broadcastedConf This is a Broadcast object that holds a
+ * serializable Configuration object
+ *
+ */
+@serializable class HBaseContext(@transient sc: SparkContext,
+  broadcastedConf: Broadcast[SerializableWritable[Configuration]]) {
+
+  /**
+   * Second constructor option:
+   *  @param sc Active SparkContext
+   *  @param config Configuration object to make connection to HBase
+   */
+  def this(@transient sc: SparkContext, @transient config: Configuration) {
+this(sc, sc.broadcast(new SerializableWritable(config)))
+  }
+
+  /**
+   * A simple enrichment of the traditional Spark RDD foreachPartition.
--- End diff --

Please take a look at the Scala docs of other RDD transformations to 
understand the style. For example, a better way to explain this method would be 
to do the following. 
`Applies a function on each partition of the given RDD using an HBase 
connection`.
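
A sketch of how that might read for this method, following the RDD scaladoc style (the object, method name, and signature below are assumptions for illustration, not the PR's actual code):

```
import org.apache.hadoop.hbase.client.HConnection
import org.apache.spark.rdd.RDD

object ScaladocStyleSketch {
  /**
   * Applies a function on each partition of the given RDD using an HBase connection.
   *
   * @param rdd RDD whose partitions are processed
   * @param f   function given the partition iterator and an already-connected
   *            HConnection; the connection's life cycle is managed by the context
   */
  def foreachPartition[T](rdd: RDD[T], f: (Iterator[T], HConnection) => Unit): Unit =
    throw new UnsupportedOperationException("doc-style illustration only")
}
```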





[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-07-28 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/1608#issuecomment-50420751
  
QA results for PR 1608:
- This patch PASSES unit tests.
- This patch merges cleanly
- This patch adds the following public classes (experimental):
  @serializable class HBaseContext(@transient sc: SparkContext,
  protected class hconnectionCleanerTask extends TimerTask {

For more information see test output:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17316/consoleFull




[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-07-27 Thread srowen
Github user srowen commented on a diff in the pull request:

https://github.com/apache/spark/pull/1608#discussion_r15438411
  
--- Diff: external/hbase/pom.xml ---
@@ -0,0 +1,217 @@
+project xmlns=http://maven.apache.org/POM/4.0.0; 
xmlns:xsi=http://www.w3.org/2001/XMLSchema-instance;
+   xsi:schemaLocation=http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xsd/maven-4.0.0.xsd;
+  modelVersion4.0.0/modelVersion
+parent
+  groupIdorg.apache.spark/groupId
+  artifactIdspark-parent/artifactId
+  version1.1.0-SNAPSHOT/version
+  relativePath../../pom.xml/relativePath
+/parent
+
+groupIdorg.apache.spark/groupId
+artifactIdspark-hbase_2.10/artifactId
+properties
+   sbt.project.namespark-hbase/sbt.project.name
+/properties
+packagingjar/packaging
+nameSpark Project External Flume/name
+urlhttp://spark.apache.org//url
+
+   dependencies
+   dependency
+   groupIdorg.scalatest/groupId
+   
artifactIdscalatest_${scala.binary.version}/artifactId
+   version2.2.0/version
+   /dependency
+
+   dependency
+   groupIdcom.google.guava/groupId
+   artifactIdguava/artifactId
+   version14.0.1/version
+   /dependency
+   dependency
+   groupIdorg.apache.hadoop/groupId
+   artifactIdhadoop-client/artifactId
+   version2.3.0/version
--- End diff --

You shouldn't specify versions in the child POMs. In fact, you can't, 
since this one in particular has to be overridden by `hadoop.version`. You can 
remove all `version` elements in this file. There is already an 
`hbase.version` defined in the parent, which affects the code examples. You may 
also have to harmonize the examples with the newer HBase.




[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-07-27 Thread srowen
Github user srowen commented on a diff in the pull request:

https://github.com/apache/spark/pull/1608#discussion_r15438415
  
--- Diff: external/hbase/pom.xml ---
@@ -0,0 +1,217 @@
+project xmlns=http://maven.apache.org/POM/4.0.0; 
xmlns:xsi=http://www.w3.org/2001/XMLSchema-instance;
+   xsi:schemaLocation=http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xsd/maven-4.0.0.xsd;
+  modelVersion4.0.0/modelVersion
+parent
+  groupIdorg.apache.spark/groupId
+  artifactIdspark-parent/artifactId
+  version1.1.0-SNAPSHOT/version
+  relativePath../../pom.xml/relativePath
+/parent
+
+groupIdorg.apache.spark/groupId
+artifactIdspark-hbase_2.10/artifactId
+properties
+   sbt.project.namespark-hbase/sbt.project.name
+/properties
+packagingjar/packaging
+nameSpark Project External Flume/name
+urlhttp://spark.apache.org//url
+
+   dependencies
+   dependency
+   groupIdorg.scalatest/groupId
+   
artifactIdscalatest_${scala.binary.version}/artifactId
+   version2.2.0/version
+   /dependency
+
+   dependency
+   groupIdcom.google.guava/groupId
+   artifactIdguava/artifactId
+   version14.0.1/version
+   /dependency
+   dependency
+   groupIdorg.apache.hadoop/groupId
+   artifactIdhadoop-client/artifactId
+   version2.3.0/version
+   exclusions
--- End diff --

For the same reason, don't add exclusions here. They should be in the 
parent. There should only be exclusions if there is a specific reason that this 
component needs different dependencies in this module alone. The parent already 
gets the Hadoop exclusions right across a range of versions.




[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-07-27 Thread srowen
Github user srowen commented on a diff in the pull request:

https://github.com/apache/spark/pull/1608#discussion_r15438420
  
--- Diff: external/hbase/pom.xml ---
@@ -0,0 +1,217 @@
+project xmlns=http://maven.apache.org/POM/4.0.0; 
xmlns:xsi=http://www.w3.org/2001/XMLSchema-instance;
+   xsi:schemaLocation=http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xsd/maven-4.0.0.xsd;
+  modelVersion4.0.0/modelVersion
+parent
+  groupIdorg.apache.spark/groupId
+  artifactIdspark-parent/artifactId
+  version1.1.0-SNAPSHOT/version
+  relativePath../../pom.xml/relativePath
+/parent
+
+groupIdorg.apache.spark/groupId
+artifactIdspark-hbase_2.10/artifactId
+properties
+   sbt.project.namespark-hbase/sbt.project.name
+/properties
+packagingjar/packaging
+nameSpark Project External Flume/name
+urlhttp://spark.apache.org//url
+
+   dependencies
+   dependency
+   groupIdorg.scalatest/groupId
+   
artifactIdscalatest_${scala.binary.version}/artifactId
+   version2.2.0/version
+   /dependency
+
+   dependency
+   groupIdcom.google.guava/groupId
+   artifactIdguava/artifactId
+   version14.0.1/version
+   /dependency
+   dependency
+   groupIdorg.apache.hadoop/groupId
+   artifactIdhadoop-client/artifactId
+   version2.3.0/version
+   exclusions
+   exclusion
+   groupIdcom.google.guava/groupId
+   artifactIdguava/artifactId
+   /exclusion
+   exclusion
+   groupIdjavax.servlet/groupId
+   artifactIdservlet-api/artifactId
+   /exclusion
+   /exclusions
+
+   /dependency
+   dependency
+   groupIdorg.apache.hadoop/groupId
+   artifactIdhadoop-common/artifactId
+   version2.3.0/version
+   exclusions
+   exclusion
+   groupIdjavax.servlet/groupId
+   artifactIdservlet-api/artifactId
+   /exclusion
+   /exclusions
+   /dependency
+   dependency
+   groupIdorg.apache.hadoop/groupId
+   artifactIdhadoop-common/artifactId
+   version2.3.0/version
+   typetest-jar/type
+   classifiertests/classifier
+   exclusions
+   exclusion
+   groupIdjavax.servlet/groupId
+   artifactIdservlet-api/artifactId
+   /exclusion
+   /exclusions
+   /dependency
+   dependency
+   groupIdorg.apache.spark/groupId
+   
artifactIdspark-core_${scala.binary.version}/artifactId
+   version1.0.0/version
+   exclusions
+   exclusion
+   
groupIdorg.eclipse.jetty.orbit/groupId
+   artifactIdjavax.servlet/artifactId
+   /exclusion
+   /exclusions
+   /dependency
+   dependency
+   groupIdorg.apache.spark/groupId
+   
artifactIdspark-streaming_${scala.binary.version}/artifactId
+   version1.0.0/version
+   typetest-jar/type
+classifiertests/classifier
+   scopetest/scope
+   /dependency
+   dependency
+   groupIdorg.apache.spark/groupId
+   
artifactIdspark-streaming_${scala.binary.version}/artifactId
+   version1.0.0/version
+
+   /dependency
+   dependency
+   groupIdorg.apache.hbase/groupId
+   artifactIdhbase-client/artifactId
+   version0.98.1-hadoop2/version
+   exclusions
+   exclusion
+   groupIdio.netty/groupId
+   artifactIdnetty/artifactId
+   /exclusion
+  

[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-07-27 Thread srowen
Github user srowen commented on a diff in the pull request:

https://github.com/apache/spark/pull/1608#discussion_r15438418
  
--- Diff: external/hbase/pom.xml ---
@@ -0,0 +1,217 @@
+project xmlns=http://maven.apache.org/POM/4.0.0; 
xmlns:xsi=http://www.w3.org/2001/XMLSchema-instance;
+   xsi:schemaLocation=http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xsd/maven-4.0.0.xsd;
+  modelVersion4.0.0/modelVersion
+parent
+  groupIdorg.apache.spark/groupId
+  artifactIdspark-parent/artifactId
+  version1.1.0-SNAPSHOT/version
+  relativePath../../pom.xml/relativePath
+/parent
+
+groupIdorg.apache.spark/groupId
+artifactIdspark-hbase_2.10/artifactId
+properties
+   sbt.project.namespark-hbase/sbt.project.name
+/properties
+packagingjar/packaging
+nameSpark Project External Flume/name
+urlhttp://spark.apache.org//url
+
+   dependencies
+   dependency
+   groupIdorg.scalatest/groupId
+   
artifactIdscalatest_${scala.binary.version}/artifactId
+   version2.2.0/version
+   /dependency
+
+   dependency
+   groupIdcom.google.guava/groupId
+   artifactIdguava/artifactId
+   version14.0.1/version
+   /dependency
+   dependency
+   groupIdorg.apache.hadoop/groupId
+   artifactIdhadoop-client/artifactId
+   version2.3.0/version
+   exclusions
+   exclusion
+   groupIdcom.google.guava/groupId
+   artifactIdguava/artifactId
+   /exclusion
+   exclusion
+   groupIdjavax.servlet/groupId
+   artifactIdservlet-api/artifactId
+   /exclusion
+   /exclusions
+
+   /dependency
+   dependency
+   groupIdorg.apache.hadoop/groupId
+   artifactIdhadoop-common/artifactId
+   version2.3.0/version
+   exclusions
+   exclusion
+   groupIdjavax.servlet/groupId
+   artifactIdservlet-api/artifactId
+   /exclusion
+   /exclusions
+   /dependency
+   dependency
+   groupIdorg.apache.hadoop/groupId
+   artifactIdhadoop-common/artifactId
+   version2.3.0/version
+   typetest-jar/type
+   classifiertests/classifier
+   exclusions
+   exclusion
+   groupIdjavax.servlet/groupId
+   artifactIdservlet-api/artifactId
+   /exclusion
+   /exclusions
+   /dependency
+   dependency
+   groupIdorg.apache.spark/groupId
+   
artifactIdspark-core_${scala.binary.version}/artifactId
+   version1.0.0/version
--- End diff --

${project.version}, not hard-coded to 1.0.0




[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-07-27 Thread srowen
Github user srowen commented on a diff in the pull request:

https://github.com/apache/spark/pull/1608#discussion_r15438426
  
--- Diff: external/hbase/pom.xml ---
@@ -0,0 +1,217 @@
+project xmlns=http://maven.apache.org/POM/4.0.0; 
xmlns:xsi=http://www.w3.org/2001/XMLSchema-instance;
+   xsi:schemaLocation=http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xsd/maven-4.0.0.xsd;
+  modelVersion4.0.0/modelVersion
+parent
+  groupIdorg.apache.spark/groupId
+  artifactIdspark-parent/artifactId
+  version1.1.0-SNAPSHOT/version
+  relativePath../../pom.xml/relativePath
+/parent
+
+groupIdorg.apache.spark/groupId
+artifactIdspark-hbase_2.10/artifactId
+properties
+   sbt.project.namespark-hbase/sbt.project.name
+/properties
+packagingjar/packaging
+nameSpark Project External Flume/name
+urlhttp://spark.apache.org//url
+
+   dependencies
+   dependency
+   groupIdorg.scalatest/groupId
+   
artifactIdscalatest_${scala.binary.version}/artifactId
+   version2.2.0/version
+   /dependency
+
+   dependency
+   groupIdcom.google.guava/groupId
+   artifactIdguava/artifactId
+   version14.0.1/version
+   /dependency
+   dependency
+   groupIdorg.apache.hadoop/groupId
+   artifactIdhadoop-client/artifactId
+   version2.3.0/version
+   exclusions
+   exclusion
+   groupIdcom.google.guava/groupId
+   artifactIdguava/artifactId
+   /exclusion
+   exclusion
+   groupIdjavax.servlet/groupId
+   artifactIdservlet-api/artifactId
+   /exclusion
+   /exclusions
+
+   /dependency
+   dependency
+   groupIdorg.apache.hadoop/groupId
+   artifactIdhadoop-common/artifactId
+   version2.3.0/version
+   exclusions
+   exclusion
+   groupIdjavax.servlet/groupId
+   artifactIdservlet-api/artifactId
+   /exclusion
+   /exclusions
+   /dependency
+   dependency
+   groupIdorg.apache.hadoop/groupId
+   artifactIdhadoop-common/artifactId
+   version2.3.0/version
+   typetest-jar/type
+   classifiertests/classifier
+   exclusions
+   exclusion
+   groupIdjavax.servlet/groupId
+   artifactIdservlet-api/artifactId
+   /exclusion
+   /exclusions
+   /dependency
+   dependency
+   groupIdorg.apache.spark/groupId
+   
artifactIdspark-core_${scala.binary.version}/artifactId
+   version1.0.0/version
+   exclusions
+   exclusion
+   
groupIdorg.eclipse.jetty.orbit/groupId
+   artifactIdjavax.servlet/artifactId
+   /exclusion
+   /exclusions
+   /dependency
+   dependency
+   groupIdorg.apache.spark/groupId
+   
artifactIdspark-streaming_${scala.binary.version}/artifactId
+   version1.0.0/version
+   typetest-jar/type
+classifiertests/classifier
+   scopetest/scope
+   /dependency
+   dependency
+   groupIdorg.apache.spark/groupId
+   
artifactIdspark-streaming_${scala.binary.version}/artifactId
+   version1.0.0/version
+
+   /dependency
+   dependency
+   groupIdorg.apache.hbase/groupId
+   artifactIdhbase-client/artifactId
+   version0.98.1-hadoop2/version
+   exclusions
+   exclusion
+   groupIdio.netty/groupId
+   artifactIdnetty/artifactId
+   /exclusion
+  

[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-07-27 Thread srowen
Github user srowen commented on a diff in the pull request:

https://github.com/apache/spark/pull/1608#discussion_r15438429
  
--- Diff: 
external/hbase/src/main/scala/org/apache/spark/hbase/HBaseContext.scala ---
@@ -0,0 +1,544 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the License); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.hbase
+
+import org.apache.hadoop.hbase.HBaseConfiguration
--- End diff --

I'd sort and separate these imports like you can see in other source files.




[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-07-27 Thread srowen
Github user srowen commented on a diff in the pull request:

https://github.com/apache/spark/pull/1608#discussion_r15438438
  
--- Diff: 
external/hbase/src/main/scala/org/apache/spark/hbase/HBaseContext.scala ---
@@ -0,0 +1,544 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the License); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.hbase
+
+import org.apache.hadoop.hbase.HBaseConfiguration
+import org.apache.spark.rdd.RDD
+import java.io.ByteArrayOutputStream
+import java.io.DataOutputStream
+import java.io.ByteArrayInputStream
+import java.io.DataInputStream
+import org.apache.hadoop.conf.Configuration
+import org.apache.hadoop.hbase.client.HConnectionManager
+import org.apache.spark.api.java.JavaPairRDD
+import java.io.OutputStream
+import org.apache.hadoop.hbase.client.HTable
+import org.apache.hadoop.hbase.client.Scan
+import org.apache.hadoop.hbase.client.Get
+import java.util.ArrayList
+import org.apache.hadoop.hbase.client.Result
+import scala.reflect.ClassTag
+import org.apache.hadoop.hbase.client.HConnection
+import org.apache.hadoop.hbase.client.Put
+import org.apache.hadoop.hbase.client.Increment
+import org.apache.hadoop.hbase.client.Delete
+import org.apache.spark.SparkContext
+import org.apache.hadoop.hbase.mapreduce.TableInputFormat
+import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable
+import org.apache.hadoop.hbase.util.Bytes
+import org.apache.hadoop.mapreduce.Job
+import org.apache.hadoop.hbase.mapreduce.TableMapper
+import org.apache.hadoop.hbase.mapreduce.IdentityTableMapper
+import org.apache.hadoop.hbase.protobuf.ProtobufUtil
+import org.apache.hadoop.hbase.util.Base64
+import org.apache.hadoop.hbase.mapreduce.MutationSerialization
+import org.apache.hadoop.hbase.mapreduce.ResultSerialization
+import org.apache.hadoop.hbase.mapreduce.KeyValueSerialization
+import org.apache.spark.rdd.HadoopRDD
+import org.apache.spark.broadcast.Broadcast
+import org.apache.spark.SerializableWritable
+import java.util.HashMap
+import org.apache.hadoop.hbase.protobuf.generated.MasterProtos
+import org.apache.hadoop.hbase.protobuf.generated.MasterProtos
+import java.util.concurrent.atomic.AtomicInteger
+import org.apache.hadoop.hbase.HConstants
+import java.util.concurrent.atomic.AtomicLong
+import java.util.Timer
+import java.util.TimerTask
+import org.apache.hadoop.hbase.client.Mutation
+import scala.collection.mutable.MutableList
+import org.apache.spark.streaming.dstream.DStream
+
+/**
+ * HBaseContext is a façade for simple and complex HBase operations
+ * like bulk put, get, increment, delete, and scan.
+ *
+ * HBaseContext takes on the complexity of disseminating the
+ * configuration information to the workers and managing the
+ * life cycle of HConnections.
+ *
+ * First constructor:
+ *  @param sc - active SparkContext
+ *  @param broadcastedConf - This is a Broadcast object that holds a
+ * serializable Configuration object
+ *
+ */
+@serializable class HBaseContext(@transient sc: SparkContext,
+  broadcastedConf: Broadcast[SerializableWritable[Configuration]]) {
+
+  /**
+   * Second constructor option:
+   *  @param sc active SparkContext
+   *  @param config Configuration object to make connection to HBase
+   */
+  def this(@transient sc: SparkContext, @transient config: Configuration) {
+this(sc, sc.broadcast(new SerializableWritable(config)))
+  }
+
+  /**
+   * A simple enrichment of the traditional Spark RDD foreachPartition.
+   * This function differs from the original in that it offers the 
+   * developer access to an already connected HConnection object
+   * 
+   * Note: Do not close the HConnection object.  All HConnection
+   * management is handled outside this method
+   * 
+   * @param RDD[t]  Original RDD with data to iterate over

[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-07-27 Thread srowen
Github user srowen commented on a diff in the pull request:

https://github.com/apache/spark/pull/1608#discussion_r15438442
  
--- Diff: 
external/hbase/src/main/scala/org/apache/spark/hbase/HBaseContext.scala ---
@@ -0,0 +1,544 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the License); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.hbase
+
+import org.apache.hadoop.hbase.HBaseConfiguration
+import org.apache.spark.rdd.RDD
+import java.io.ByteArrayOutputStream
+import java.io.DataOutputStream
+import java.io.ByteArrayInputStream
+import java.io.DataInputStream
+import org.apache.hadoop.conf.Configuration
+import org.apache.hadoop.hbase.client.HConnectionManager
+import org.apache.spark.api.java.JavaPairRDD
+import java.io.OutputStream
+import org.apache.hadoop.hbase.client.HTable
+import org.apache.hadoop.hbase.client.Scan
+import org.apache.hadoop.hbase.client.Get
+import java.util.ArrayList
+import org.apache.hadoop.hbase.client.Result
+import scala.reflect.ClassTag
+import org.apache.hadoop.hbase.client.HConnection
+import org.apache.hadoop.hbase.client.Put
+import org.apache.hadoop.hbase.client.Increment
+import org.apache.hadoop.hbase.client.Delete
+import org.apache.spark.SparkContext
+import org.apache.hadoop.hbase.mapreduce.TableInputFormat
+import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable
+import org.apache.hadoop.hbase.util.Bytes
+import org.apache.hadoop.mapreduce.Job
+import org.apache.hadoop.hbase.mapreduce.TableMapper
+import org.apache.hadoop.hbase.mapreduce.IdentityTableMapper
+import org.apache.hadoop.hbase.protobuf.ProtobufUtil
+import org.apache.hadoop.hbase.util.Base64
+import org.apache.hadoop.hbase.mapreduce.MutationSerialization
+import org.apache.hadoop.hbase.mapreduce.ResultSerialization
+import org.apache.hadoop.hbase.mapreduce.KeyValueSerialization
+import org.apache.spark.rdd.HadoopRDD
+import org.apache.spark.broadcast.Broadcast
+import org.apache.spark.SerializableWritable
+import java.util.HashMap
+import org.apache.hadoop.hbase.protobuf.generated.MasterProtos
+import org.apache.hadoop.hbase.protobuf.generated.MasterProtos
+import java.util.concurrent.atomic.AtomicInteger
+import org.apache.hadoop.hbase.HConstants
+import java.util.concurrent.atomic.AtomicLong
+import java.util.Timer
+import java.util.TimerTask
+import org.apache.hadoop.hbase.client.Mutation
+import scala.collection.mutable.MutableList
+import org.apache.spark.streaming.dstream.DStream
+
+/**
+ * HBaseContext is a façade for simple and complex HBase operations
+ * like bulk put, get, increment, delete, and scan.
+ *
+ * HBaseContext takes on the complexity of disseminating the
+ * configuration information to the workers and managing the
+ * life cycle of HConnections.
+ *
+ * First constructor:
+ *  @param sc - active SparkContext
+ *  @param broadcastedConf - This is a Broadcast object that holds a
+ * serializable Configuration object
+ *
+ */
+@serializable class HBaseContext(@transient sc: SparkContext,
+  broadcastedConf: Broadcast[SerializableWritable[Configuration]]) {
+
+  /**
+   * Second constructor option:
+   *  @param sc active SparkContext
+   *  @param config Configuration object to make connection to HBase
+   */
+  def this(@transient sc: SparkContext, @transient config: Configuration) {
+this(sc, sc.broadcast(new SerializableWritable(config)))
+  }
+
+  /**
+   * A simple enrichment of the traditional Spark RDD foreachPartition.
+   * This function differs from the original in that it offers the 
+   * developer access to an already connected HConnection object
+   * 
+   * Note: Do not close the HConnection object.  All HConnection
+   * management is handled outside this method
+   * 
+   * @param RDD[t]  Original RDD with data to iterate over

[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-07-27 Thread srowen
Github user srowen commented on a diff in the pull request:

https://github.com/apache/spark/pull/1608#discussion_r15438453
  
--- Diff: 
external/hbase/src/main/scala/org/apache/spark/hbase/HConnectionStaticCache.scala
 ---
@@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the License); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.hbase
+
+import java.util.HashMap
+import org.apache.hadoop.hbase.client.HConnection
+import java.util.concurrent.atomic.AtomicInteger
+import java.util.concurrent.atomic.AtomicLong
+import java.util.Timer
+import org.apache.hadoop.conf.Configuration
+import org.apache.hadoop.hbase.HConstants
+import org.apache.hadoop.hbase.client.HConnectionManager
+import java.util.TimerTask
+import scala.collection.mutable.MutableList
+import org.apache.spark.Logging
+
+/**
+ * A static caching class that will manage all HConnections in a worker
+ * 
+ * The main idea is that there is a HashMap keyed by
+ * HConstants.HBASE_CLIENT_INSTANCE_ID (hbase.client.instance.id)
+ * 
+ * In that HashMap there are three things
+ *   - HConnection
+ *   - Number of checked out users of the HConnection
+ *   - Time since the HConnection was last used
+ *   
+ * There is also a Timer thread that will start up every 2 minutes
+ * When the Timer thread starts up it will look for HConnections with no
+ * checked out users and a last used time that is older than 1 minute.
+ * 
+ * This class is not intended to be used by users
+ */
+object HConnectionStaticCache extends Logging{
+  @transient private val hconnectionMap = 
+new HashMap[String, (HConnection, AtomicInteger, AtomicLong)]
+
+  @transient private val hconnectionTimeout = 6
+
+  @transient private val hconnectionCleaner = new Timer
+
+  hconnectionCleaner.schedule(new hconnectionCleanerTask, 
hconnectionTimeout * 2)
+
+  /**
+   * Gets or starts a HConnection based on a config object
+   */
+  def getHConnection(config: Configuration): HConnection = {
+val instanceId = config.get(HConstants.HBASE_CLIENT_INSTANCE_ID)
+var hconnectionAndCounter = hconnectionMap.get(instanceId)
--- End diff --

`hconnectionMap` is accessed without synchronization here, when it may be 
being mutated. This could fail. Is this trying to implement double-checked 
locking? How about just a synchronized Map implementation and a call to 
`getOrElse`?
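
A minimal sketch of that direction, keeping the value shape from the diff above (the object name, helper, and locking style are assumptions, not the PR's code):

```
import java.util.concurrent.atomic.{AtomicInteger, AtomicLong}

import scala.collection.mutable

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hbase.HConstants
import org.apache.hadoop.hbase.client.{HConnection, HConnectionManager}

object SynchronizedCacheSketch {
  // Same value shape as the diff: (connection, checked-out count, last-used time)
  private val connections =
    mutable.HashMap[String, (HConnection, AtomicInteger, AtomicLong)]()

  def getHConnection(config: Configuration): HConnection = {
    val instanceId = config.get(HConstants.HBASE_CLIENT_INSTANCE_ID)
    // A single lock around get-or-create avoids the unsynchronized read in the diff
    val (connection, users, lastUsed) = connections.synchronized {
      connections.getOrElseUpdate(instanceId,
        (HConnectionManager.createConnection(config),
          new AtomicInteger(0), new AtomicLong(System.currentTimeMillis())))
    }
    users.incrementAndGet()
    lastUsed.set(System.currentTimeMillis())
    connection
  }
}
```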




[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-07-27 Thread srowen
Github user srowen commented on a diff in the pull request:

https://github.com/apache/spark/pull/1608#discussion_r15438461
  
--- Diff: 
external/hbase/src/main/scala/org/apache/spark/hbase/HConnectionStaticCache.scala
 ---
@@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the License); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.hbase
+
+import java.util.HashMap
+import org.apache.hadoop.hbase.client.HConnection
+import java.util.concurrent.atomic.AtomicInteger
+import java.util.concurrent.atomic.AtomicLong
+import java.util.Timer
+import org.apache.hadoop.conf.Configuration
+import org.apache.hadoop.hbase.HConstants
+import org.apache.hadoop.hbase.client.HConnectionManager
+import java.util.TimerTask
+import scala.collection.mutable.MutableList
+import org.apache.spark.Logging
+
+/**
+ * A static caching class that will manage all HConnections in a worker
+ * 
+ * The main idea is that there is a HashMap keyed by
+ * HConstants.HBASE_CLIENT_INSTANCE_ID (hbase.client.instance.id)
+ * 
+ * In that HashMap there are three things
+ *   - HConnection
+ *   - Number of checked out users of the HConnection
+ *   - Time since the HConnection was last used
+ *   
+ * There is also a Timer thread that will start up every 2 minutes
+ * When the Timer thread starts up it will look for HConnections with no
+ * checked out users and a last used time that is older than 1 minute.
+ * 
+ * This class is not intended to be used by users
+ */
+object HConnectionStaticCache extends Logging{
+  @transient private val hconnectionMap = 
+new HashMap[String, (HConnection, AtomicInteger, AtomicLong)]
--- End diff --

This is using a lot of the Java collections API instead of the Scala one. I 
think the latter would be more standard. In fact, it would help in a number of 
cases, like the one in the next comment.
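
For example, the cleaner pass over the cache becomes a filter over a Scala map rather than a Java iterator feeding a MutableList. The names below are illustrative assumptions, not the PR's actual cleaner task:

```
import java.util.concurrent.atomic.{AtomicInteger, AtomicLong}

import scala.collection.mutable

import org.apache.hadoop.hbase.client.HConnection

object ScalaCollectionsSketch {
  private val connections =
    mutable.HashMap[String, (HConnection, AtomicInteger, AtomicLong)]()

  /** Close and drop connections with no users that have been idle for over a minute. */
  def sweepIdle(now: Long = System.currentTimeMillis()): Unit = connections.synchronized {
    val stale = connections.collect {
      case (id, (conn, users, lastUsed))
        if users.get() == 0 && now - lastUsed.get() > 60000 => (id, conn)
    }
    stale.foreach { case (id, conn) =>
      conn.close()
      connections.remove(id)
    }
  }
}
```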




[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-07-27 Thread srowen
Github user srowen commented on a diff in the pull request:

https://github.com/apache/spark/pull/1608#discussion_r15438469
  
--- Diff: 
external/hbase/src/main/scala/org/apache/spark/hbase/example/HBaseStreamingBulkPutExample.scala
 ---
@@ -0,0 +1,83 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the License); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.hbase.example
+
+import org.apache.spark.SparkContext
+import org.apache.hadoop.hbase.HBaseConfiguration
+import org.apache.hadoop.fs.Path
+import org.apache.hadoop.hbase.util.Bytes
+import org.apache.hadoop.hbase.client.Put
+import org.apache.spark.hbase.HBaseContext
+import org.apache.spark.streaming.StreamingContext
+import org.apache.spark.streaming.Seconds
+import org.apache.spark.SparkConf
+
+object HBaseStreamingBulkPutExample {
+  def main(args: Array[String]) {
+if (args.length == 0) {
+System.out.println("HBaseStreamingBulkPutExample {master} {host} {port} {tableName} {columnFamily}");
+return;
+  }
+  
+  val master = args(0);
+  val host = args(1);
+  val port = args(2);
+  val tableName = args(3);
+  val columnFamily = args(4);
+  
+  System.out.println("master:" + master)
+  System.out.println("host:" + host)
+  System.out.println("port:" + Integer.parseInt(port))
--- End diff --

Another example: in Scala this is just `port.toInt`
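
In the same spirit, the whole argument block can be collapsed. A sketch assuming exactly five arguments (it throws a MatchError otherwise), not the PR's code:

```
// Hypothetical rewrite of the argument handling above
val Array(master, host, port, tableName, columnFamily) = args
println(s"master: $master")
println(s"host:   $host")
println(s"port:   ${port.toInt}")
```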




[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-07-27 Thread srowen
Github user srowen commented on a diff in the pull request:

https://github.com/apache/spark/pull/1608#discussion_r15438472
  
--- Diff: 
external/hbase/src/test/scala/org/apache/spark/hbase/HBaseContextSuite.scala ---
@@ -0,0 +1,296 @@
+package org.apache.spark.hbase
--- End diff --

Missed a copyright header




[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-07-27 Thread srowen
Github user srowen commented on a diff in the pull request:

https://github.com/apache/spark/pull/1608#discussion_r15438477
  
--- Diff: 
external/hbase/src/test/scala/org/apache/spark/hbase/LocalSparkContext.scala ---
@@ -0,0 +1,66 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the License); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.hbase
+
+import _root_.io.netty.util.internal.logging.{Slf4JLoggerFactory, 
InternalLoggerFactory}
+import org.scalatest.BeforeAndAfterAll
+import org.scalatest.BeforeAndAfterEach
+import org.scalatest.Suite
+import org.apache.spark.SparkContext
+
+/** Manages a local `sc` {@link SparkContext} variable, correctly stopping 
it after each test. */
+trait LocalSparkContext extends BeforeAndAfterEach with BeforeAndAfterAll 
{ self: Suite =
+
+  @transient var sc: SparkContext = _
+
+  override def beforeAll() {
+InternalLoggerFactory.setDefaultFactory(new Slf4JLoggerFactory())
--- End diff --

Curious what this is about?




[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-07-26 Thread tmalaska
GitHub user tmalaska opened a pull request:

https://github.com/apache/spark/pull/1608

Spark-2447 : Spark on HBase

Add common solution for sending upsert actions to HBase (put, deletes,
and increment)

This is the first pull request: mainly to test the review process, but 
there are still a number of things that I plan to add this week.

1. Clean up the pom file
2. Add unit tests for the HConnectionStaticCache

If I have time I will also add the following:
1. Support for Java
2. Additional unit tests for Java
3. Additional unit tests for Spark Streaming

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tmalaska/spark master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/1608.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1608


commit 6d9c733d4f177292cfc2fda15a6059660bd500f3
Author: tmalaska ted.mala...@cloudera.com
Date:   2014-07-27T03:17:06Z

Spark-2447 : Spark on HBase

Add common solution for sending upsert actions to HBase (put, deletes,
and increment)






[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-07-26 Thread witgo
Github user witgo commented on a diff in the pull request:

https://github.com/apache/spark/pull/1608#discussion_r15437166
  
--- Diff: external/hbase/pom.xml ---
@@ -0,0 +1,217 @@
+project xmlns=http://maven.apache.org/POM/4.0.0; 
xmlns:xsi=http://www.w3.org/2001/XMLSchema-instance;
+   xsi:schemaLocation=http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xsd/maven-4.0.0.xsd;
+  modelVersion4.0.0/modelVersion
+parent
+  groupIdorg.apache.spark/groupId
+  artifactIdspark-parent/artifactId
+  version1.1.0-SNAPSHOT/version
+  relativePath../../pom.xml/relativePath
+/parent
+
+groupIdorg.apache.spark/groupId
+artifactIdspark-hbase_2.10/artifactId
+properties
+   sbt.project.namespark-hbase/sbt.project.name
+/properties
+packagingjar/packaging
+nameSpark Project External Flume/name
+urlhttp://spark.apache.org//url
+
+   dependencies
+   dependency
+   groupIdorg.scalatest/groupId
+   
artifactIdscalatest_${scala.binary.version}/artifactId
+   version2.2.0/version
+   /dependency
+
+   dependency
+   groupIdcom.google.guava/groupId
--- End diff --

Two spaces for indentation




[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-07-26 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/1608#issuecomment-50254637
  
QA results for PR 1608:
- This patch FAILED unit tests.
- This patch merges cleanly
- This patch adds the following public classes (experimental):
  @serializable class HBaseContext(@transient sc: SparkContext,
  protected class hconnectionCleanerTask extends TimerTask {

For more information see test output:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17237/consoleFull




[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-07-26 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/1608#issuecomment-50254635
  
QA tests have started for PR 1608. This patch merges cleanly.
View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17237/consoleFull




[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-07-26 Thread witgo
Github user witgo commented on a diff in the pull request:

https://github.com/apache/spark/pull/1608#discussion_r15437173
  
--- Diff: external/hbase/pom.xml ---
@@ -0,0 +1,217 @@
+project xmlns=http://maven.apache.org/POM/4.0.0; 
xmlns:xsi=http://www.w3.org/2001/XMLSchema-instance;
+   xsi:schemaLocation=http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xsd/maven-4.0.0.xsd;
+  modelVersion4.0.0/modelVersion
+parent
+  groupIdorg.apache.spark/groupId
+  artifactIdspark-parent/artifactId
+  version1.1.0-SNAPSHOT/version
+  relativePath../../pom.xml/relativePath
+/parent
+
+groupIdorg.apache.spark/groupId
+artifactIdspark-hbase_2.10/artifactId
+properties
+   sbt.project.namespark-hbase/sbt.project.name
+/properties
+packagingjar/packaging
+nameSpark Project External Flume/name
+urlhttp://spark.apache.org//url
+
+   dependencies
+   dependency
+   groupIdorg.scalatest/groupId
+   
artifactIdscalatest_${scala.binary.version}/artifactId
+   version2.2.0/version
+   /dependency
+
+   dependency
+   groupIdcom.google.guava/groupId
+   artifactIdguava/artifactId
+   version14.0.1/version
+   /dependency
+   dependency
+   groupIdorg.apache.hadoop/groupId
+   artifactIdhadoop-client/artifactId
+   version2.3.0/version
+   exclusions
+   exclusion
+   groupIdcom.google.guava/groupId
+   artifactIdguava/artifactId
+   /exclusion
+   exclusion
+   groupIdjavax.servlet/groupId
+   artifactIdservlet-api/artifactId
+   /exclusion
+   /exclusions
+
+   /dependency
+   dependency
+   groupIdorg.apache.hadoop/groupId
+   artifactIdhadoop-common/artifactId
+   version2.3.0/version
+   exclusions
+   exclusion
+   groupIdjavax.servlet/groupId
+   artifactIdservlet-api/artifactId
+   /exclusion
+   /exclusions
+   /dependency
+   dependency
+   groupIdorg.apache.hadoop/groupId
+   artifactIdhadoop-common/artifactId
+   version2.3.0/version
+   typetest-jar/type
+   classifiertests/classifier
+   exclusions
+   exclusion
+   groupIdjavax.servlet/groupId
+   artifactIdservlet-api/artifactId
+   /exclusion
+   /exclusions
+   /dependency
+   dependency
+   groupIdorg.apache.spark/groupId
+   
artifactIdspark-core_${scala.binary.version}/artifactId
+   version1.0.0/version
+   exclusions
+   exclusion
+   
groupIdorg.eclipse.jetty.orbit/groupId
+   artifactIdjavax.servlet/artifactId
+   /exclusion
+   /exclusions
+   /dependency
+   dependency
+   groupIdorg.apache.spark/groupId
+   
artifactIdspark-streaming_${scala.binary.version}/artifactId
+   version1.0.0/version
+   typetest-jar/type
+classifiertests/classifier
+   scopetest/scope
+   /dependency
+   dependency
+   groupIdorg.apache.spark/groupId
+   
artifactIdspark-streaming_${scala.binary.version}/artifactId
+   version1.0.0/version
+
+   /dependency
+   dependency
+   groupIdorg.apache.hbase/groupId
+   artifactIdhbase-client/artifactId
+   version0.98.1-hadoop2/version
+   exclusions
+   exclusion
+   groupIdio.netty/groupId
+   artifactIdnetty/artifactId
+   /exclusion
+   

[GitHub] spark pull request: Spark-2447 : Spark on HBase

2014-07-26 Thread witgo
Github user witgo commented on a diff in the pull request:

https://github.com/apache/spark/pull/1608#discussion_r15437185
  
--- Diff: external/hbase/pom.xml ---
@@ -0,0 +1,217 @@
+project xmlns=http://maven.apache.org/POM/4.0.0; 
xmlns:xsi=http://www.w3.org/2001/XMLSchema-instance;
--- End diff --

Add Apache license headers

