Re: Spark + Zeppelin on EC2

2016-03-25 Thread moon soo Lee
Hi,

Spark workers will open connections back to the Spark driver (SparkContext),
which is running on the Zeppelin instance. So make sure your network
configuration (firewall, routing tables, etc.) allows the workers to connect
to the Zeppelin instance.
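For example, you can pin the driver to a fixed address and port and open
them in your EC2 security group. A minimal zeppelin-env.sh sketch (the
master address, driver IP, and port below are placeholders; adjust them to
your setup):

    # zeppelin-env.sh -- illustrative values only
    export MASTER=spark://ec2-master.internal:7077   # the new EC2 Spark master
    # Bind the driver to an address the workers can route to, and fix the
    # driver port so it can be whitelisted in the security group:
    export ZEPPELIN_JAVA_OPTS="-Dspark.driver.host=172.31.41.186 -Dspark.driver.port=51000"

Note that spark.driver.port is chosen randomly unless set; whatever it is,
the workers must be able to open a TCP connection to it on the Zeppelin host.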

Could you verify that the spark-shell command works, not on the master node,
but on the same node where Zeppelin is running? If that works, Zeppelin
should work, too.
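For example, something like this from the Zeppelin machine (the master
address is a placeholder):

    # Run on the Zeppelin host, not the master, against the EC2 master:
    ${SPARK_HOME}/bin/spark-shell --master spark://<ec2-master>:7077
    # Inside the shell, a trivial job exercises the worker -> driver path:
    # scala> sc.parallelize(1 to 1000).count()

If the count() hangs or fails here in the same way, the problem is the
network path rather than Zeppelin itself.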

Thanks,
moon

On Fri, Mar 25, 2016 at 3:31 PM Chris Miller wrote:

> Curious about this too... I'll be moving Zeppelin off to its own box in
> the near future. If you figure this out, post your resolution here.
>
> --
> Chris Miller
>
> On Sat, Mar 26, 2016 at 12:54 AM, Marcin Pilarczyk <
> marcin.pilarc...@interia.pl> wrote:
>
>> Guys,
>>
>> I'm trying to switch my Zeppelin instance (0.6 snapshot) from the Spark
>> instance installed on the very same machine to a Spark cluster created on
>> EC2. Both versions of Spark are 1.5.2.
>>
>> I've just created a test instance in EC2; I can submit jobs and use the
>> Spark shell. I have reviewed the logs: each and every worker is up and
>> running, and the master is alive. So far so good.
>>
>> The next step is to point Zeppelin at the newly created Spark cluster. I'm
>> changing two places: zeppelin-env.sh and the URL in the interpreter
>> settings. I'm SURE these settings point at the new instance.
>>
>> Next step: I'm stopping the Spark instance installed together with
>> Zeppelin.
>>
>> Final step: Zeppelin is restarted and the settings are checked. Somehow no
>> paragraph that requires computation can be completed. The master logs are
>> ok; in the slave log, however, I can find the following error:
>>
>> [stack trace snipped; the full trace appears in Marcin's original message
>> below]

Re: Importing HBase data

2016-03-25 Thread Randy Gelhausen
You can put a Phoenix view on top of your existing HBase table, then use
Phoenix's Spark module to read the table into a DataFrame.
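Roughly like this, as a sketch -- it assumes Spark 1.x with the phoenix-spark
artifact on the classpath, and the table, column, and ZooKeeper quorum names
below are placeholders:

    // Phoenix side (e.g. via sqlline): expose the existing HBase table:
    //   CREATE VIEW "my_hbase_table" (pk VARCHAR PRIMARY KEY, "cf"."col1" VARCHAR);

    // Spark side: load the view into a DataFrame through phoenix-spark:
    val df = sqlContext.read
      .format("org.apache.phoenix.spark")
      .options(Map("table" -> "my_hbase_table", "zkUrl" -> "zk-host:2181"))
      .load()

    df.registerTempTable("my_hbase_table")  // now queryable from %sql paragraphs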

On Fri, Mar 25, 2016 at 12:44 PM, Benjamin Kim wrote:

> The hbase-spark module is still a work in progress in terms of Spark SQL.
> All the RDD methods are complete and ready to use against the current
> version of HBase 1.0+, but the use of DataFrames will require the
> unreleased version of HBase 2.0. Fortunately, there is work in progress to
> back-port the hbase-spark module to not have these deep-rooted dependencies
> on HBase 2.0 (HBASE-14160). For more information on this, you can refer
> to
> http://blog.cloudera.com/blog/2015/08/apache-spark-comes-to-apache-hbase-with-hbase-spark-module/
> to see what they are trying to accomplish.
>
> On Mar 25, 2016, at 9:17 AM, Silvio Fiorito wrote:
>
> There’s also this, which seems more current:
> https://github.com/apache/hbase/tree/master/hbase-spark
>
> I haven’t used it, but I know Ted Malaska and others from Cloudera have
> worked heavily on it.
>
> From: Felix Cheung <felixcheun...@hotmail.com>
> Reply-To: "users@zeppelin.incubator.apache.org"
> Date: Friday, March 25, 2016 at 12:01 PM
> To: "users@zeppelin.incubator.apache.org"
> Subject: Re: Importing HBase data
>
> You should be able to access that from Spark SQL through a package like
> http://spark-packages.org/package/Huawei-Spark/Spark-SQL-on-HBase
>
> This package does not seem to have been updated for a while, though.
>
>
>
> On Tue, Mar 22, 2016 at 11:06 AM -0700, "Kumiko Yada" <
> kumiko.y...@ds-iq.com> wrote:
>
> Hello,
>
>
> Is there a way to import HBase data into a Zeppelin notebook using Spark
> SQL?
>
>
> Thanks
> Kumiko
>
>
>


Re: Multi-User Zeppelin Deployment?

2016-03-25 Thread Chris Miller
Thank you for your detailed reply!


--
Chris Miller

On Thu, Mar 10, 2016 at 1:40 AM, moon soo Lee wrote:

> Hi Chris Miller,
>
>
>- If one user is running a job with an interpreter, can another user
>simultaneously run a job (such as in another notebook) with the same
>interpreter?
>
> Short answer is yes, but it depends.
> Long answer: it depends on which scheduler the interpreter implementation
> uses, FIFO or Parallel. Interpreters that use the Parallel scheduler (e.g.
> the Spark SQL interpreter, the shell interpreter, etc.) will be able to run
> simultaneously. Interpreters that use the FIFO scheduler (e.g. the Spark
> interpreter) will not.
>
> Recently, http://issues.apache.org/jira/browse/ZEPPELIN-513 was resolved,
> which allows interpreters that use the FIFO scheduler to run simultaneously
> by creating an interpreter instance per notebook.
>
>
>- Does Zeppelin have any kind of user authentication capabilities?
>
>
> The master branch has authentication capabilities based on Apache Shiro:
> https://issues.apache.org/jira/browse/ZEPPELIN-548
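> For example, a minimal conf/shiro.ini sketch (the user names and passwords
> below are placeholders; Shiro also supports external realms such as LDAP):
>
>     [users]
>     admin = admin-password
>     analyst = analyst-password
>
>     [urls]
>     /api/version = anon
>     /** = authc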
>
>
>
>- Can I give users either read-only or no access to particular
>notebooks but also let users create their own notebooks that only they can
>see?
>
> Recently, https://github.com/apache/incubator-zeppelin/pull/681 was merged
> to the master branch. It lets users adjust read/write/execute permissions
> for each notebook.
>
>
>
>- Can I have jobs run as the logged-in user rather than a generic
>"zeppelin" user so things like HDFS permissions will apply?
>
>
> User impersonation is on the roadmap
> (https://cwiki.apache.org/confluence/display/ZEPPELIN/Zeppelin+Roadmap),
> but work on it has not started yet.
>
> Hope this helps.
>
> Thanks,
> moon
>
>
> On Wed, Mar 9, 2016 at 1:08 AM Chris Miller wrote:
>
>> Hi,
>>
>> I want to deploy Zeppelin so that multiple users in our organization can
>> use it concurrently. I have a few questions:
>>
>>- If one user is running a job with an interpreter, can another user
>>simultaneously run a job (such as in another notebook) with the same
>>interpreter?
>>- Does Zeppelin have any kind of user authentication capabilities?
>>- Can I give users either read-only or no access to particular
>>notebooks but also let users create their own notebooks that only they can
>>see?
>>- Can I have jobs run as the logged-in user rather than a generic
>>"zeppelin" user so things like HDFS permissions will apply?
>>
>> For those of you using Zeppelin in production, any other deployment or
>> configuration tips?
>> --
>> Chris Miller
>>
>


Re: Spark + Zeppelin on EC2

2016-03-25 Thread Chris Miller
Curious about this too... I'll be moving Zeppelin off to its own box in the
near future. If you figure this out, post your resolution here.

--
Chris Miller

On Sat, Mar 26, 2016 at 12:54 AM, Marcin Pilarczyk <
marcin.pilarc...@interia.pl> wrote:

> Guys,
>
> I'm trying to switch my Zeppelin instance (0.6 snapshot) from the Spark
> instance installed on the very same machine to a Spark cluster created on
> EC2. Both versions of Spark are 1.5.2.
>
> I've just created a test instance in EC2; I can submit jobs and use the
> Spark shell. I have reviewed the logs: each and every worker is up and
> running, and the master is alive. So far so good.
>
> The next step is to point Zeppelin at the newly created Spark cluster. I'm
> changing two places: zeppelin-env.sh and the URL in the interpreter
> settings. I'm SURE these settings point at the new instance.
>
> Next step: I'm stopping the Spark instance installed together with
> Zeppelin.
>
> Final step: Zeppelin is restarted and the settings are checked. Somehow no
> paragraph that requires computation can be completed. The master logs are
> ok; in the slave log, however, I can find the following error:
>
> [stack trace snipped; the full trace appears in Marcin's original message
> below]
>
> 172.31.41.186 -> that's the address where Zeppelin is running and where the
> previous Spark WAS running. In the Zeppelin configuration there is no trace
> of this IP. Please note again: spark-shell and spark-submit on the new
> master node are executing their jobs.

Spark + Zeppelin on EC2

2016-03-25 Thread Marcin Pilarczyk
Guys,

I'm trying to switch my Zeppelin instance (0.6 snapshot) from the Spark
instance installed on the very same machine to a Spark cluster created on
EC2. Both versions of Spark are 1.5.2.

I've just created a test instance in EC2; I can submit jobs and use the
Spark shell. I have reviewed the logs: each and every worker is up and
running, and the master is alive. So far so good.

The next step is to point Zeppelin at the newly created Spark cluster. I'm
changing two places: zeppelin-env.sh and the URL in the interpreter
settings. I'm SURE these settings point at the new instance.

Next step: I'm stopping the Spark instance installed together with Zeppelin.

Final step: Zeppelin is restarted and the settings are checked. Somehow no
paragraph that requires computation can be completed. The master logs are
ok; in the slave log, however, I can find the following error:

16/03/25 12:42:25 INFO Remoting: Starting remoting
16/03/25 12:42:25 INFO Remoting: Remoting started; listening on addresses
:[akka.tcp://driverPropsFetcher@172.31.40.27:36098]
16/03/25 12:42:25 INFO util.Utils: Successfully started service
'driverPropsFetcher' on port 36098.
16/03/25 12:43:28 WARN Remoting: Tried to associate with unreachable remote
address [akka.tcp://sparkDriver@172.31.41.186:46358]. Address is now gated
for 5000 ms, all messages to this address will be delivered to dead
letters.$
Exception in thread "main" akka.actor.ActorNotFound: Actor not found for:
ActorSelection[Anchor(akka.tcp://sparkDriver@172.31.41.186:46358/),
Path(/user/CoarseGrainedScheduler)]
at
akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:65)
at
akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:63)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at
akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
at
akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
at
akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at
akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at
scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
at
akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
at
akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.unbatchedExecute(Future.scala:74)
at
akka.dispatch.BatchingExecutor$class.execute(BatchingExecutor.scala:110)
at
akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.execute(Future.scala:73)
at
scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
at
scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:267)
at akka.actor.EmptyLocalActorRef.specialHandle(ActorRef.scala:508)
at akka.actor.DeadLetterActorRef.specialHandle(ActorRef.scala:541)
at akka.actor.DeadLetterActorRef.$bang(ActorRef.scala:531)
at
akka.remote.RemoteActorRefProvider$RemoteDeadLetterActorRef.$bang(RemoteActorRefProvider.scala:87)
at akka.remote.EndpointWriter.postStop(Endpoint.scala:561)
at akka.actor.Actor$class.aroundPostStop(Actor.scala:475)
at akka.remote.EndpointActor.aroundPostStop(Endpoint.scala:415)
at
akka.actor.dungeon.FaultHandling$class.akka$actor$dungeon$FaultHandling$$finishTerminate(FaultHandling.scala:210)
at
akka.actor.dungeon.FaultHandling$class.terminate(FaultHandling.scala:172)
at akka.actor.ActorCell.terminate(ActorCell.scala:369)
at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:462)
at akka.actor.ActorCell.systemInvoke(ActorCell.scala:478)
at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:263)
at akka.dispatch.Mailbox.run(Mailbox.scala:219)
at
akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
at
scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at
scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

172.31.41.186 -> that's the address where Zeppelin is running and where the
previous Spark WAS running. In the Zeppelin configuration there is no trace
of this IP. Please note again: spark-shell and spark-submit on the new
master node are executing their jobs.

Is there yet another place I need to change? Or, more generally, is there
any problem with running Zeppelin against a remote/external Spark?

Regards,
Marcin


Re: Importing HBase data

2016-03-25 Thread Benjamin Kim
The hbase-spark module is still a work in progress in terms of Spark SQL. All
the RDD methods are complete and ready to use against the current version of
HBase 1.0+, but the use of DataFrames will require the unreleased version of
HBase 2.0. Fortunately, there is work in progress to back-port the hbase-spark
module to not have these deep-rooted dependencies on HBase 2.0 (HBASE-14160).
For more information on this, you can refer to
http://blog.cloudera.com/blog/2015/08/apache-spark-comes-to-apache-hbase-with-hbase-spark-module/
to see what they are trying to accomplish.
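As a rough sketch of the RDD-level API (method names follow the Cloudera
post above; exact signatures vary between versions, so treat this as
illustrative rather than exact):

    import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
    import org.apache.hadoop.hbase.client.Scan
    import org.apache.hadoop.hbase.spark.HBaseContext

    val hbaseConf = HBaseConfiguration.create()
    val hbaseContext = new HBaseContext(sc, hbaseConf)  // sc: the SparkContext

    // Scan an existing table into an RDD of (rowkey, Result) pairs:
    val rdd = hbaseContext.hbaseRDD(TableName.valueOf("my_table"), new Scan())
    println(rdd.count())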

> On Mar 25, 2016, at 9:17 AM, Silvio Fiorito wrote:
> 
> There’s also this, which seems more current: 
> https://github.com/apache/hbase/tree/master/hbase-spark 
> 
> 
> I haven’t used it, but I know Ted Malaska and others from Cloudera have 
> worked heavily on it.
> 
> From: Felix Cheung <felixcheun...@hotmail.com>
> Reply-To: "users@zeppelin.incubator.apache.org"
> Date: Friday, March 25, 2016 at 12:01 PM
> To: "users@zeppelin.incubator.apache.org"
> Subject: Re: Importing HBase data
> 
> You should be able to access that from Spark SQL through a package like 
> http://spark-packages.org/package/Huawei-Spark/Spark-SQL-on-HBase 
> 
> 
> This package does not seem to have been updated for a while, though.
> 
> 
> 
> On Tue, Mar 22, 2016 at 11:06 AM -0700, "Kumiko Yada" <kumiko.y...@ds-iq.com>
> wrote:
>
> Hello,
>
> Is there a way to import HBase data into a Zeppelin notebook using Spark
> SQL?
>
> Thanks
> Kumiko



Re: Importing HBase data

2016-03-25 Thread Silvio Fiorito
There’s also this, which seems more current: 
https://github.com/apache/hbase/tree/master/hbase-spark

I haven’t used it, but I know Ted Malaska and others from Cloudera have worked 
heavily on it.

From: Felix Cheung <felixcheun...@hotmail.com>
Reply-To: "users@zeppelin.incubator.apache.org" <users@zeppelin.incubator.apache.org>
Date: Friday, March 25, 2016 at 12:01 PM
To: "users@zeppelin.incubator.apache.org" <users@zeppelin.incubator.apache.org>
Subject: Re: Importing HBase data

You should be able to access that from Spark SQL through a package like 
http://spark-packages.org/package/Huawei-Spark/Spark-SQL-on-HBase

This package does not seem to have been updated for a while, though.



On Tue, Mar 22, 2016 at 11:06 AM -0700, "Kumiko Yada" <kumiko.y...@ds-iq.com>
wrote:

Hello,

Is there a way to import HBase data into a Zeppelin notebook using Spark SQL?

Thanks
Kumiko


Re: Importing HBase data

2016-03-25 Thread Felix Cheung
You should be able to access that from Spark SQL through a package like 
http://spark-packages.org/package/Huawei-Spark/Spark-SQL-on-HBase

This package does not seem to have been updated for a while, though.



On Tue, Mar 22, 2016 at 11:06 AM -0700, "Kumiko Yada" <kumiko.y...@ds-iq.com>
wrote:

Hello,

Is there a way to import HBase data into a Zeppelin notebook using Spark SQL?

Thanks
Kumiko