Re: FileNotFoundException, while file is actually available

2017-02-06 Thread censj
If you deploy in YARN mode, you can run yarn logs -applicationId <yourApplicationId>
to fetch the aggregated YARN logs, which contain the details. Then look through
them for the error info.
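
For example, a hypothetical session (the application ID below is made up; take
the real one from the YARN ResourceManager UI or from yarn application -list):

    # find the application ID of the running/finished Spark job
    yarn application -list -appStates ALL

    # dump all container logs for that application into one file
    yarn logs -applicationId application_1486300000000_0042 > app.log

One common cause of the FileNotFoundException below is an input path with the
file: scheme: with local paths, every worker node must hold its own copy of the
file, otherwise executors scheduled on other nodes fail exactly like this.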
===
Name: cen sujun
Mobile: 13067874572
Mail: ce...@lotuseed.com

> On Feb 6, 2017, at 05:33, Evgenii Morozov wrote:
> 
> Hi, 
> 
> I see a lot of exceptions like the following during our machine learning 
> pipeline calculation. Spark version 2.0.2.
> Sometimes only a few executors fail with this message, yet the job still 
> succeeds. 
> 
> I’d appreciate any hint you might have.
> Thank you.
> 
> 2017-02-05 07:56:47.022 [task-result-getter-1] WARN  
> o.a.spark.scheduler.TaskSetManager - Lost task 0.0 in stage 151558.0 (TID 
> 993070, 10.61.12.43):
> java.io.FileNotFoundException: File file:/path/to/file does not exist
>at 
> org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:611)
>at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:824)
>at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:601)
>at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:421)
>at 
> org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:142)
>at 
> org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:346)
>at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:769)
>at 
> org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:109)
>at 
> org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
>at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:245)
>at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:208)
>at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
>at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
>at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
>at org.apache.spark.scheduler.Task.run(Task.scala:86)
>at 
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
>at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>at java.lang.Thread.run(Thread.java:745)
> 
> 



run spark-shell on YARN

2016-07-28 Thread censj
16/07/28 17:07:34 WARN shortcircuit.DomainSocketFactory: The short-circuit 
local reads feature cannot be used because libhadoop cannot be loaded.
java.lang.NoClassDefFoundError: com/sun/jersey/api/client/config/ClientConfig
  at 
org.apache.hadoop.yarn.client.api.TimelineClient.createTimelineClient(TimelineClient.java:45)
  at 
org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceInit(YarnClientImpl.java:163)
  at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
  at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:150)
  at 
org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:56)
  at 
org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:149)
  at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
  at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2256)
  at 
org.apache.spark.sql.SparkSession$Builder$$anonfun$8.apply(SparkSession.scala:831)
  at 
org.apache.spark.sql.SparkSession$Builder$$anonfun$8.apply(SparkSession.scala:823)
  at scala.Option.getOrElse(Option.scala:121)
  at 
org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:823)
  at org.apache.spark.repl.Main$.createSparkSession(Main.scala:95)
  ... 47 elided
Caused by: java.lang.ClassNotFoundException: 
com.sun.jersey.api.client.config.ClientConfig
  at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
  at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
  ... 60 more
<console>:14: error: not found: value spark
   import spark.implicits._
  ^
<console>:14: error: not found: value spark
   import spark.sql
  ^
Welcome to
[spark-shell ASCII version banner elided]

hi:
I use Spark 2.0, but when I run 
"/etc/spark-2.0.0-bin-hadoop2.6/bin/spark-shell --master yarn", this error 
appears.

/etc/spark-2.0.0-bin-hadoop2.6/bin/spark-submit
export YARN_CONF_DIR=/etc/hadoop/conf
export HADOOP_CONF_DIR=/etc/hadoop/conf
export SPARK_HOME=/etc/spark-2.0.0-bin-hadoop2.6


How do I fix this?
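
For reference, a minimal sketch of the setup, assuming the exports above live 
in conf/spark-env.sh. The extra --conf line is a commonly reported workaround 
for exactly this NoClassDefFoundError (Spark 2.0 ships Jersey 2, while the YARN 
timeline client needs Jersey 1, so disabling the timeline client avoids loading 
it); treat it as an assumption to verify on your cluster, not a confirmed fix:

    # /etc/spark-2.0.0-bin-hadoop2.6/conf/spark-env.sh
    export YARN_CONF_DIR=/etc/hadoop/conf
    export HADOOP_CONF_DIR=/etc/hadoop/conf
    export SPARK_HOME=/etc/spark-2.0.0-bin-hadoop2.6

    # launch the shell with the YARN timeline client disabled
    /etc/spark-2.0.0-bin-hadoop2.6/bin/spark-shell --master yarn \
      --conf spark.hadoop.yarn.timeline-service.enabled=false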





===
Name: cen sujun
Mobile: 13067874572
Mail: ce...@lotuseed.com



Re: Newbie question

2016-01-07 Thread censj
You can try it.  
> On Jan 8, 2016, at 14:44, yuliya Feldman wrote:
> 
> invoked



About Huawei-Spark/Spark-SQL-on-HBase

2015-12-19 Thread censj
I use Huawei-Spark/Spark-SQL-on-HBase, but running ./bin/hbase-sql throws the 
following.
15/12/19 16:59:34 INFO storage.BlockManagerMaster: Registered BlockManager
Exception in thread "main" java.lang.NoSuchMethodError: 
jline.Terminal.getTerminal()Ljline/Terminal;
at jline.ConsoleReader.<init>(ConsoleReader.java:191)
at jline.ConsoleReader.<init>(ConsoleReader.java:186)
at jline.ConsoleReader.<init>(ConsoleReader.java:174)
at 
org.apache.spark.sql.hbase.HBaseSQLCliDriver$.main(HBaseSQLCliDriver.scala:55)
at 
org.apache.spark.sql.hbase.HBaseSQLCliDriver.main(HBaseSQLCliDriver.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:672)
at 
org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
15/12/19 16:59:35 INFO spark.SparkContext: Invoking stop() from shutdown hook



Re: About Huawei-Spark/Spark-SQL-on-HBase

2015-12-19 Thread censj
OK! But I think it is a jline version error: I found jline 0.9.94 in the 
pom.xml. 
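
That NoSuchMethodError usually means two jline versions collide on the 
classpath: jline.Terminal.getTerminal() exists in the 0.9.x API that this 
ConsoleReader was compiled against, but a jline 2.x jar (for example the one 
the Scala REPL pulls in) gets loaded first. A hedged Maven sketch of the usual 
remedy; the coordinates of the dependency dragging in the conflicting jline 
are placeholders, not taken from the actual pom:

    <!-- placeholder: exclude the conflicting jline from whichever dependency brings it in -->
    <dependency>
      <groupId>org.example</groupId>            <!-- hypothetical offender -->
      <artifactId>example-artifact</artifactId> <!-- hypothetical offender -->
      <exclusions>
        <exclusion>
          <groupId>jline</groupId>
          <artifactId>jline</artifactId>
        </exclusion>
      </exclusions>
    </dependency>

    <!-- then pin the jline the HBase SQL CLI was compiled against -->
    <dependency>
      <groupId>jline</groupId>
      <artifactId>jline</artifactId>
      <version>0.9.94</version>
    </dependency>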

> On Dec 19, 2015, at 17:29, Ravindra Pesala <ravispark.pes...@gmail.com> wrote:
> 
> Hi censj,
> 
> Please try the new repo at https://github.com/HuaweiBigData/astro ; we are 
> not maintaining the old repo. Please let me know if you still get the error. 
> You can also reach me at my personal mail, ravi.pes...@gmail.com
> Thanks,
> Ravindra.
> 
> On Sat, 19 Dec 2015 at 2:45 pm, censj <ce...@lotuseed.com> wrote:
> I use Huawei-Spark/Spark-SQL-on-HBase, but running ./bin/hbase-sql throws 
> the following.
> 15/12/19 16:59:34 INFO storage.BlockManagerMaster: Registered BlockManager
> Exception in thread "main" java.lang.NoSuchMethodError: 
> jline.Terminal.getTerminal()Ljline/Terminal;
> at jline.ConsoleReader.<init>(ConsoleReader.java:191)
> at jline.ConsoleReader.<init>(ConsoleReader.java:186)
> at jline.ConsoleReader.<init>(ConsoleReader.java:174)
> at 
> org.apache.spark.sql.hbase.HBaseSQLCliDriver$.main(HBaseSQLCliDriver.scala:55)
> at 
> org.apache.spark.sql.hbase.HBaseSQLCliDriver.main(HBaseSQLCliDriver.scala)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at 
> org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:672)
> at 
> org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
> at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> 15/12/19 16:59:35 INFO spark.SparkContext: Invoking stop() from shutdown hook



HBase ERROR

2015-12-17 Thread censj
hi, all:
I write data to HBase, but HBase raises this ERROR. Could you help me?
> 
> r.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired 
> for /hbase-unsecure/rs/byd0157,16020,1449106975377
> 2015-12-17 21:24:29,854 WARN  [regionserver/byd0157/192.168.0.157:16020] 
> zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, 
> quorum=byd0151:2181,byd0150:2181,byd0152:2181, 
> exception=org.apache.zookeeper.KeeperException$SessionExpiredException: 
> KeeperErrorCode = Session expired for 
> /hbase-unsecure/rs/byd0157,16020,1449106975377
> 2015-12-17 21:24:29,854 ERROR [regionserver/byd0157/192.168.0.157:16020] 
> zookeeper.RecoverableZooKeeper: ZooKeeper delete failed after 4 attempts
> 2015-12-17 21:24:29,854 WARN  [regionserver/byd0157/192.168.0.157:16020] 
> regionserver.HRegionServer: Failed deleting my ephemeral node
> org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode 
> = Session expired for /hbase-unsecure/rs/byd0157,16020,1449106975377
>at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
>at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>at org.apache.zookeeper.ZooKeeper.delete(ZooKeeper.java:873)
>at 
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.delete(RecoverableZooKeeper.java:179)
>at 
> org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNode(ZKUtil.java:1345)
>at 
> org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNode(ZKUtil.java:1334)
>at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.deleteMyEphemeralNode(HRegionServer.java:1393)
>at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1076)
>at java.lang.Thread.run(Thread.java:745)
> 2015-12-17 21:24:29,855 INFO  [regionserver/byd0157/192.168.0.157:16020] 
> regionserver.HRegionServer: stopping server byd0157,16020,1449106975377; 
> zookeeper connection closed.
> 2015-12-17 21:24:29,855 INFO  [regionserver/byd0157/192.168.0.157:16020] 
> regionserver.HRegionServer: regionserver/byd0157/192.168.0.157:16020 exiting
> 2015-12-17 21:24:29,858 ERROR [main] regionserver.HRegionServerCommandLine: 
> Region server exiting
> java.lang.RuntimeException: HRegionServer Aborted
>at 
> org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:68)
>at 
> org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:87)
>at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>at 
> org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
>at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2641)
> 2015-12-17 21:24:29,940 INFO  [Thread-6] regionserver.ShutdownHook: Shutdown 
> hook starting; hbase.shutdown.hook=true; 
> fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@6de54b40
> 2015-12-17 21:24:29,942 INFO  [Thread-6] regionserver.ShutdownHook: Starting 
> fs shutdown hook thread.
> 2015-12-17 21:24:29,953 INFO  [Thread-6] regionserver.ShutdownHook: Shutdown 
> hook finished.
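
For context: a SessionExpiredException on a region server means it failed to 
heartbeat ZooKeeper within the session timeout, most often because of a long 
JVM garbage-collection pause under heavy write load; the server then aborts by 
design. A hedged starting point, assuming GC pauses are the culprit, is to 
raise the session timeout in hbase-site.xml (ZooKeeper's own maxSessionTimeout 
must allow the value); the number below is illustrative, not tuned for this 
cluster:

    <property>
      <name>zookeeper.session.timeout</name>
      <value>120000</value> <!-- milliseconds; larger than the default -->
    </property>

Heap and GC tuning on the region server is usually the real fix; a larger 
timeout only buys headroom.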



about spark on hbase

2015-12-15 Thread censj
hi, all:
How can I, from a Spark function, get a value from HBase, update it, and put 
the updated value back to HBase?
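
One hedged way to do this, sketched against the HBase 1.x client API; the 
table, column family, and qualifier names are made-up placeholders, and keys 
is assumed to be an RDD[String] of row keys:

    import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
    import org.apache.hadoop.hbase.client.{ConnectionFactory, Get, Put}
    import org.apache.hadoop.hbase.util.Bytes

    keys.foreachPartition { part =>
      // one connection per partition: HBase connections are not serializable,
      // so create them on the executor instead of shipping them from the driver
      val conn = ConnectionFactory.createConnection(HBaseConfiguration.create())
      val table = conn.getTable(TableName.valueOf("my_table"))
      try {
        part.foreach { key =>
          // read the current cell value
          val result = table.get(new Get(Bytes.toBytes(key)))
          val old = Bytes.toLong(result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("counter")))
          // write the updated value back
          val put = new Put(Bytes.toBytes(key))
          put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("counter"), Bytes.toBytes(old + 1L))
          table.put(put)
        }
      } finally {
        table.close()
        conn.close()
      }
    }

Note the Get-then-Put pair is not atomic; for numeric cells, Table.increment 
(or checkAndPut) avoids the race between read and write.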

Re: About Spark On Hbase

2015-12-15 Thread censj

hi, fight fate
In the bulkPut() function, can I use a Get to read the value first, and then 
put the updated value to HBase?
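
For reference: bulkPut in the cloudera-labs SparkOnHBase API only writes; it 
maps each RDD element to a Put, so there is no place for a Get inside it. A 
read-before-write would go through bulkGet first, or through a plain 
foreachPartition. A sketch of bulkPut from memory of that API (class and 
method names may differ slightly between versions, and the table/column names 
are placeholders):

    import org.apache.hadoop.hbase.HBaseConfiguration
    import org.apache.hadoop.hbase.client.Put
    import org.apache.hadoop.hbase.util.Bytes
    import com.cloudera.spark.hbase.HBaseContext

    val hbaseContext = new HBaseContext(sc, HBaseConfiguration.create())

    // rows: RDD[(Array[Byte], Array[Byte])] of (rowKey, newValue), an assumed shape
    hbaseContext.bulkPut[(Array[Byte], Array[Byte])](
      rows,
      "my_table",
      { case (rowKey, value) =>
        val put = new Put(rowKey)
        put.add(Bytes.toBytes("cf"), Bytes.toBytes("col"), value) // 0.98-era Put API
        put
      },
      true // autoFlush
    )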


> On Dec 9, 2015, at 16:02, censj <ce...@lotuseed.com> wrote:
> 
> Thank you! Got it.
>> On Dec 9, 2015, at 15:59, fightf...@163.com wrote:
>> 
>> If you are using Maven, you can add the Cloudera Maven repo to the 
>> repositories in pom.xml and add the dependency on spark-hbase. 
>> I just found this: 
>> http://spark-packages.org/package/nerdammer/spark-hbase-connector 
>> As Feng Dongyu recommends, you can try this also, but I have no experience 
>> of using it. 
>> 
>> 
>> fightf...@163.com
>>  
>> From: censj
>> Sent: 2015-12-09 15:44
>> To: fightf...@163.com
>> Cc: user@spark.apache.org
>> Subject: Re: About Spark On Hbase
>> So how do I get this jar? I package my project with sbt and could not find 
>> the library.
>>> On Dec 9, 2015, at 15:42, fightf...@163.com wrote:
>>> 
>>> I don't think it really needs the CDH components. Just use the API. 
>>> 
>>> fightf...@163.com
>>>  
>>> From: censj
>>> Sent: 2015-12-09 15:31
>>> To: fightf...@163.com
>>> Cc: user@spark.apache.org
>>> Subject: Re: About Spark On Hbase
>>> But this depends on CDH. I have not installed CDH.
>>>> On Dec 9, 2015, at 15:18, fightf...@163.com wrote:
>>>> 
>>>> Actually you can refer to https://github.com/cloudera-labs/SparkOnHBase 
>>>> Also, HBASE-13992 (https://issues.apache.org/jira/browse/HBASE-13992) 
>>>> already integrates that feature into the HBase side, but 
>>>> that feature has not been released yet. 
>>>> 
>>>> Best,
>>>> Sun.
>>>> 
>>>> fightf...@163.com
>>>>  
>>>> From: censj
>>>> Date: 2015-12-09 15:04
>>>> To: user@spark.apache.org
>>>> Subject: About Spark On Hbase
>>>> hi all,
>>>>  I am using Spark now, but I have not found an open-source project for 
>>>> operating on HBase from Spark. Can anyone point me to one? 
> 



Re: About Spark On Hbase

2015-12-09 Thread censj
Thank you! Got it.
> On Dec 9, 2015, at 15:59, fightf...@163.com wrote:
> 
> If you are using Maven, you can add the Cloudera Maven repo to the 
> repositories in pom.xml and add the dependency on spark-hbase. 
> I just found this: 
> http://spark-packages.org/package/nerdammer/spark-hbase-connector 
> As Feng Dongyu recommends, you can try this also, but I have no experience 
> of using it. 
> 
> 
> fightf...@163.com
>  
> From: censj
> Sent: 2015-12-09 15:44
> To: fightf...@163.com
> Cc: user@spark.apache.org
> Subject: Re: About Spark On Hbase
> So how do I get this jar? I package my project with sbt and could not find 
> the library.
>> On Dec 9, 2015, at 15:42, fightf...@163.com wrote:
>> 
>> I don't think it really needs the CDH components. Just use the API. 
>> 
>> fightf...@163.com
>>  
>> From: censj
>> Sent: 2015-12-09 15:31
>> To: fightf...@163.com
>> Cc: user@spark.apache.org
>> Subject: Re: About Spark On Hbase
>> But this depends on CDH. I have not installed CDH.
>>> On Dec 9, 2015, at 15:18, fightf...@163.com wrote:
>>> 
>>> Actually you can refer to https://github.com/cloudera-labs/SparkOnHBase 
>>> Also, HBASE-13992 (https://issues.apache.org/jira/browse/HBASE-13992) 
>>> already integrates that feature into the HBase side, but 
>>> that feature has not been released yet. 
>>> 
>>> Best,
>>> Sun.
>>> 
>>> fightf...@163.com
>>>  
>>> From: censj
>>> Date: 2015-12-09 15:04
>>> To: user@spark.apache.org
>>> Subject: About Spark On Hbase
>>> hi all,
>>>  I am using Spark now, but I have not found an open-source project for 
>>> operating on HBase from Spark. Can anyone point me to one? 



Re: About Spark On Hbase

2015-12-08 Thread censj
Can you give me an example?
I want to update HBase data.
> On Dec 9, 2015, at 15:19, Fengdong Yu <fengdo...@everstring.com> wrote:
> 
> https://github.com/nerdammer/spark-hbase-connector 
> 
> This one is better and easier to use. (A usage sketch follows after the 
> quoted message below.)
> 
> 
> 
> 
> 
>> On Dec 9, 2015, at 3:04 PM, censj <ce...@lotuseed.com> wrote:
>> 
>> hi all,
>>  I am using Spark now, but I have not found an open-source project for 
>> operating on HBase from Spark. Can anyone point me to one? 
>>  
> 
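
For a concrete feel of the connector recommended above, a hedged read/write 
sketch based on its README; table and column names are placeholders, and a 
SparkContext sc plus HBase connection settings (spark.hbase.host) are assumed:

    import it.nerdammer.spark.hbase._

    // write an RDD of (rowKey, value) pairs
    val rdd = sc.parallelize(Seq(("row1", "v1"), ("row2", "v2")))
    rdd.toHBaseTable("my_table")
       .toColumns("col")
       .inColumnFamily("cf")
       .save()

    // read it back as an RDD of (rowKey, value)
    val read = sc.hbaseTable[(String, String)]("my_table")
                 .select("col")
                 .inColumnFamily("cf")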



Re: About Spark On Hbase

2015-12-08 Thread censj
So how do I get this jar? I package my project with sbt and could not find the 
library.
> On Dec 9, 2015, at 15:42, fightf...@163.com wrote:
> 
> I don't think it really needs the CDH components. Just use the API. 
> 
> fightf...@163.com
>  
> From: censj
> Sent: 2015-12-09 15:31
> To: fightf...@163.com
> Cc: user@spark.apache.org
> Subject: Re: About Spark On Hbase
> But this depends on CDH. I have not installed CDH.
>> On Dec 9, 2015, at 15:18, fightf...@163.com wrote:
>> 
>> Actually you can refer to https://github.com/cloudera-labs/SparkOnHBase 
>> Also, HBASE-13992 (https://issues.apache.org/jira/browse/HBASE-13992) 
>> already integrates that feature into the HBase side, but 
>> that feature has not been released yet. 
>> 
>> Best,
>> Sun.
>> 
>> fightf...@163.com
>>  
>> From: censj
>> Date: 2015-12-09 15:04
>> To: user@spark.apache.org
>> Subject: About Spark On Hbase
>> hi all,
>>  I am using Spark now, but I have not found an open-source project for 
>> operating on HBase from Spark. Can anyone point me to one? 



Re: About Spark On Hbase

2015-12-08 Thread censj
But this depends on CDH. I have not installed CDH.
> On Dec 9, 2015, at 15:18, fightf...@163.com wrote:
> 
> Actually you can refer to https://github.com/cloudera-labs/SparkOnHBase 
> Also, HBASE-13992 (https://issues.apache.org/jira/browse/HBASE-13992) 
> already integrates that feature into the HBase side, but 
> that feature has not been released yet. 
> 
> Best,
> Sun.
> 
> fightf...@163.com
>  
> From: censj
> Date: 2015-12-09 15:04
> To: user@spark.apache.org
> Subject: About Spark On Hbase
> hi all,
>  I am using Spark now, but I have not found an open-source project for 
> operating on HBase from Spark. Can anyone point me to one? 



About Spark On Hbase

2015-12-08 Thread censj
hi all,
 I am using Spark now, but I have not found an open-source project for 
operating on HBase from Spark. Can anyone point me to one? 
 

Re: how to create an HBase connection?

2015-12-07 Thread censj
OK! I'll try it. 
> On Dec 7, 2015, at 20:11, ayan guha <guha.a...@gmail.com> wrote:
> 
> Kindly take a look at https://github.com/nerdammer/spark-hbase-connector 
> 
> On Mon, Dec 7, 2015 at 10:56 PM, censj <ce...@lotuseed.com> wrote:
> hi all,
>   I want to update rows in HBase. How do I create an HBase connection inside 
> an RDD operation?
> 
> 
> 
> 
> -- 
> Best Regards,
> Ayan Guha



how to create an HBase connection?

2015-12-07 Thread censj
hi all,
  I want to update rows in HBase. How do I create an HBase connection inside an 
RDD operation?
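
The usual pattern: do not create the connection on the driver and ship it into 
the RDD closure (HBase connections are not serializable); create it on the 
executors, either per partition inside foreachPartition, or once per executor 
JVM via a singleton object. A hedged sketch of the singleton variant, against 
the HBase 1.x client API (the object name is made up):

    import org.apache.hadoop.hbase.HBaseConfiguration
    import org.apache.hadoop.hbase.client.{Connection, ConnectionFactory}

    // a lazy val in a Scala object is initialized at most once per executor
    // JVM, so every task on that executor reuses the same connection
    object HBaseConn {
      lazy val conn: Connection =
        ConnectionFactory.createConnection(HBaseConfiguration.create())
    }

Inside an RDD operation you would then call HBaseConn.conn.getTable(...) and 
do the Get/Put there.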
