Hi Debu
We don't have a PySpark HBase connector for now; hbase-spark in
https://github.com/apache/hbase/tree/master/hbase-spark only supports Java
and Scala.
On Sat, Sep 30, 2017 at 8:38 PM, wrote:
> Hi,
>Is there any pyspark HBase connector available for querying and
> writing d
Congratulations, Chia-Ping!
On Fri, Sep 29, 2017 at 3:27 PM, Wei-Chiu Chuang
wrote:
> My sincere congratulations!
>
> On Fri, Sep 29, 2017 at 3:22 PM, Ted Yu wrote:
>
> > Congratulations, Chia-Ping.
> >
> > On Fri, Sep 29, 2017 at 3:19 PM, Misty Stanley-Jones
> > wrote:
> >
> > > The HBase PMC
From your setup in the tutorial [1], I saw it sets HBASE_MANAGES_ZK=true and
zookeeper.clientport=, but in your MapReduce code you set it to 2181;
maybe that is why it got stuck.
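For reference, a consistent setup might look like the sketch below (a hypothetical hbase-site.xml fragment; "zk-host" is a placeholder, and the key point is that the client-port property must match the port the MapReduce client code connects to):

```xml
<!-- hbase-site.xml: minimal sketch, values are placeholders -->
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zk-host</value>
  </property>
  <property>
    <!-- Must match the port the client/MapReduce code uses (2181 here) -->
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>
```

With HBASE_MANAGES_ZK=true, HBase starts ZooKeeper itself on this port, so a client hard-coded to a different port would hang waiting on ZK.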
On Fri, Apr 21, 2017 at 9:12 AM, Ted Yu wrote:
> Evelina:
> Was hbase-site.xml on the classpath for your
Jing,
I think you could follow this kind of log line into the RS log to find why it
could not open the root region.
2012-12-07 10:48:48,708 INFO
org.apache.hadoop.hbase.master.AssignmentManager: Assigning region
-ROOT-,,0.70236052 to dn004,60020,1353922884530
Regards,
Yi
On Fri, Dec 7, 2012 at 11:49 AM,
your log, it's getting socket timeouts from the
> Datanode side. Were you maxing out your disks? What was going on there?
>
> Hope this helps,
>
> J-D
>
> On Tue, Feb 28, 2012 at 10:04 PM, Yi Liang wrote:
> > We're running hbase 0.90.3 with hadoop cdh3u2. Today, we ran
Excuse me for my poor English...
I meant that neither the M/R jobs nor the thrift servers execute
HBaseAdmin.tableExists...
2011/12/29 Yi Liang
> Sorry, I forgot there's another kind of client process, the Java MapReduce
> jobs to write data. I don't restart them either.
put
operations. The M/R jobs are used to put and get data, and the thrift servers
are used to get rows of data. All tables were created once and never
altered or deleted afterwards.
2011/12/29 Yi Liang
> Lars, Ram:
>
> I don't restart client processes (in my case, they're thrift servers),
p.apache.org
> Subject: Re: Read speed down after long running
>
> When you restart HBase are you also restarting the client process?
> Are you using HBaseAdmin.tableExists?
> If so you might be running into HBASE-5073
>
> -- Lars
>
> Yi Liang schrieb:
>
> >Hi all,
>
Hi all,
We're running hbase 0.90.3 for one read-intensive application.
We find that after a long run (2 weeks, 1 month, or longer), the read speed
becomes much lower.
For example, a get_rows operation over thrift to fetch 20 rows (about 4k
per row) can take >2 seconds, sometimes even >5 s
Ferro
>
> On Nov 24, 2011, at 08:38 , Yi Liang wrote:
>
> > We're using hbase-0.90.3 with thrift client, and have encountered some
> > problems when we want to delete one specific version of a cell.
> >
> > First, there's no corresponding thrift api fo
We're using hbase-0.90.3 with the thrift client, and have encountered some
problems when we want to delete one specific version of a cell.
First, there's no corresponding thrift API for Delete#deleteColumn(byte []
family, byte [] qualifier, long timestamp). Instead, deleteColumns is
supported in mutat
From the javadoc of HTable:
"This class is not thread safe for updates; the underlying write buffer
can be corrupted if multiple threads contend over a single HTable instance."
Does that mean HTable is thread safe if we only use it to get rows?
Thanks,
Yi
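Whatever the answer for reads, the pattern the javadoc implies for writes is one table instance per thread. Here is a minimal language-neutral sketch of that pattern (plain Python with a hypothetical FakeTable stand-in whose unsynchronized write buffer plays the role of HTable's client-side buffer; `threading.local` plays the role a ThreadLocal<HTable> would in Java):

```python
import threading

class FakeTable:
    """Hypothetical stand-in for an HTable-like client with an
    unsynchronized client-side write buffer."""
    def __init__(self):
        self.write_buffer = []

    def put(self, row):
        self.write_buffer.append(row)

# One instance per thread, so no two threads ever share a write buffer.
_tls = threading.local()

def thread_local_table():
    # Each thread lazily creates, then reuses, its own instance.
    if not hasattr(_tls, "table"):
        _tls.table = FakeTable()
    return _tls.table

def writer(n, results):
    table = thread_local_table()
    for i in range(n):
        table.put(i)
    # Each thread sees exactly its own puts -- no buffer contention.
    results.append(len(table.write_buffer))

results = []
threads = [threading.Thread(target=writer, args=(100, results)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # prints [100, 100, 100, 100]
```

Had all four writers shared one FakeTable, the buffer lengths would interleave unpredictably, which is the corruption scenario the javadoc warns about.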
For people who don't want to restart the whole cluster: I solved the
problem by restarting the master alone and manually cleaning up the table's ZK
state.
Thanks Jia for the suggestion of restarting master alone.
Thanks,
Yi
On Fri, Aug 5, 2011 at 5:35 PM, Yi Liang wrote:
> Looks l
le cluster for this region?
Thanks,
Yi
On Thu, Aug 4, 2011 at 10:07 AM, Yi Liang wrote:
> HI J-D,
>
> I have tried to force unassign it with the shell command "unassign
> 'HistoryNoticeInc,,1311223940614.aaa8d345f5b7b6a69b786fe6d14ed9fa.', true",
> but it didn't he
f I have made any mistake.
Thanks,
Yi
On Tue, Aug 2, 2011 at 4:38 AM, Jean-Daniel Cryans wrote:
> You need to force unassign it using the shell.
>
> J-D
>
> On Mon, Aug 1, 2011 at 12:33 AM, Yi Liang wrote:
> > We're running hbase 0.90.3. For some unknown reason, w
We're running hbase 0.90.3. For some unknown reason, we now can't disable
one table because its first region can't be unassigned.
The log message looks like the following, and it repeats endlessly:
2011-07-25 13:27:23,745 INFO
org.apache.hadoop.hbase.master.AssignmentManager: Regions in transition
tim
n the filesystem move the contents from that region's
> folder to the other one and finally delete that folder.
>
> J-D
>
> On Fri, Jul 22, 2011 at 2:25 AM, Yi Liang wrote:
> > Hi all,
> >
> > Is there a way to delete one region from table?
> >
> >
Hi all,
Is there a way to delete one region from a table?
We now have two regions with the same startkey in our table; one of them is
wrong (it is empty). How can I delete it safely?
Thanks,
Yi
Thank you, Stack!
On Wed, Feb 23, 2011 at 6:25 AM, Stack wrote:
> On Mon, Feb 21, 2011 at 10:04 PM, Yi Liang wrote:
> > Yes, the server zcl crashed at that time.
> >
> > But after I restarted it later, it's still in the dead server list.
> >
>
> We faile
ty issue:
>
> java.net.NoRouteToHostException: No route to host
>
> On Sun, Feb 20, 2011 at 10:09 PM, Yi Liang wrote:
>
> > The related log is at: http://pastebin.com/0a1CjDUD
> >
> > It's ok now after restarting hbase, but I'm still curious why it happened.
> >
>
ms it's not happening? Unfortunately without the log nobody
> can tell why. If you can post the complete log in pastebin or put it
> on a web server then we could take a look.
>
> J-D
>
> On Fri, Feb 18, 2011 at 12:39 AM, Yi Liang wrote:
> > Hi all,
> >
> > We
Hi all,
We have an hbase cluster with 10 region servers running HBase 0.90.0 + CDH3.
We're now importing big data into HBase.
During the process, 2 servers crashed, but after restarting them, they're no
longer assigned any regions, while regions on other servers keep
splitting when more data in