Please use user@ in the future.
You said:
zk session timeout is 40s
The default value is 90s. Why did you configure it with a lower value?
The "RegionServer ephemeral node deleted" message means that the znode for
olap3.data.lq,16020,1470799848293
expired.
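For reference, this timeout is controlled by zookeeper.session.timeout in hbase-site.xml (in milliseconds); a minimal fragment restoring the 90s default:

```xml
<!-- hbase-site.xml: restore the default 90s ZooKeeper session timeout -->
<property>
  <name>zookeeper.session.timeout</name>
  <value>90000</value>
</property>
```

Note that the ZooKeeper server also caps sessions at maxSessionTimeout (20 x tickTime, i.e. 40s with the default tickTime of 2000 ms), which is a common reason an HBase-side setting ends up as an effective 40s.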
Can you pastebin JVM parameters (are you using
By default, StochasticLoadBalancer balances regions evenly across the
cluster, but the regions of a particular table may not be evenly distributed.
Increase the value of the following config:
private static final String TABLE_SKEW_COST_KEY =
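A sketch of bumping it in hbase-site.xml, assuming the key behind TABLE_SKEW_COST_KEY is hbase.master.balancer.stochastic.tableSkewCost with a default multiplier of 35 (worth verifying against the StochasticLoadBalancer source for your version):

```xml
<!-- hbase-site.xml: weight per-table skew more heavily in the balancer's
     cost function (assumed default multiplier: 35) -->
<property>
  <name>hbase.master.balancer.stochastic.tableSkewCost</name>
  <value>70</value>
</property>
```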
Thanks Ted for the maxregionsize per table idea. We will try to keep it
around 1-2 Gigs and see how it goes. Will this also make sure that the
region migrates to another region server? Or do we still need to do that
manually?
On JMX: since the environment is production, we are as yet unable to use
bq. We cannot change the maxregionsize parameter
The region size can be changed on a per-table basis:
hbase> alter 't1', MAX_FILESIZE => '134217728'
See the beginning of hbase-shell/src/main/ruby/shell/commands/alter.rb for
more details.
FYI
On Sun, Aug 28, 2016 at 10:44 PM, Manish Maheshwari
If you take my code then it should work. I have tested it on HBase 1.2.1.
On Aug 29, 2016 12:21 PM, "spats" wrote:
> Thanks Sachin.
>
> So it won't work with hbase 1.2.0 even if we use your code from shc branch?
Cycling old bits:
http://search-hadoop.com/m/YGbb3E2a71UVLBK=Re+HBase+Count+Rows+in+Regions+and+Region+Servers
You can use the /jmx endpoint to inspect regions and find the hotspot.
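A minimal sketch of that inspection, assuming the HBase 1.x per-region metric naming (Namespace_<ns>_table_<table>_region_<encoded>_metric_readRequestCount) under the RegionServer sub=Regions bean. The info port (16030) and metric names are assumptions to verify against your build, and the sample payload below is hypothetical, standing in for a live fetch of http://<regionserver>:16030/jmx:

```python
import json
import re

# Assumed metric-key layout of the 1.x per-region metrics; lazy quantifiers
# split on the first _table_/_region_/_metric_ markers, so names containing
# those substrings would need more careful parsing.
METRIC_RE = re.compile(
    r"Namespace_(?P<ns>\w+?)_table_(?P<table>\w+?)_region_(?P<region>\w+?)"
    r"_metric_(?P<metric>readRequestCount|writeRequestCount)$"
)

def hottest_regions(jmx_json, top=3):
    """Return ((table, region), total_requests) pairs, busiest first."""
    totals = {}
    for bean in json.loads(jmx_json)["beans"]:
        if bean.get("name") != "Hadoop:service=HBase,name=RegionServer,sub=Regions":
            continue
        for key, value in bean.items():
            m = METRIC_RE.match(key)
            if m:
                region = (m.group("table"), m.group("region"))
                totals[region] = totals.get(region, 0) + value
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:top]

# Hypothetical sample payload standing in for a live curl of /jmx.
SAMPLE = json.dumps({"beans": [{
    "name": "Hadoop:service=HBase,name=RegionServer,sub=Regions",
    "Namespace_default_table_t1_region_abc123_metric_readRequestCount": 90000,
    "Namespace_default_table_t1_region_abc123_metric_writeRequestCount": 5000,
    "Namespace_default_table_t1_region_def456_metric_readRequestCount": 100,
    "Namespace_default_table_t1_region_def456_metric_writeRequestCount": 200,
}]})

if __name__ == "__main__":
    for region, count in hottest_regions(SAMPLE):
        print(region, count)
```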
On Mon, Aug 29, 2016 at 7:29 AM, Manish Maheshwari wrote:
Hi Dima,
Thanks for the suggestion. We can load the data in heap, but HBase makes it
easy for one process to write while another reads. With an in-heap store we
would need to build a process that handles both, and also write to a log so
as not to lose updates in case of process failure.
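The bookkeeping described here (logging every update so in-heap state survives a crash) is roughly what HBase's WAL plus memstore already provide; a minimal hand-rolled sketch, with hypothetical names, shows the extra moving parts you would own:

```python
import os

# Hypothetical sketch of do-it-yourself durability for an in-heap store:
# every update is appended (and fsync'd) to a log before touching the
# in-memory dict, and on restart the log is replayed to rebuild state.
# HBase's WAL handles this, plus log rolling, sync policy, and concurrent
# readers, for you.
class LoggedDict:
    def __init__(self, log_path):
        self.log_path = log_path
        self.data = {}
        if os.path.exists(log_path):          # crash recovery: replay the log
            with open(log_path) as f:
                for line in f:
                    key, _, value = line.rstrip("\n").partition("\t")
                    self.data[key] = value

    def put(self, key, value):
        with open(self.log_path, "a") as f:   # log first, then mutate memory
            f.write(f"{key}\t{value}\n")
            f.flush()
            os.fsync(f.fileno())              # pay the durability cost up front
        self.data[key] = value

    def get(self, key):
        return self.data.get(key)
```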
Thanks
Manish
Thanks Sachin.
So it won't work with hbase 1.2.0 even if we use your code from shc branch?
--
View this message in context:
http://apache-hbase.679495.n3.nabble.com/Issues-with-Spark-On-Hbase-Connector-tp4082151p4082162.html
Sent from the HBase User mailing list archive at Nabble.com.
(Though if it is only 7 GB, why not just store it in memory?)
On Sunday, August 28, 2016, Dima Spivak wrote:
If your data can all fit on one machine, HBase is not the best choice. I
think you'd be better off using a simpler solution for small data and leave
HBase for use cases that require proper clusters.
On Sunday, August 28, 2016, Manish Maheshwari wrote:
> We dont want to
Hi Sudhir,
There is a connection leak problem with the Hortonworks HBase connector if
you use HBase 1.2.0.
I tried to use Hortonworks' connector and ran into the same problem.
Have a look at the HBase issue HBASE-16017 [0]. The fix for this was
backported to 1.3.0, 1.4.0 and 2.0.0.
I have raised a