I executed some commands in HBase's shell and got the following results:
hbase(main):014:0> list
TABLE
testtable
1 row(s) in 0.0180 seconds ### what does 1 row(s) mean?
hbase(main):015:0> count 'testtable'
Current count: 1000, row: row-999
1000 row(s) in 0.3300 seconds ### indeed
Hi
The "(x) row(s)" value is the number of items retrieved for display in the
output, and "seconds" is the time taken to produce it.
In the case of puts, since there are no rows to retrieve and display, the
row count stays 0, but the seconds value still shows how long the operation took.
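For example, a put in the shell (a hypothetical session; the column family, row key, and timing shown here are illustrative) touches no result rows, so the count stays 0 while the time is still reported:

```
hbase(main):016:0> put 'testtable', 'row-1000', 'colfam1:qual1', 'value-1'
0 row(s) in 0.0150 seconds
```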
Ok, I see, thanks ramkrishna.
2014-02-11 17:25 GMT+08:00 ramkrishna vasudevan
ramkrishna.s.vasude...@gmail.com:
Hi,
I am an HBase newbie; maybe there is a simpler solution, but this will work. I
tried estimating the size using HDFS, but it is not the best solution (see link [1]).
You don't need to work with TableSplits; look at the class
org.apache.hadoop.hbase.util.RegionSizeCalculator.
It can do what you need. Create
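A minimal sketch of that approach, untested here, assuming the 0.98-era client API in which RegionSizeCalculator's constructor takes an HTable (the table name is from the thread; it would need a live cluster to run):

```java
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.RegionSizeCalculator;

public class RegionSizes {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "testtable");
        try {
            RegionSizeCalculator calc = new RegionSizeCalculator(table);
            // Sum the per-region sizes (in bytes) to estimate the table size.
            long total = 0;
            for (Map.Entry<byte[], Long> e : calc.getRegionSizeMap().entrySet()) {
                System.out.println(Bytes.toStringBinary(e.getKey()) + " => " + e.getValue());
                total += e.getValue();
            }
            System.out.println("approx. table size in bytes: " + total);
        } finally {
            table.close();
        }
    }
}
```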
Hi all,
Is there any API to read server-side properties which are set in
hbase-site.xml? I did not find any information on the net.
Thanks and regards,
Vinay Kashyap
Hi,
You can curl the master/RegionServer web UI, at host:port/conf, to get the
current configs that are being used by the HBase daemons. You get the output
in XML format.
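For example (hostname is a placeholder, and the command needs a running daemon; 60010 was the default master info port in this era, 60030 for a RegionServer):

```shell
# Dump the live configuration of the master as XML
curl http://master-host:60010/conf
```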
- Bharath
On Tue, Feb 11, 2014 at 3:38 PM, Vinay Kashyap vinay_kash...@ymail.comwrote:
Moving to user@, dev@ in bcc
Hi,
The calculation is pretty simple.
Let's say you want to have at least 3 nodes in 3 VMs, plus your local OS.
That's 4 computers on one piece of hardware.
You want to have AT LEAST 2 cores and 4GB per host, so you need a minimum
of 16GB and 8 cores to host all of that in a
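The arithmetic above can be sketched as a quick sanity check (the per-host figures are the minimums stated in the message):

```java
public class SizingMath {
    public static void main(String[] args) {
        int hosts = 3 + 1;      // 3 VM nodes plus the local OS
        int coresPerHost = 2;   // minimum cores per host
        int gbPerHost = 4;      // minimum RAM (GB) per host

        // 4 hosts * 2 cores = 8 cores; 4 hosts * 4 GB = 16 GB
        System.out.println("min cores: " + hosts * coresPerHost);
        System.out.println("min RAM (GB): " + hosts * gbPerHost);
    }
}
```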
Hi Bharath,
Thanks for the info. But is there any Java API exposed to get the
same information?
Thanks and regards,
Vinay Kashyap
How about
BaseConfiguration.create() in package org.apache.hadoop.hbase?
Lukas
On 11.2.2014 16:39, Vinay Kashyap wrote:
Minor correction: HBaseConfiguration.create()
This assumes access to hbase-site.xml is provided.
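A minimal sketch of that call, assuming hbase-site.xml is on the client's classpath (otherwise the returned Configuration only holds the shipped defaults); the property queried is just an example:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ReadConf {
    public static void main(String[] args) {
        // Loads hbase-default.xml and, if present on the classpath, hbase-site.xml.
        Configuration conf = HBaseConfiguration.create();
        System.out.println(conf.get("hbase.zookeeper.quorum"));
    }
}
```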
On Tue, Feb 11, 2014 at 8:41 AM, Lukas Nalezenec
lukas.naleze...@firma.seznam.cz wrote:
I would like to know what configuration causes MapReduce to have only one
map while an input split of 1 and lines per map of 1000 are set in the job
configuration.
It's a 2-node cluster, and I tried a scan with startRow and endRow.
I want to have at least 2 maps, one on each machine.
Hi Tousif,
You will have one map per region.
What is your table format for now? How many regions? How many CFs, etc.?
JM
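As background to the answer above: TableInputFormat produces one split per region, so settings like lines-per-map (which apply to file-based input formats) have no effect, and a single-region table yields a single map task. A sketch of such a scan job, untested here (table name and row range are taken from the thread; getting 2 maps would require the table to have at least 2 regions):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;

public class ScanJob {
    static class MyMapper extends TableMapper<NullWritable, NullWritable> {
        @Override
        protected void map(ImmutableBytesWritable row, Result value, Context ctx) {
            // per-row work goes here
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = Job.getInstance(conf, "scan-testtable");
        job.setJarByClass(ScanJob.class);

        Scan scan = new Scan();
        scan.setStartRow(Bytes.toBytes("row-0"));
        scan.setStopRow(Bytes.toBytes("row-999"));

        // One map task is created per region overlapping [startRow, stopRow).
        TableMapReduceUtil.initTableMapperJob("testtable", scan, MyMapper.class,
                NullWritable.class, NullWritable.class, job);
        job.setNumReduceTasks(0);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```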
2014-02-11 5:59 GMT-05:00 Tousif tousif.pa...@gmail.com:
Do you have just one region for this table?
On Tue, Feb 11, 2014 at 2:59 AM, Tousif tousif.pa...@gmail.com wrote:
I am trying to use snapshot+WALPlayer for HBase DR for our cluster in AWS.
I tried the steps below to verify it, but it seems the new data is not being
replayed into the new table. Is anything wrong with my steps?
1. Populate TestTable using the PerformanceEvaluation tool
2. Count the rows being written: 63277
I think the problem here is that you're trying to replay the WAL entries
for TestTable-clone entries..
which are not present... you probably want to replay the entries from the
original TestTable.
I think you can specify a mapping.. something like WALPlayer TestTable
TestTable-cloned
Matteo
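A hedged sketch of that invocation (the WAL input directory is an assumption and varies by HBase version and deployment; the clone table name is the one used earlier in the thread):

```shell
# Replay WAL entries recorded for TestTable into TestTable-clone.
# Usage: WALPlayer [options] <wal inputdir> <tables> [<tableMappings>]
hbase org.apache.hadoop.hbase.mapreduce.WALPlayer /hbase/WALs TestTable TestTable-clone
```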
Thanks, that works. The new table has more data; I will verify the count.
Thanks,
Tian-Ying
On Tue, Feb 11, 2014 at 11:42 AM, Matteo Bertozzi
theo.berto...@gmail.comwrote:
We've also recently updated
http://hbase.apache.org/book/ops.capacity.html which contains similar
numbers, and some more details on the items to
consider for sizing.
Enis
On Sat, Feb 8, 2014 at 10:12 PM, Ramu M S ramu.ma...@gmail.com wrote:
Thanks Lars.
We were in the process of building
Hi,
I'm using the HBase client API to connect to a remote cluster and do
some operations. This project will certainly require the hbase and
hadoop-core jars. And my question is whether I should use the 'java'
command and handle all the dependencies (using the Maven Shade plugin), or
set the classpath
Hi,
To process the data in HBase, you have several options:
1. a Java program using the HBase API;
2. a MapReduce program;
3. a high-level language, such as Hive or Pig (built on top of MapReduce);
4. Phoenix, also a high-level language (built on coprocessors).
Which one you should use depends