BTW, I disable the block cache for the HyperLogLog table to avoid cache
pollution.
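For context, the kind of HyperLogLog sketch being stored can be approximated with a toy implementation. This is a minimal Python sketch for illustration only; the register count and hashing choices are my assumptions, not the poster's actual serialized format:

```python
import hashlib
import math

class HyperLogLog:
    """Toy HyperLogLog cardinality estimator with 2**p registers."""

    def __init__(self, p=8):
        self.p = p
        self.m = 1 << p                      # number of registers
        self.registers = [0] * self.m
        self.alpha = 0.7213 / (1 + 1.079 / self.m)

    def add(self, item):
        h = int(hashlib.sha1(item.encode()).hexdigest(), 16)
        idx = h & (self.m - 1)               # low p bits pick a register
        w = h >> self.p
        # rank = position of the lowest set bit in the remaining hash bits,
        # which is geometrically distributed for a uniform hash
        rank = 1
        while w & 1 == 0 and rank <= 64:
            rank += 1
            w >>= 1
        self.registers[idx] = max(self.registers[idx], rank)

    def estimate(self):
        raw = self.alpha * self.m ** 2 / sum(2.0 ** -r for r in self.registers)
        zeros = self.registers.count(0)
        if raw <= 2.5 * self.m and zeros:
            # small-range (linear counting) correction
            return self.m * math.log(self.m / zeros)
        return raw
```

Adding one million users to a sketch like this costs a few kilobytes of registers rather than a set of a million keys, which is why the serialized rows above stay relatively small.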
2014-09-04 14:29 GMT+08:00 Bi,hongyu—mike :
Hi all,
we store serialized HyperLogLog objects in HBase by means of a coprocessor,
and the row size distribution is below:
Row size (bytes):
min = 4279.00
max = 770757.00
mean = 67340.24
stddev = 153968.88
median = 14453.00
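Summary statistics like the ones above can be computed directly from a list of row sizes. A minimal Python sketch; the sample values below are made up for illustration, not the poster's dataset:

```python
import statistics

def row_size_summary(sizes):
    """Return the min/max/mean/stddev/median summary for a list of row sizes."""
    return {
        "min": min(sizes),
        "max": max(sizes),
        "mean": statistics.mean(sizes),
        "stddev": statistics.pstdev(sizes),  # population stddev; stdev() for sample
        "median": statistics.median(sizes),
    }

# made-up example sizes in bytes
sizes = [4279, 14453, 9200, 770757, 67340]
summary = row_size_summary(sizes)
print(summary["min"], summary["max"], summary["median"])
```

A long tail like the one reported (median ~14 KB, max ~770 KB, stddev larger than the mean) is what suggests a heavily skewed distribution of serialized sketch sizes.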
Now I understand the Jetty port. But why doesn't port 16010 work? Should I
get a response when I run `curl 127.0.0.1:16010`?
On Thu, Sep 4, 2014 at 10:54 AM, Ted Yu wrote:
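One way to answer that yourself before curling: check whether anything is listening on the port at all. A minimal Python sketch, not HBase-specific:

```python
import socket

def port_is_listening(host, port, timeout=1.0):
    """Return True if a TCP connect to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# If this prints False, curl will fail too: nothing is bound to the port.
print(port_is_listening("127.0.0.1", 16010))
```

This is the same question `lsof -n -i4TCP:16010` answers from the server side; if neither shows a listener, the info server was bound to a different port.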
Yes. See HBASE-5349 which is in the upcoming 1.0 release.
Cheers
On Wed, Sep 3, 2014 at 7:57 PM, 牛兆捷 wrote:
Thanks zhihong.
Once the region server starts, can the sizes of these buffers (e.g.,
memstore, block cache) be adjusted dynamically?
2014-09-04 5:28 GMT+08:00 Ted Yu :
> If you go to region server Web UI, rs-status#memoryStats, you would see
> Memstore Size under Server Metrics.
When starting master, you should see a line in the following form:
2014-09-03 19:46:56,866 INFO [main] http.HttpServer: Jetty bound to port
63235
16010 and 63235 are used by the master process:
TYus-MacBook-Pro:s tyu$ lsof -n -i4TCP:16010
COMMAND PID USER FD TYPE DEVICE SIZE/O
Thanks @ted. But I can only get HMaster info from port 57944. Could you
list all the ports that standalone HBase exposes? Do they all have
default values, or do they change randomly?
On Thu, Sep 4, 2014 at 9:44 AM, Ted Yu wrote:
16010 is the default info port.
Cheers
On Sep 3, 2014, at 6:13 PM, tobe wrote:
That's weird. I get nothing with `lsof -n -i4TCP:16010`.
root@emacscode:/opt/hbase/bin# lsof -n -i4TCP:16010
root@emacscode:/opt/hbase/bin# netstat -nltup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address
State PID/Program name
tcp
I used your command to start master (using trunk):
$ lsof -n -i4TCP:16010
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
java 39528 tyu 178u IPv6 0xd5c4ef153af26eff 0t0 TCP *:16010
(LISTEN)
For region server:
$ bin/hbase regionserver start
2014-09-03 17:57:37,98
I cloned the latest code from git://git.apache.org/hbase.git and ran `mvn
clean package -DskipTests` to compile. I didn't change anything, then ran
`./bin/hbase master start`. I have tried on several machines. Could you
give it a try, or tell me what's wrong with my procedure? Now we're using
0.94.11 and it d
Sanjiv,
If you're feeling adventurous (and are running on a Linux machine), check
out https://issues.apache.org/jira/browse/HBASE-11885, which was opened to
address the difficulties you've encountered.
-Dima
On Wed, Sep 3, 2014 at 4:39 PM, Ted Yu wrote:
bq. hbase.regionserver.global.memstore.size is 0.4,
hfile.block.cache.size is 68.09002
Can you check the values for the following configs ?
hfile.block.cache.size (default 0.4)
hbase.bucketcache.size
Cheers
On Tue, Sep 2, 2014 at 5:12 AM, @Sanjiv Singh
wrote:
> I checked at both hbase-defau
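These two fractions matter because HBase refuses configurations where memstore plus block cache claim too much of the heap; the 0.8 threshold below matches the sanity check in recent releases but treat it as an assumption here. Note that the reported 68.09002 cannot be a valid fraction of the heap, which this sketch would flag:

```python
def check_heap_fractions(memstore_size, block_cache_size, threshold=0.8):
    """Fail if memstore + block cache would leave too little free heap.

    threshold=0.8 mirrors HBase's sanity check (an assumption here).
    """
    total = memstore_size + block_cache_size
    if total > threshold:
        raise ValueError(
            "memstore (%s) + block cache (%s) = %s exceeds the %s threshold"
            % (memstore_size, block_cache_size, total, threshold))
    return total

# defaults: 0.4 + 0.4 sits exactly at the limit, so it passes
check_heap_fractions(0.4, 0.4)
```

With a value like 68.09002 the check fails immediately, which is consistent with the suggestion that a different hbase-site.xml (or a mis-set percentage instead of a fraction) is being picked up.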
I think somehow you must be looking at a different hbase-site.xml file. Can
you please create a new user on your system and run through the whole
Quickstart chapter (http://hbase.apache.org/book.html#quickstart) with
everything fresh, and let us know what happens? You certainly shouldn't
have any b
If you go to region server Web UI, rs-status#memoryStats, you would see
Memstore Size under Server Metrics.
The Stats tab under Block Cache would show statistics of block cache.
The Storefile Metrics tab under Regions would show Index Size.
The above is from 0.98.5 release.
Cheers
On Tue, Sep
OK, I figured it out: when flushing the memstore out to an HFile, it uses a
scanner that masks any Put earlier than the delete, so the HFile
will not contain the Put.
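The masking behavior described above can be simulated outside HBase. A minimal Python sketch of one column, assuming the delete marker masks puts at or before its timestamp; real HBase delete semantics vary by delete type:

```python
def flush_visible(puts, delete_ts):
    """Return the puts that survive a flush when a delete marker at
    delete_ts is present: any put at or before the marker is masked.

    Simplified model of a single column; not the actual HBase scanner.
    """
    return [(ts, value) for ts, value in puts if ts > delete_ts]

puts = [(100, "v1"), (150, "v2")]
# The put at ts=100 is masked by the delete at ts=100 and never reaches
# the HFile; the later put survives.
print(flush_visible(puts, delete_ts=100))  # [(150, 'v2')]
```

This matches what the HFile dump showed: the masked Put simply never makes it to disk, rather than being written and filtered at read time.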
Thanks
Tian-Ying
On Wed, Sep 3, 2014 at 11:27 AM, Tianying Chang wrote:
Hi,
I did a small test; I want to verify that the delete marker is placed before
any Put of the same row/version, for example:
put 't1', 'cf1', 'v1', 100
delete 't1', 'cf1', 100
flush 't1'
But when I dump the hfile being flushed out with command below:
hbase org.apache.hadoop.hbase.io.hfile.HFile -p -
Which release are you running ?
If you're running trunk build, from HConstants.java :
public static final int DEFAULT_MASTER_INFOPORT = 16010;
Cheers
On Wed, Sep 3, 2014 at 7:03 AM, tobe wrote:
Hi Ted,
I checked both the RegionServer and HMaster logs.
No exception is seen in the RegionServer logs, but when the RegionServer
dies, I see these messages in the HMaster logs:
2014-09-03 03:50:37,496 DEBUG
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
Looked up root region l
It's a little weird when I run the standalone HBase cluster from trunk. I
notice that the default RegionServer info port is not 16010.
And when I explicitly set hbase.regionserver.info.port, it doesn't work: the
port changes every time I run.
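A port that "changes every time" is the classic symptom of a server binding to port 0, where the OS assigns a random ephemeral port. Whether HBase is actually falling back to port 0 here is an assumption, but the mechanism is easy to demonstrate in Python:

```python
import socket

def bind_ephemeral():
    """Bind to port 0 and return the OS-assigned ephemeral port."""
    s = socket.socket()
    s.bind(("127.0.0.1", 0))
    port = s.getsockname()[1]
    s.close()
    return port

# Two binds almost always yield different, non-zero ports.
print(bind_ephemeral(), bind_ephemeral())
```

If the configured info port were being honored, you would see the same fixed port on every start instead.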
Hello Serega.
It looks like HTable's pool was closed. That can happen if the HTable is
closed while an operation is in progress (a put, in your case). Check that
the HTable is not closed too early or concurrently with the put operation.
On Wed, Sep 3, 2014 at 12:15 AM, Serega Sheypak
wrote:
> Hi, I'm using HBase CDH 4.7
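The failure mode is analogous to submitting work to an executor that has already been shut down. A minimal Python sketch of that race using the stdlib, not the HBase client:

```python
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=2)
pool.submit(print, "put while the pool is open is fine").result()
pool.shutdown()                       # analogous to HTable.close()

try:
    pool.submit(print, "put after close")  # like put() racing close()
except RuntimeError as e:
    print("rejected:", e)             # cannot schedule new futures after shutdown
```

The fix is the same in both worlds: establish a clear ownership rule so close() only runs after every in-flight operation has completed.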
Hi again,
after a bit of work it finally runs the job. The problem was that I always
tried to run the MapReduce jobs directly from Eclipse. This worked with
Hadoop 1.2.1 but not with Hadoop 2.4.1. Running the job with `./hadoop jar
mapredtest.jar` works fine with both versions.
So here I have an add