That looks like a healthy startup. If you browse to alaukik:60010, do
you see anything? If not from a remote client, can you see a UI if
you browse on the machine alaukik using links or lynx?
If you can get to the UI locally but not remotely, then something's up on
that host with the mapping of ho
Thank you for the feedback and clearing the confusion.
Thanks,
Gayatri
On Mon, Nov 15, 2010 at 3:59 PM, Jonathan Gray wrote:
> > Thank you for the feedback. So to summarize, HBase is doing good for
> > high
> > reads, writes. Update is really writing a new version of the data. So
> > updating i
When I use svn plugin within Eclipse to checkout the source code of HBase,
it comes up with an "Operation failed" message, which indicates:
svn: Processing REPORT request response failed: Premature end of file.
(/repos/asf/!svn/vcc/default)
svn: REPORT request failed on '/repos/asf/!svn/vcc/default'
If you try svn from the command line, do you see the same thing? (It's
working for me, FYI.)
St.Ack
On Thu, Nov 18, 2010 at 11:20 AM, Marcus Chou wrote:
> When I use svn plugin within Eclipse to checkout the source code of HBase,
> it comes up with "Operation failed" message, which indicates:
>
> svn:
Probably not, I think. It's said that TortoiseSVN and other GUI clients,
in my case the svn plugin in Eclipse, can run into this problem. And several
weeks ago there was no trouble the first time I checked out the
source code.
However, here my trouble is that I could only access the sv
Based on what we saw... there shouldn't be a reason why you wouldn't bump it up
to something north of 32K or even 64K.
Granted, our data nodes have 32GB of memory and we don't have users on the
machine, so setting a 64K ulimit -n is really just noise.
I think most Unix/Linux have the
I would say "yes", conditionally.
But indeed you have to use add_table.rb to add the copied-over regions to the
META region of the target cluster.
And of course if you copy over table data as HFiles you have to at least
disable the table on the source cluster or shut it down before the copy, s
I can get about 1000 regions per node operating comfortably on a 5 node
c1.xlarge EC2 cluster using:
Somewhere out of /etc/rc.local:
echo "root soft nofile 65536" >> /etc/security/limits.conf
echo "root hard nofile 65536" >> /etc/security/limits.conf
sysctl -w fs.file-max=65536
sysctl -w fs.epol
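One caveat with the snippet above: run from /etc/rc.local, the plain `echo >>` lines would append duplicate entries to limits.conf on every boot. A minimal guarded sketch (the `hbase` service user and the demo file path are assumptions, not from the original mail):

```shell
# Sketch: append nofile limits for an assumed "hbase" service user, guarded
# so it is safe to re-run on every boot (plain echo >> would duplicate lines).
append_nofile_limits() {
  limits_file="$1"
  if ! grep -q "hbase hard nofile" "$limits_file" 2>/dev/null; then
    echo "hbase soft nofile 65536" >> "$limits_file"
    echo "hbase hard nofile 65536" >> "$limits_file"
  fi
}

# Demo against a scratch file; point it at /etc/security/limits.conf for real.
append_nofile_limits ./limits.demo
append_nofile_limits ./limits.demo   # second call is a no-op
```

The grep guard is what makes re-running from rc.local harmless.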
Okay... so add_table can be run even if there is an existing table by the
same name on the new DFS? Can you give me a basic idea of how add_table
works/what it really does inside? I am basically interested in dumping
processed HBase data into backup servers instead of dumping raw data. That
would sa
Meanwhile, I was able to roughly estimate which table is getting traffic by
executing the following commands:
1) Store ngrep output in a file (for a few seconds)
ngrep -W byline port 60020 > temp.out
2) Find out all the tables that region server has from HBase user interface.
For each table execute
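The command elided above isn't recoverable from the archive, but one hedged way to finish step 2 looks like this (the table names and capture contents below are fabricated; this assumes each table name appears literally in the captured request lines):

```shell
# Fabricated stand-in for the ngrep capture from step 1.
printf 'get users row1\nget users row2\nput events row9\n' > temp.out

# Count how often each table name shows up in the captured traffic.
for table in users events; do
  echo "$table: $(grep -c "$table" temp.out) hits"
done
```

A rough per-table hit count like this is enough to see which table dominates the traffic on port 60020.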
Hi folks,
I'm currently investigating ways to scale a matching engine which is
currently based on stored procedures. The kind of matching it does involves
several columns from multiple sources, but usually only two. The data
similarity is quite high in most cases, so the match numbers are high. The
Hi Lars,
Perfect. Thanks for confirming. I have some existing code for which I
want to add HBase support
with minimal modifications to the original code base. I think I need
to provide an InputFormat containing
TableSplit.
On a side note, I feel the Key and Values in map, reduce, record
reader metho
You can query Stargate.
E.g.
/usr/bin/curl http://$server:8080/status/cluster
You can see region information in the output.
On Thu, Nov 18, 2010 at 9:11 AM, Vaibhav Puranik wrote:
> Meanwhile, I was able to roughly estimate which table is getting traffic by
> executing the following commands:
>
Is this helpful?
http://stackoverflow.com/questions/2689965/proxy-settings-in-tortoisesvn-and-command-line-svn-client
On Thu, Nov 18, 2010 at 6:06 AM, Marcus Chou wrote:
> I think not, probably. It is said that the TortoiseSVN or other GUI
> version,
> in my case the plugin in Eclipse, would en
No and pretty much no.
You cannot simply throw tables together as that would stuff up region
boundaries most likely. I would only replace or add a new table that
way (if at all). And the add_table.rb is more of a kludge as this was
needed to fix sick clusters in the past. This is not something you
Can the data in this table be partitioned?
If so, you can use hbase TableInputFormat to consume it.
On Thu, Nov 18, 2010 at 9:15 AM, barrymac wrote:
>
> Hi folks,
>
> I'm currently investigating ways to scale a matching engine which is
> currently based on stored procedures. The kind of matchin
What I'm trying to find is how much improvement in throughput & reduction in
latency can I hope to get by spreading out a table across multiple region
servers.
We have some tables that are wide but short... each currently fits in a
single region on a single region server. I'm trying to determine i
FYI You can preconfigure the number of regions when you create your table:
https://issues.apache.org/jira/browse/HBASE-2473
On Thu, Nov 18, 2010 at 11:10 AM, Suraj Varma wrote:
> What I'm trying to find is how much improvement in throughput & reduction
> in
> latency can I hope to get by spreadi
Ah, that's what I was wondering. Thanks for the information, everyone!
hari
On Fri, Nov 19, 2010 at 12:08 AM, Lars George wrote:
> No and pretty much no.
>
> You cannot simple throw tables together as that would stuff up region
> boundaries most likely. I would only replace or add a new table th
Ted,
I looked at /usr/bin/curl http://$server:8080/status/cluster.
But there is no traffic data there. All the data this interface returns is
already available through the HBase web interface.
Regards,
Vaibhav
On Thu, Nov 18, 2010 at 10:05 AM, Ted Yu wrote:
> You can query Stargate.
> E.g.
> /
Right.
Stargate's cluster status is a centralized view. I use it to monitor the health
of our cluster by selectively querying rows on each region server.
On Thu, Nov 18, 2010 at 4:19 PM, Vaibhav Puranik wrote:
> Ted,
>
> I looked at /usr/bin/curl http://$server:8080/status/cluster.
>
> But there is
Awesome! It works!
On 19 November 2010 02:28, Ted Yu wrote:
> Is this helpful ?
>
> http://stackoverflow.com/questions/2689965/proxy-settings-in-tortoisesvn-and-command-line-svn-client
>
> On Thu, Nov 18, 2010 at 6:06 AM, Marcus Chou
> wrote:
>
> > I think not, probably. It is said that the Tor
I think I've found the problem:
http://svn.apache.org/repos/asf/hbase/trunk is okay, while
http://svn.apache.org/repos/asf/hbase is not; checking out the latter
still encounters a similar problem:
"svn: REPORT of '/repos/asf/!svn/vcc/default': Compressed response was
truncated (http://svn.apache.
Hi,
I am using the hbase shell to verify some knowledge I picked up from
the post "Understanding HBase and BigTable".
That post says: "If an application asks for a given row at a given timestamp,
HBase will return cell data where the timestamp is less than or equal to
the one provided."
I'd have to look at the shell code, but it's likely that the shell is
building the query to be 'give me the KV at TS=19'. The Java API lets
you specify both 'give me exact time' and 'give me time range'.
-ryan
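To make the distinction concrete, the "newest version at or below the asked-for timestamp" rule can be modeled outside HBase. A toy sketch with fabricated data (plain shell/awk, not the HBase API):

```shell
# Two versions of one cell, written at ts=10 and ts=20 (fabricated data).
printf '10 v1\n20 v2\n' > versions.txt

# "Get at ts=19" under range semantics: pick the newest version whose
# timestamp is <= 19, so v1 wins; an exact-match query at ts=19 would
# return nothing, which matches the shell behavior described above.
awk -v ts=19 '$1 <= ts && $1 > best { best = $1; val = $2 } END { print val }' versions.txt
```

In the Java API, if memory serves, this corresponds to Get.setTimeRange(min, max) for range semantics versus Get.setTimeStamp(ts) for the exact match.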
On Thu, Nov 18, 2010 at 9:17 PM, Pan W wrote:
> Hi,
>
> I am using hbase shell to ve
Ah, that's a nice feature - thanks for pointing this out. Once I get on
0.90, I can probably use this instead of scripting splits.
Thanks,
--Suraj
On Thu, Nov 18, 2010 at 11:13 AM, Ted Yu wrote:
> FYI You can preconfigure the number of regions when you create your table:
> https://issues.apache
We currently use a patched 0.20.6 with HBASE-2473 in production.
On Thu, Nov 18, 2010 at 9:45 PM, Suraj Varma wrote:
> Ah, that's a nice feature - thanks for pointing this out. Once I get on
> 0.90, I can probably use this instead of scripting splits.
>
> Thanks,
> --Suraj
>
> On Thu, Nov 18, 201
Hi Ryan, just as you say, I am now using setTimeRange to fulfill my request.
I am just curious about the 'less than' concept,
'If an application asks for a given row at a given timestamp,
HBase will return cell data where the timestamp is less than or
equal to the one provided.'
esp
Hello,
Thanks to apurtell's github repo of hbase-ec2, I managed to start an
hbase cluster.
Everything works nicely; I can check the UIs of the JT/NN and the HBase Master.
What I can't see are the ganglia metrics, despite the URL provided by the proxy
http://ec2-a-b-c-d.compute-1.amazonaws.com/gangli