Thanks for your response.
Regards!
Yong
On Sat, Mar 17, 2012 at 10:09 PM, Stack wrote:
> On Sat, Mar 17, 2012 at 1:06 AM, yonghu wrote:
>> Hello,
>>
>> I have used the command ./hbase
>> org.apache.hadoop.hbase.mapreduce.Export 'test'
>> http://loc
Hello,
I have used the command ./hbase
org.apache.hadoop.hbase.mapreduce.Export 'test'
http://localhost:8020/test to export the data content from the test
table. And I can see the exported content in my HDFS folder,
hdfs://localhost/test/part-m-0. I have tried two commands to read
the content
I noticed that the problem is that somehow I lost the data from HDFS.
The code is OK.
Regards!
Yong
On Fri, Mar 16, 2012 at 5:59 PM, yonghu wrote:
> I implemented the code like this way. My Hbase version is 0.92.0.
>
> Configuration conf = new Conf
c04e87fc7a9.
I can see the results from the command line. I want to know why this
code does not work.
Regards!
Yong
On Fri, Mar 16, 2012 at 5:50 PM, yonghu wrote:
> Thanks for your information.
>
> Regards!
>
> Yong
>
> On Fri, Mar 16, 2012 at 5:39 PM, Stack wrote:
Thanks for your information.
Regards!
Yong
On Fri, Mar 16, 2012 at 5:39 PM, Stack wrote:
> On Fri, Mar 16, 2012 at 9:01 AM, yonghu wrote:
>> Hello,
>>
>> Can anyone give me an example of constructing an HFileReaderV2 object
>> to read HFile content?
Hello,
One HFile consists of many blocks. Suppose we have two blocks, b1 and
b2. The size of each block is 2K. In b1, we have two key-value pairs,
whose keys are t1 and t2, respectively. Each key-value pair is 1K, so
b1 is full. Suppose that we now insert a new tuple whose key is
also t1. The HB
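The block arithmetic in the scenario above can be sketched in plain Java (no HBase dependency; the 2K block size and 1K entry size come from the example, everything else is invented). It packs key-value entries into fixed-size blocks in write order, showing that b1 is full after t1 and t2, so any later entry must land in a new block:

```java
import java.util.ArrayList;
import java.util.List;

public class BlockPacking {
    static final int BLOCK_SIZE = 2048;   // 2K per block, as in the example
    static final int ENTRY_SIZE = 1024;   // each key-value pair is 1K

    // Pack entries into blocks in write order; a block is closed once full.
    static List<List<String>> pack(List<String> keys) {
        List<List<String>> blocks = new ArrayList<>();
        List<String> current = new ArrayList<>();
        int used = 0;
        for (String key : keys) {
            if (used + ENTRY_SIZE > BLOCK_SIZE) {   // b1 is full -> start b2
                blocks.add(current);
                current = new ArrayList<>();
                used = 0;
            }
            current.add(key);
            used += ENTRY_SIZE;
        }
        if (!current.isEmpty()) blocks.add(current);
        return blocks;
    }

    public static void main(String[] args) {
        // t1, t2 fill the first block; the second t1 starts a new one.
        System.out.println(pack(List.of("t1", "t2", "t1")));
        // prints [[t1, t2], [t1]]
    }
}
```

Note that in real HBase the second t1 would not be appended to the same HFile at all: flushed HFiles are immutable, so the new version of t1 is buffered in the MemStore and written into a separate HFile on the next flush, and the two versions are only merged later by a compaction.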
ile.Writer hwriter = new HFile.Writer(fs, new
Path("hdfs://localhost:8020/test"), 2, (Compression.Algorithm)null,
null);
The data is stored in HDFS.
Regards!
Yong
On Tue, Mar 6, 2012 at 5:10 PM, Stack wrote:
> On Tue, Mar 6, 2012 at 8:07 AM, yonghu wrote:
>> Thanks for your re
Thanks for your reply. I have already solved the problem.
Yong
On Tue, Mar 6, 2012 at 5:02 PM, Stack wrote:
> On Tue, Mar 6, 2012 at 6:48 AM, yonghu wrote:
>> Thanks for your reply. But I am using hbase 0.90.2. There is no
>> HFileWriterV2 class. Can you show me how to
Thanks for your reply. But I am using HBase 0.90.2, where there is no
HFileWriterV2 class. Can you show me how to use the HFileWriter
constructor?
Thanks
Yong
On Tue, Mar 6, 2012 at 3:19 PM, Konrad Tendera wrote:
> yonghu writes:
>
>>
>> Hello,
>>...
>
> try something lik
Hello,
I wrote a simple program to directly write data content to an HFile.
HBase is installed in pseudo-distributed mode.
Here is my code:
public static void putData() throws Exception{
FileSystem fs = new RawLocalFileSystem();
fs.setConf(new Configuration());
//
is the first region in a table. If region has both an empty start
> and an empty end key, its the only region in the table"
>
>
>
>
> On 3/5/12 7:27 AM, "yonghu" wrote:
>
>>Hello,
>>
>>My HBase version is 0.90.2 and installed in pseu
Hello,
My HBase version is 0.90.2 and installed in pseudo mode. I have
successfully inserted two tuples in the 'test' table.
hbase(main):005:0> scan 'test'
ROW                          COLUMN+CELL
jim column=course:english,
timestamp=1330949116240, value=1.3
tom
rowse/HBASE-3171 may
> simplify this someday :)
>
> 2012/2/10 yonghu :
>> Thanks!
>> I know this. I just want to know which nodes store this information
>> when the client first contacts the HBase cluster: the HMaster, a
>> RegionServer, or a special node which runs the
in a separate nodes?
Yong
On Fri, Feb 10, 2012 at 12:16 PM, Roger wrote:
> To my knowledge, it is a three level tree-like structure.
> --
> Sent from a mobile device
>
> -- Original --
> From: "yonghu"
> Date: Fri, Feb 10, 2012 0
Hello,
I read some articles which mention that before the client connects to
the master node, it will first connect to the ZooKeeper node to find
the location of the root node. So my question is: is the node that
stores the root information different from the master node, or are
they the same node?
T
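To make the "three-level tree-like structure" mentioned in the replies concrete, here is a toy plain-Java model of the 0.90-era lookup chain. All server names are invented; the only point is the chain ZooKeeper -> -ROOT- -> .META. -> user region, which, as I understand it, is served by region servers rather than by the HMaster:

```java
import java.util.Map;

public class CatalogLookup {
    // ZooKeeper stores only the address of the server holding -ROOT-.
    static final String ROOT_SERVER = "rs1";

    // Toy catalog: which server holds which table/region (all names made up).
    static final Map<String, Map<String, String>> CATALOG = Map.of(
            "rs1", Map.of("-ROOT-", "rs2"),   // -ROOT- locates the .META. server
            "rs2", Map.of(".META.", "rs3"),   // .META. locates user-table regions
            "rs3", Map.of("test", "rs3")      // region of table 'test'
    );

    // Follow the three-level chain to find the server for a user table.
    static String locate(String table) {
        String metaServer = CATALOG.get(ROOT_SERVER).get("-ROOT-");
        String regionServer = CATALOG.get(metaServer).get(".META.");
        return CATALOG.get(regionServer).get(table);
    }

    public static void main(String[] args) {
        System.out.println(locate("test"));   // prints rs3
    }
}
```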
> -- Lars
>
>
> - Original Message -
> From: yonghu
> To: user@hbase.apache.org; lars hofhansl
> Cc:
> Sent: Thursday, January 26, 2012 1:22 PM
> Subject: Re: the occasion of the major compact?
>
> yes. I read this blog
> http://hadoop-hbase.blogspot.com/201
hofhansl wrote:
> Unless you have HBASE-4536 (only in trunk, though) or are parsing the HFiles
> yourself you have no way of actually getting to the deleted data.
>
> -- Lars
>
>
>
> - Original Message -
> From: yonghu
> To: user@hbase.apache.org
> Cc:
&
aining major compaction
> logic:
> http://search-hadoop.com/m/JR9sK1xnbj21
> http://search-hadoop.com/m/X7W7q1xnbj21
>
>
> The vast majority of users need features completely unrelated to
> compactions. The compaction algorithm is an easy target to worry about.
>
>
> On
> Anyway i'm digging into 0.92, I hope to get those insight soon.
>
> Mikael.S
>
> On Thu, Jan 26, 2012 at 4:39 PM, yonghu wrote:
>
>> Thanks for your response.
>>
>> I know that a major compaction can be triggered by a client request,
>> by time, and by size. In my s
n 26, 2012 at 3:51 PM, Damien Hardy wrote:
>
>> On 26/01/2012 14:43, yonghu wrote:
>> > Hello,
>> >
>> > I read this blog http://outerthought.org/blog/465-ot.html. It mentions
>> > that every 24 hours the major compaction will occur. My question is
&
yes
On Tue, Jan 24, 2012 at 2:01 PM, sangeetha k wrote:
> Harsh,
>
> Thanks for the response.
>
> Do I need to setup both hadoop and Hbase in distributed mode?
>
> Thanks,
> Sangeetha K
>
>
>
> From: Harsh J
> To: user@hbase.apache.org; sangeetha k
> Sent: Tuesd
gt; settings we used are still commented out at the bottom.
>
> HTH
>
> Tom
>
> ____
> From: Leonardo Gamas [leoga...@jusbrasil.com.br]
> Sent: 06 January 2012 15:05
> To: user@hbase.apache.org
> Subject: Re: zookeeper connection prob
et 2181
>
> What happens?
>
> In the server:
>
> $ netstat -na | grep LISTEN | grep 2181
>
> What is printed?
>
> 2012/1/6 yonghu
>
>> Thanks for your response. I use Ubuntu 10.04 and the following is the
>> command information I got
>>
>
han to modify the rules.
>
> We have successfully combined (Hadoop 0.20.2 with HBase 0.90.4) and (Hadoop
> 1.0.0 with HBase 0.92).
>
> HTH
>
> Tom
>
> From: yonghu [yongyong...@gmail.com]
> Sent: 05 January 2012 21:22
> To: user@
nd Hbase 0.90.3 version are
compatible.
Yong
On Thu, Jan 5, 2012 at 6:32 PM, Royston Sellman
wrote:
> Just to check - did you disable it the way Tom suggested? By stopping
> iptables?
> It's not sufficient just to turn off firewall from the control panel/app.
>
> Best,
> Royston
I have already disabled IPv6 and closed the firewall, but I still get
the same problem. :(
Yong
On Thu, Jan 5, 2012 at 5:16 PM, yonghu wrote:
> I set up a pseudo mode; do I also need to close the firewall?
>
> Yong
>
> On Thu, Jan 5, 2012 at 4:59 PM, Tom Wilcox wrote:
>>
: Leonardo Gamas [leoga...@jusbrasil.com.br]
> Sent: 05 January 2012 15:45
> To: user@hbase.apache.org
> Subject: Re: zookeeper connection problem in pseudo mode
>
> I had a similar problem some time ago. The problem seems to be in the ipv6
> configuration. I solved it disabl
Hello,
I tried the pseudo-distributed mode of HBase. The Hadoop version is
0.20.2. I configured hbase-site.xml as

  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>

the same as in Hadoop's core-site.xml and hdfs-site.xml files, respectively.
I can succe
; Hot regions can be split, etc.
>
> According to documentation found here
> https://issues.apache.org/jira/browse/HDFS-265, hflush only returns to client
> when all nodes
> in the pipeline have sync'ed the data.
>
> -- Lars
>
>
> - Original Message ---
milyException("Empty family is invalid");
> }
> checkFamily(family);
> }
> }
> }
>
>
>
> On Sat, Nov 26, 2011 at 2:47 AM, yonghu wrote:
>
> > But I was just considering the efficiency. Why does HBase not directly
>
6, 2011 at 1:14 AM, yonghu wrote:
>
> > hello,
> >
> > I read http://hbase.apache.org/book/versions.html and have a question
> > about
> > delete operation. As it mentions, the user can delete a whole row or
> delete
> > a data version of cell. The del
Hello,
I read http://hbase.apache.org/book/versions.html and have a question about
the delete operation. As it mentions, the user can delete a whole row or
delete a single data version of a cell. The delete operation for a data
version of a cell just writes a tombstone marker for that version. I want to know h
Hello,
I read the blog
http://www.larsgeorge.com/2009/10/hbase-architecture-101-storage.html and
tried to understand the architecture of HBase. There is one thing that
confuses me. If I understand correctly, the client must connect to the
Zookeeper cluster to get the metadata of a particular tab