is somewhat related to the size of the
> client screen that displays the values on a map.
> Normally a client requests the values for the area that is displayed on
> the screen.
>
>
> -Original Message-
> From: Alok Kumar [mailto:alok...@gmail.com]
> Sent: Tuesday, Aug
ss the data density)
>
> For me, it seems it would be more efficient to have one column
> family per field, since that would cost less disk I/O: only the
> needed column's data would be read.
>
>
>
> Can the table have 130 column families in this case?
>
> Or must all the columns go into one column family?
>
>
>
> Thanks.
>
>
>
>
>
--
Alok Kumar
Email : alok...@gmail.com
http://sharepointorange.blogspot.in/
http://www.linkedin.com/in/alokawi
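On the 130-column-family question above: column families are stored separately, so a scan that selects one family does avoid reading the other families' files, but the HBase reference guide has long advised keeping the number of families small (on the order of two or three), since flushes and compactions operate per region across families. A toy sketch of the per-family storage idea in plain Java (illustrative names only, not HBase internals):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Toy model: each column family has its own store files, so a scan
// that selects a subset of families reads only those families' files.
// Plain Java illustration only - not HBase internals.
public class ColumnFamilyStoreSketch {
    private final Map<String, List<String>> storeFilesByFamily = new HashMap<>();

    public void addFamily(String family, List<String> files) {
        storeFilesByFamily.put(family, files);
    }

    // Files touched when the scan selects only the given families.
    public List<String> filesRead(Set<String> selectedFamilies) {
        List<String> touched = new ArrayList<>();
        for (String family : selectedFamilies) {
            touched.addAll(storeFilesByFamily.getOrDefault(family, List.of()));
        }
        return touched;
    }

    public static void main(String[] args) {
        ColumnFamilyStoreSketch table = new ColumnFamilyStoreSketch();
        table.addFamily("cf1", List.of("cf1-hfile-0"));
        table.addFamily("cf2", List.of("cf2-hfile-0", "cf2-hfile-1"));
        // Selecting only cf1 leaves cf2's files untouched:
        System.out.println(table.filesRead(Set.of("cf1"))); // [cf1-hfile-0]
    }
}
```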
s what I am seeing: high CPU, no disk I/O, and network
> I/O happening at a rate of 6-7 MB/sec.
>
>
> Because of this, if I scan the entries of the table using Hive it takes
> ages: around 24 hours to scan the whole table. Any ideas on how
> to debug this?
>
>
> -Vibhav
>
--
Alok Kumar
do you have any guide about coprocessors? how to use them?
>
>
> Thanks!
>
> beatls
>
> On Sat, Jan 5, 2013 at 8:51 AM, Azuryy Yu wrote:
>
> > Coprocessor: rowcount
>
--
Alok Kumar
to
> > produce HFiles, it's failing,
> > saying that it cannot find the HFileOutputFormat class.
> >
> > Here is the full error log.
> > http://pastebin.com/Mz2Cmbbp
> >
> > I think I am missing something here.
> >
> > Please help me out.
> >
> > Thanks & Regards,
> > Jai K Singh
> >
> >
>
--
Alok Kumar
Filter filter2 = new SingleColumnValueFilter(
> > >         Bytes.toBytes("mylogs"), Bytes.toBytes("pcol"),
> > >         CompareOp.EQUAL, comp2);
> > > // filter1.setFilterIfMissing(true);
> > > list.addFilter(filter2);
> > >
> > > scan.setFilter(list);
> > > scanner = table.getScanner(scan);
> > >
> > > System.out.println("Results of scan:");
> > > for (Result result : scanner) {
> > >     for (KeyValue kv : result.raw()) {
> > >         System.out.print("ROW : " + new String(kv.getRow()) + " ");
> > >         System.out.print("Family : " + new String(kv.getFamily()) + " ");
> > >         System.out.print("Qualifier : " + new String(kv.getQualifier()) + " ");
> > >         System.out.println("KV: " + kv + ", Value: " + Bytes.toString(kv.getValue()));
> > >     }
> > > }
> > > scanner.close();
> > >
>
>
--
Alok Kumar
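One note on the quoted code: the setFilterIfMissing(true) call is left commented out. By default a SingleColumnValueFilter includes rows that lack the tested column; setFilterIfMissing(true) drops them instead. A toy sketch of that decision logic in plain Java (not the HBase classes, just the documented semantics of the flag for CompareOp.EQUAL):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of SingleColumnValueFilter's include/skip decision for
// CompareOp.EQUAL. Not HBase code - it only mirrors the documented
// behaviour of setFilterIfMissing() for rows lacking the tested column.
public class ColumnValueFilterSketch {
    private final String qualifier;
    private final String expected;
    private boolean filterIfMissing = false; // HBase default

    public ColumnValueFilterSketch(String qualifier, String expected) {
        this.qualifier = qualifier;
        this.expected = expected;
    }

    public void setFilterIfMissing(boolean filterIfMissing) {
        this.filterIfMissing = filterIfMissing;
    }

    // true = the row is emitted by the scan, false = filtered out
    public boolean includeRow(Map<String, String> row) {
        String value = row.get(qualifier);
        if (value == null) {
            return !filterIfMissing; // missing column: kept by default
        }
        return expected.equals(value);
    }

    public static void main(String[] args) {
        ColumnValueFilterSketch f = new ColumnValueFilterSketch("pcol", "v1");
        Map<String, String> rowWithoutColumn = new HashMap<>();
        System.out.println(f.includeRow(rowWithoutColumn)); // true
        f.setFilterIfMissing(true);
        System.out.println(f.includeRow(rowWithoutColumn)); // false
    }
}
```

With the default, a row that has no "pcol" cell still comes back from the scan, which often surprises people; uncommenting setFilterIfMissing(true) is what excludes such rows.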
Please excuse typos.
>
> -Original Message-
> From: Alok Kumar
> Date: Fri, 24 Aug 2012 13:30:36
> To: ;
> Reply-To: u...@hive.apache.org
> Subject: alter external table location with new namenode address
>
> Hello,
>
> We have hive external table mapped to hbase, now mo
Hello,
We have a Hive external table mapped to HBase, and are now moving
from pseudo-distributed to fully distributed Hadoop cluster.
I found that Hive queries are still pointing to the older namenode address,
i.e.: hdfs://localhost:9000/user/hive/warehouse/, as Hive stores the
full URI in its Derby metastore.
Q. what w
Thank You JD. Now I can update my codebase accordingly.
-Alok
On Wed, Jul 25, 2012 at 8:28 PM, Jean-Daniel Cryans wrote:
> On Wed, Jul 25, 2012 at 1:34 AM, Alok Kumar wrote:
>> Q 1. Do I need to create tables programmatically on the backup cluster every
>> time new tables
sing WAL?
Your help is highly appreciated
Thanks,
--
Alok Kumar
.
> You could write your own code, but you don't get much gain over existing
> UNIX/Linux tools.
>
>
> On Jul 23, 2012, at 7:52 AM, Amlan Roy wrote:
>
> > Hi,
> >
> >
> >
> > Is it feasible to do disk or tape backup for Hbase tables?
> >
> >
> >
> > I have read about the tools like Export, CopyTable, Distcp. It seems like
> > they will require a separate HDFS cluster to do that.
> >
> >
> >
> > Regards,
> >
> > Amlan
> >
>
>
--
Alok Kumar
hi,
you should try looking into the log dir ( $HBASE_HOME/logs ); look for HMaster
logs in the hbase-<$user.name>-master-.log files.
It'll tell you the exact reason why HBase is not coming up.. or try cleaning
the /tmp directory..
cheers,
Alok
On Sun, Jul 22, 2012 at 2:59 PM, Bing Li wrote:
> Dear all,
>
>
not find is whether it is
> >>> possible to configure the replication in such a way that if my master
> >>> cluster goes down the slave cluster will automatically take its
> >>> place..Need some advice/comments from the experts.Many thanks.
> >>>
> >>> Regards,
> >>> Mohammad Tariq
>
--
Alok Kumar
alueA; or I can
> > >use the get function to fetch the rows with rowids in my set to my client
> > >and do the filtering on my client side. But I think neither way is efficient.
> > >Can anyone give me a hint on this?
> > >
> > >Thanks
> > >Yixiao
> >
> >
>
--
Alok Kumar
Hi,
you can make use of the 'setCaching' method of your Scan object.
Eg:
Scan objScan = new Scan();
objScan.setCaching(100); // set it to some integer, as per your use case.
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html#setCaching(int)
thanks,
Alok
On Fri, May 25, 2012 at
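To illustrate the setCaching advice above: a caching of n means the scanner ships n rows per round trip instead of one, so the number of RPCs falls from roughly one per row to about rows/n. A back-of-envelope sketch in plain Java (illustrative numbers, not the HBase client API):

```java
// Back-of-envelope: how scanner caching changes the number of round
// trips for a full scan. Plain Java, illustrative only - not the
// HBase client API.
public class ScanCachingSketch {
    // ceil(totalRows / caching) round trips to the region server
    public static long roundTrips(long totalRows, int caching) {
        if (caching < 1) {
            caching = 1; // one row per RPC, the old per-row behaviour
        }
        return (totalRows + caching - 1) / caching;
    }

    public static void main(String[] args) {
        System.out.println(roundTrips(100_000, 1));   // 100000
        System.out.println(roundTrips(100_000, 100)); // 1000
    }
}
```

The trade-off is memory: each round trip buffers `caching` rows on the client, so very large values can cause client-side pressure or scanner timeouts.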
Thanks for pointing out setCacheBlocks();
its HBase default value will give better performance for the following
filters, as well as for Kevin's multiple-facet search.
-Alok
On Fri, Apr 20, 2012 at 7:02 AM, Kevin M wrote:
> Thanks for pointing me towards setCacheBlocks() and explaining the
> d
don't see a filter mechanism that provides this type of functionality on
> the ResultScanner object. I went through the mailing list but I was unable
> to find a post that resembled this idea.
>
> Thanks.
>
--
Alok Kumar
I could not find any other way/test to quickly check it.. :)
On Sat, Jan 21, 2012 at 7:48 PM, Alok Kumar wrote:
> Hi Lars,
>
> Thanks for reply..
> I wanted to know whether column values are also cached in the Result object
> (i.e., fewer calls to the HBase table for values)
king?
>
>
> -- Lars
>
>
>
> ____
> From: Alok Kumar
> To: user@hbase.apache.org
> Sent: Friday, January 20, 2012 12:48 AM
> Subject: Is HBase.Client.Result.getValue(...) and Result.getColumn(...)
> fetch actual value
Hi All,
I'd like to know whether
HBase.Client.Result.getValue(...) and
Result.getColumn(...) fetch the actual value from the TABLE every time,
or whether it is already available in the Result/ResultScanner.
--
Alok
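As far as the HBase client API goes, the answer to the question above is that a Result is a client-side container: getValue()/getColumn() read from the KeyValues the scan already shipped to the client and do not go back to the table. A toy model in plain Java (not the real org.apache.hadoop.hbase.client.Result):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of a client-side Result: one "fetch" fills it up front
// (as the scan RPC does), and later lookups are purely in-memory.
// Not the real org.apache.hadoop.hbase.client.Result.
public class ResultSketch {
    private final Map<String, byte[]> cells = new HashMap<>();
    private int serverCalls = 0;

    // One round trip populates the whole result.
    public ResultSketch(Map<String, byte[]> fetched) {
        serverCalls++;
        cells.putAll(fetched);
    }

    // No server round trip here - just a map lookup.
    public byte[] getValue(String family, String qualifier) {
        return cells.get(family + ":" + qualifier);
    }

    public int serverCalls() {
        return serverCalls;
    }

    public static void main(String[] args) {
        ResultSketch result = new ResultSketch(
                Map.of("mylogs:pcol", "v1".getBytes()));
        result.getValue("mylogs", "pcol");
        result.getValue("mylogs", "pcol");
        System.out.println(result.serverCalls()); // 1
    }
}
```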
ng up.»
> I don't know what this ipc.HbaseRPC is supposed to do, but I guess it is
> trying to reach the region server. Accessing the master web UI I can see the
> Region Server address at: virtual-notebook-tosh:60030
> Anyone can shed some light about what is happening?
> Thank you.
--
Alok Kumar