HConnection in TIMED_WAITING

2018-09-28 Thread Lalit Jadhav
While load testing the application, I collected a thread dump at peak
utilization and found that the hconnection threads were in TIMED_WAITING
(the log below occurred continuously):

hconnection-0x52cf832d-shared--pool1-t110 - priority:5 - threadId:0x5651030a9800 - nativeId:0x11b - state:TIMED_WAITING
stackTrace:
java.lang.Thread.State: TIMED_WAITING (parking)
    at sun.misc.Unsafe.park(Native Method)
    - parking to wait for <0x0003f8200ca0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Locked ownable synchronizers:
    - None

Can anyone explain what's going wrong here?
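For context, the bottom frames (ThreadPoolExecutor.getTask -> LinkedBlockingQueue.poll) show an idle pool worker parked with a keep-alive timeout, waiting for its next task. A minimal, HBase-free sketch that reproduces the same state (pool sizing and thread names are illustrative):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class IdleWorkerState {
    public static void main(String[] args) throws InterruptedException {
        // A pool whose idle workers time out, like the HBase client's shared
        // pool: an idle worker calls poll(keepAlive) and parks in TIMED_WAITING.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        pool.allowCoreThreadTimeOut(true); // use timed poll() instead of take()
        pool.prestartAllCoreThreads();     // start a worker with no task queued

        Thread.sleep(200); // give the worker time to park in getTask()

        for (Thread t : Thread.getAllStackTraces().keySet()) {
            if (t.getName().startsWith("pool-")) {
                // Prints: pool-1-thread-1 state=TIMED_WAITING
                System.out.println(t.getName() + " state=" + t.getState());
            }
        }
        pool.shutdown();
    }
}
```

So this state by itself is not an error; the thread is simply waiting for work.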

Regards,
*Lalit Jadhav,*
*Database Group Lead.*
*Everything happens to everybody sooner or later if there is time enough*


Re: Unable to read from Kerberised HBase

2018-07-12 Thread Lalit Jadhav
Yes, Reid, every machine has a specific keytab and corresponding principal.


On Wed, Jul 11, 2018 at 3:29 PM, Reid Chan  wrote:

> Does every machine where the hbase client runs have your specific keytab
> and corresponding principal?
>
> From the snippet, I can tell that you're using a service principal to log
> in (with the name/hostname@REALM format), and each principal should be
> different due to their different hostnames.
>
>
>
> R.C
>
>
>
> ________
> From: Lalit Jadhav 
> Sent: 11 July 2018 17:45:22
> To: user@hbase.apache.org
> Subject: Re: Unable to read from Kerberised HBase
>
> Yes.

Re: Unable to read from Kerberised HBase

2018-07-11 Thread Lalit Jadhav
Yes.

On Wed, Jul 11, 2018 at 2:58 PM, Reid Chan  wrote:

> Does your hbase client run on multiple machines?
>
> R.C

Re: Unable to read from Kerberised HBase

2018-07-11 Thread Lalit Jadhav
Tried with the given snippet.

It works when a table is placed on a single RegionServer, but when the table
is distributed across the cluster I am not able to scan it. Let me know if
I am going wrong somewhere.

On Tue, Jul 10, 2018 at 2:13 PM, Reid Chan  wrote:

> Try this way:
>
>
> Connection connection = ugi.doAs(new PrivilegedExceptionAction<Connection>() {
>   @Override
>   public Connection run() throws IOException {
>     return ConnectionFactory.createConnection(configuration);
>   }
> });
>
>
>
> R.C

Re: Unable to read from Kerberised HBase

2018-07-10 Thread Lalit Jadhav
Code snippet:

Configuration configuration = HBaseConfiguration.create();
configuration.set("hbase.zookeeper.quorum",  "QUARAM");
configuration.set("hbase.master", "MASTER");
configuration.set("hbase.zookeeper.property.clientPort", "2181");
configuration.set("hadoop.security.authentication", "kerberos");
configuration.set("hbase.security.authentication", "kerberos");
configuration.set("zookeeper.znode.parent", "/hbase-secure");
configuration.set("hbase.cluster.distributed", "true");
configuration.set("hbase.rpc.protection", "authentication");
configuration.set("hbase.regionserver.kerberos.principal",
"hbase/Principal@realm");
configuration.set("hbase.regionserver.keytab.file",
"/home/developers/Desktop/hbase.service.keytab3");
configuration.set("hbase.master.kerberos.principal",
"hbase/HbasePrincipal@realm");
configuration.set("hbase.master.keytab.file",
"/etc/security/keytabs/hbase.service.keytab");

System.setProperty("java.security.krb5.conf","/etc/krb5.conf");

String principal = System.getProperty("kerberosPrincipal",
"hbase/HbasePrincipal@realm");
String keytabLocation = System.getProperty("kerberosKeytab",
"/etc/security/keytabs/hbase.service.keytab");
UserGroupInformation.setConfiguration(configuration);
UserGroupInformation.loginUserFromKeytab(principal, keytabLocation);
UserGroupInformation userGroupInformation = UserGroupInformation.
loginUserFromKeytabAndReturnUGI("hbase/HbasePrincipal@realm",
"/etc/security/keytabs/hbase.service.keytab");
UserGroupInformation.setLoginUser(userGroupInformation);

   Connection connection =
ConnectionFactory.createConnection(configuration);


Any more logs about login failure or success or related? - No, I only got the
above logs.


On Tue, Jul 10, 2018 at 1:58 PM, Reid Chan  wrote:

> Any more logs about login failure or success or related?
>
> And can you show the code snippet of connection creation?

Re: Unable to read from Kerberised HBase

2018-07-10 Thread Lalit Jadhav
Table only contains 100 rows. Still not able to scan.

On Tue, Jul 10, 2018, 12:21 PM anil gupta  wrote:

> As per the error message, your scan ran for more than 1 minute but the
> timeout is set to 1 minute; hence the error. Try doing smaller scans or
> increasing the timeout. (PS: HBase is mostly good for short scans, not for
> full table scans.)


Unable to read from Kerberised HBase

2018-07-09 Thread Lalit Jadhav
While connecting to a remote HBase cluster I can create a table and get the
table listing, but I am unable to scan a table using the Java API. Below is
the code:

configuration.set("hbase.zookeeper.quorum", "QUARAM");
configuration.set("hbase.master", "MASTER");
configuration.set("hbase.zookeeper.property.clientPort", "2181");
configuration.set("hadoop.security.authentication", "kerberos");
configuration.set("hbase.security.authentication", "kerberos");
configuration.set("zookeeper.znode.parent", "/hbase-secure");
configuration.set("hbase.cluster.distributed", "true");
configuration.set("hbase.rpc.protection", "authentication");
configuration.set("hbase.regionserver.kerberos.principal",
"hbase/Principal@realm");
configuration.set("hbase.regionserver.keytab.file",
"/home/developers/Desktop/hbase.service.keytab3");
configuration.set("hbase.master.kerberos.principal",
"hbase/HbasePrincipal@realm");
configuration.set("hbase.master.keytab.file",
"/etc/security/keytabs/hbase.service.keytab");

System.setProperty("java.security.krb5.conf","/etc/krb5.conf");

String principal = System.getProperty("kerberosPrincipal",
"hbase/HbasePrincipal@realm");
String keytabLocation = System.getProperty("kerberosKeytab",
"/etc/security/keytabs/hbase.service.keytab");
UserGroupInformation.setConfiguration(configuration);
UserGroupInformation.loginUserFromKeytab(principal, keytabLocation);
UserGroupInformation userGroupInformation =
UserGroupInformation.loginUserFromKeytabAndReturnUGI("hbase/HbasePrincipal@realm",
"/etc/security/keytabs/hbase.service.keytab");
UserGroupInformation.setLoginUser(userGroupInformation);

I am getting the below errors,

org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after
attempts=36, exceptions: Mon Jul 09 18:45:57 IST 2018, null,
java.net.SocketTimeoutException: callTimeout=60000, callDuration=64965: row
'' on table 'DEMO_TABLE' at
region=DEMO_TABLE,,1529819280641.40f0e7dc4159937619da237915be8b11.,
hostname=dn1-devup.mstorm.com,60020,1531051433899, seqNum=526190

Exception : java.io.IOException: Failed to get result within timeout,
timeout=60000ms
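For reference, the client-side timeouts involved in this error can be raised in the client's hbase-site.xml; the 120000 ms values below are illustrative, not recommendations:

```xml
<!-- Client-side hbase-site.xml (illustrative values) -->
<property>
  <name>hbase.client.scanner.timeout.period</name>
  <value>120000</value> <!-- per-scanner-RPC timeout, in ms -->
</property>
<property>
  <name>hbase.rpc.timeout</name>
  <value>120000</value> <!-- general client RPC timeout, in ms -->
</property>
```

Smaller scan batches (scanner caching) also keep each RPC under the timeout.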


-- 
Regards,
Lalit Jadhav
Network Component Private Limited.


Re: Reading data from hbase table for 2 different user ids

2018-05-10 Thread Lalit Jadhav
Deepak,

You can refer to
http://hbase.apache.org/0.94/book/hbase.accesscontrol.configuration.html

On Thu, May 10, 2018 at 12:44 PM, Deepak Khandelwal <
dkhandelwal@gmail.com> wrote:

> Hi, can someone help with the below issue or point me to something that
> could help?
>
> I have a table in hbase from which I am trying to read data based on row
> key. This data will be read from an API. I need to read data for 2
> different users, user1 and user2. User1, who has permission on the table,
> should be able to read the data, whereas the user2 call should fail if he
> doesn't have permission to read data from that table.
>
> Can someone tell me how I can programmatically read data from an hbase
> table for 2 different user ids?
>
> Any help would be much appreciated.
>
> Thanks
> Deepak
>



-- 
Regards,
Lalit Jadhav
Network Component Private Limited.


Re: Avoiding duplicate writes

2018-01-11 Thread Lalit Jadhav
Hello Peter,

You can add a random number to the row key to avoid row-key overwrites.
Even when timestamps fall within the same millisecond, the random number
provides uniqueness.
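A minimal sketch of that scheme (the key format, the "sensor-42" key, and the 8-character salt length are all illustrative; note that a random component also trades away scan locality and repeatable key lookup):

```java
import java.nio.charset.StandardCharsets;
import java.util.UUID;

public class SaltedRowKey {
    // Build a row key of the form <logicalKey>:<millis>:<randomSalt> so that
    // two writes landing in the same millisecond still get distinct rows.
    static byte[] uniqueRowKey(String logicalKey) {
        long ts = System.currentTimeMillis();
        String salt = UUID.randomUUID().toString().substring(0, 8);
        return (logicalKey + ":" + ts + ":" + salt).getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Even back-to-back calls in the same millisecond differ in the salt.
        byte[] k1 = uniqueRowKey("sensor-42");
        byte[] k2 = uniqueRowKey("sensor-42");
        System.out.println(new String(k1, StandardCharsets.UTF_8));
        System.out.println(java.util.Arrays.equals(k1, k2)); // false
    }
}
```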

On Thu, Jan 11, 2018 at 3:46 PM, Peter Marron <petermar...@discover.com>
wrote:

> Hi,
>
> We have a problem when we are writing lots of records to HBase.
> We are not specifying timestamps explicitly and so the situation arises
> where multiple records are being written in the same millisecond.
> Unfortunately, when records are written with the same timestamp, later
> writes are treated as updates of the previous records rather than as the
> separate records we want them to be.
> So we want to be able to guarantee that records are not treated as
> overwrites (unless we explicitly make them so).
>
> As I understand it there are (at least) two different ways to proceed.
>
> The first approach is to increase the resolution of the timestamp.
> So we could use something like java.lang.System.nanoTime()
> However although this seems to ameliorate the problem it seems to
> introduce other problems.
> Also ideally we would like something that guarantees that we don't lose
> writes rather than making them more unlikely.
>
> The second approach is to write a prePut co-processor.
> In the prePut I can do a read using the same rowkey, column family and
> column qualifier and omit the timestamp.
> As I understand it this will return me the latest timestamp.
> Then I can update the timestamp that I am going to write, if necessary, to
> make sure that the timestamp is always unique.
> In this way I can guarantee that none of my writes are accidentally turned
> into updates.
>
> However this approach seems to be expensive.
> I have to do a read before each write, and although (I believe) it will be
> on the same region server, it's still going to slow things down a lot.
> Also I am assuming that the prePut co-processor is executed inside a
> record lock so that I don't have to worry about synchronization.
> Is this true?
>
> Is there a better way?
>
> Maybe there is some implementation of this already that I can pick up?
>
> Maybe there is some way that I can implement this more efficiently?
>
> It seems to me that this might be better handled at compaction.
> Shouldn't there be some way that I can mark writes with some sort of
> special value of timestamp that means that this write should never be
> considered as an update but always as a separate write?
>
> Any advice gratefully received.
>
> Peter Marron
>



-- 
Regards,
Lalit Jadhav
Network Component Private Limited.


Re: Regionservers consuming too much ram in HDP 2.6.

2017-12-06 Thread Lalit Jadhav
Hello,

Yes, it is an interesting case.
Ram, sorry, I cannot share the whole details, but let me know which params
you need.


On Thu, Dec 7, 2017 at 11:58 AM, ramkrishna vasudevan <
ramkrishna.s.vasude...@gmail.com> wrote:

> I don't have any idea from this yet. It seems to be an interesting case,
> since none of the users have reported such an issue. I need to try this
> out to ascertain what is really happening.
>
> Can you paste your hbase-env.sh details and the hbase-site.xml, just to
> be sure there is nothing overriding the configs?
>
> Regards
> Ram



-- 
Regards,
Lalit Jadhav
Network Component Private Limited.


Re: Regionservers consuming too much ram in HDP 2.6.

2017-12-06 Thread Lalit Jadhav
Are you sure the 16G taken up in the RAM is due to the region server?
: Yes

Are you having any other cache configuration like bucket cache?
: No

Are you allocating any Direct_memory for the region servers?
: No

So when does this rise to 16G happen - is it after the regions are created
or even before them, i.e. just when you start the region server?
: Even before regions are created.

Which version of hbase is it?
: 1.1.2
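For context on the 16G-resident vs 1G-heap gap: heap is only one component of a JVM process's resident memory, and off-heap (direct) allocations are capped separately. An illustrative hbase-env.sh fragment (the values are placeholders, not recommendations):

```shell
# hbase-env.sh -- illustrative caps, not recommendations
export HBASE_HEAPSIZE=4G          # JVM heap ceiling for HBase daemons
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:MaxDirectMemorySize=2G"
```

Comparing these caps against the RegionServer process RSS narrows down whether the growth is heap, direct buffers, or other native memory.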

On Dec 6, 2017 4:02 PM, "ramkrishna vasudevan" <
ramkrishna.s.vasude...@gmail.com> wrote:

> Few more questions:
> Are you sure the 16G taken up in the RAM is due to the region server? Are
> you having any other cache configuration, like bucket cache?
> Are you allocating any direct memory for the region servers?
>
> So when does this rise to 16G happen - is it after the regions are
> created, or even before them, i.e. just when you start the region server?
> Which version of hbase is it?
>
> Regards
> Ram
>
> > > On Wed, Dec 6, 2017 at 2:31 PM, Lalit Jadhav <
> lalit.jad...@nciportal.com
> > >
> > > wrote:
> > >
> > > > Adding More info,
> > > >
> > > > Regions per region server : 13-15 Regions per RS
> > > >
> > > > memory region server is taking : 16GB
> > > >
> > > > Memstore size : 64 MB
> > > >
> > > > configured heap for Master : 1 GB
> > > >
> > > > configured heap for RegionServer : 1 GB
> > > >
> > > > On Tue, Dec 5, 2017 at 5:40 PM, Lalit Jadhav <
> > lalit.jad...@nciportal.com
> > > >
> > > > wrote:
> > > >
> > > > > Hello All,
> > > > >
> > > > > When we do any operations on the database(HBase), RegionServers are
> > > > taking
> > > > > too much ram and not releases until we restart them. Is there any
> > > > parameter
> > > > > or property to release ram or to restrict RegionServers to take
> this
> > > much
> > > > > of memory? Help will be appreciated.
> > > > >
> > > > > --
> > > > > Regards,
> > > > > Lalit Jadhav
> > > > > Network Component Private Limited.
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Regards,
> > > > Lalit Jadhav
> > > > Network Component Private Limited.
> > > >
> > >
> >
> >
> >
> > --
> > Regards,
> > Lalit Jadhav
> > Network Component Private Limited.
> >
>
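One way to answer Ram's question — whether the 16 GB really comes from the RegionServer's JVM heap or from off-heap allocations such as a DirectByteBuffer-backed bucket cache — is to inspect the JVM's memory pools via the standard management beans. A minimal sketch (plain JDK only; run it inside or attach it to the JVM you want to inspect — pool names vary by JVM vendor):

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class MemCheck {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        // Heap usage is bounded by -Xmx (1 GB in this thread's configuration)
        System.out.println("Heap used:     " + mem.getHeapMemoryUsage().getUsed());
        System.out.println("Heap max:      " + mem.getHeapMemoryUsage().getMax());
        // Non-heap (metaspace, code cache) is NOT bounded by -Xmx
        System.out.println("Non-heap used: " + mem.getNonHeapMemoryUsage().getUsed());
        // Direct buffers (e.g. an off-heap bucket cache) also live outside the heap
        for (BufferPoolMXBean pool :
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.println(pool.getName() + " used: " + pool.getMemoryUsed());
        }
    }
}
```

If heap plus non-heap plus direct buffers is far below the process RSS, the remainder is typically native allocations or the OS page cache attributed to the process, not something a JVM setting will cap.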


Re: Regionservers consuming too much ram in HDP 2.6.

2017-12-06 Thread Lalit Jadhav
Hi Ramkrishna,

Thanks for the reply.
Right now I am not performing any operation on HBase (it is idle); still, utilization is 16 GB. When I shut the RegionServers down, this memory is freed.

On Wed, Dec 6, 2017 at 2:41 PM, ramkrishna vasudevan <
ramkrishna.s.vasude...@gmail.com> wrote:

> Hi Lalith
>
> Seems you have configured very minimum heap space.
> So when you say RAM size is increasing  what are the operations that you
> are performing when the memory increases. I can see heap is only 1G but
> still your  memory is 16G.Are you sure that 16G is only due to Region
> servers? Are you having heavy writes or reads during that time.
>
> Regards
> Ram
>
> On Wed, Dec 6, 2017 at 2:31 PM, Lalit Jadhav <lalit.jad...@nciportal.com>
> wrote:
>
> > Adding More info,
> >
> > Regions per region server : 13-15 Regions per RS
> >
> > memory region server is taking : 16GB
> >
> > Memstore size : 64 MB
> >
> > configured heap for Master : 1 GB
> >
> > configured heap for RegionServer : 1 GB
> >
> > On Tue, Dec 5, 2017 at 5:40 PM, Lalit Jadhav <lalit.jad...@nciportal.com
> >
> > wrote:
> >
> > > Hello All,
> > >
> > > When we do any operations on the database(HBase), RegionServers are
> > taking
> > > too much ram and not releases until we restart them. Is there any
> > parameter
> > > or property to release ram or to restrict RegionServers to take this
> much
> > > of memory? Help will be appreciated.
> > >
> > > --
> > > Regards,
> > > Lalit Jadhav
> > > Network Component Private Limited.
> > >
> >
> >
> >
> > --
> > Regards,
> > Lalit Jadhav
> > Network Component Private Limited.
> >
>



-- 
Regards,
Lalit Jadhav
Network Component Private Limited.


Re: Regionservers consuming too much ram in HDP 2.6.

2017-12-06 Thread Lalit Jadhav
Adding more info:

Regions per RegionServer: 13-15

Memory the RegionServers are taking: 16 GB

Memstore size: 64 MB

Configured heap for Master: 1 GB

Configured heap for RegionServer: 1 GB

On Tue, Dec 5, 2017 at 5:40 PM, Lalit Jadhav <lalit.jad...@nciportal.com>
wrote:

> Hello All,
>
> When we do any operations on the database(HBase), RegionServers are taking
> too much ram and not releases until we restart them. Is there any parameter
> or property to release ram or to restrict RegionServers to take this much
> of memory? Help will be appreciated.
>
> --
> Regards,
> Lalit Jadhav
> Network Component Private Limited.
>



-- 
Regards,
Lalit Jadhav
Network Component Private Limited.


Regionservers consuming too much ram in HDP 2.6.

2017-12-05 Thread Lalit Jadhav
Hello All,

When we do any operations on the database (HBase), the RegionServers take too much RAM and do not release it until we restart them. Is there any parameter or property to release the RAM, or to restrict how much memory the RegionServers take? Help will be appreciated.

-- 
Regards,
Lalit Jadhav
Network Component Private Limited.


Configuring HBASE in HDP with version HBASE-1.2.6

2017-09-19 Thread Lalit Jadhav
Hi,

I am using *HDP-2.4* with *HBase-1.1.2*. My questions are:

1. Can I configure *HBase-1.2.6* in HDP, and can I still use the Ambari UI for monitoring?
2. If not, I need a monitoring UI. Please suggest one.

-- 
Regards,
Lalit Jadhav
Network Component Private Limited.


Re: Need help with Row Key design

2017-08-30 Thread Lalit Jadhav
Hi,

If your data is user specific, you can use the userId in the rowkey. If you have time-specific data, you can also use a timestamp; remember that a timestamp makes the rowkey unique.
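As a concrete sketch of that idea (a hypothetical helper — the field names and separator are assumptions, not from this thread): a composite key of userId plus a zero-padded *reversed* timestamp keeps a user's rows contiguous while making the newest rows for that user sort first:

```java
import java.nio.charset.StandardCharsets;

public class RowKeyExample {
    // Builds "userId|reversedTimestamp". Long.MAX_VALUE - timestamp is a common
    // trick so that newer rows sort before older ones; zero-padding to a fixed
    // width keeps lexicographic order consistent with numeric order.
    static String rowKey(String userId, long timestampMillis) {
        long reversed = Long.MAX_VALUE - timestampMillis;
        return String.format("%s|%019d", userId, reversed);
    }

    public static void main(String[] args) {
        // HBase rowkeys are byte arrays; UTF-8 encode the composite string.
        byte[] key = rowKey("user42", 1504000000000L)
                .getBytes(StandardCharsets.UTF_8);
        System.out.println(new String(key, StandardCharsets.UTF_8));
    }
}
```

Note the trade-off: leading with userId clusters each user's data on one region, so a single very hot user can still hot-spot a RegionServer; salting or hashing the prefix is the usual counter-measure.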

On Aug 30, 2017 9:22 PM, "deepaksharma25"  wrote:

> Hello,
> I am new to HBase DB and currently evaluating it for one of the requirement
> we have from Customer.
> We are going to write TBs of data in HBase daily and we need to fetch
> specifc data based on filter.
>
> I came to know that it is very important to design the row key in such a
> manner, so that it effectively uses it to fetch the data from the specific
> node instead of scanning thru all the records in the database, based on the
> type of row key we design.
>
> The problem with our requirement is that, we don't have any specific field
> which can be used to define the rowkey. We have around 7-8 fields available
> on the frontend, which can be used to filter the records from HBase.
>
> Can you please suggest, what should be the design of my row key, which will
> help in faster retrieval of the data from TBs of data?
> Attaching here the sample screen I am referring in this
> 
> .
>
> Thanks,
> Deepak Sharma
>
>
>
> --
> Sent from: http://apache-hbase.679495.n3.nabble.com/HBase-User-
> f4020416.html
>


Re: Difference in hbase shell and java API

2017-08-29 Thread Lalit Jadhav
Ted,

I am using HBase version 1.1.2. Yes, it reproduces with small sample data.

Sean,

I am just using Bytes.toString(byte[]) to convert the byte array to a String, which is later converted to a JSONObject.
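One thing worth checking in that conversion: a Cell's value can be a slice of a larger shared buffer, so decoding the whole backing array (rather than honoring the value's offset and length, as CellUtil.cloneValue does) can pull in stray bytes that corrupt the JSON string. A minimal plain-JDK sketch of the difference, with made-up data:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class ValueDecode {
    public static void main(String[] args) {
        // Simulate a backing array larger than the value itself, as can
        // happen when a Cell's value is a slice of a shared buffer.
        byte[] backing = "{\"a\":1}XXXX".getBytes(StandardCharsets.UTF_8);
        int offset = 0, length = 7; // only the first 7 bytes are the JSON value

        // Wrong: decodes the trailing garbage too
        String wrong = new String(backing, StandardCharsets.UTF_8);
        // Right: copy exactly [offset, offset + length) before decoding
        byte[] value = Arrays.copyOfRange(backing, offset, offset + length);
        String right = new String(value, StandardCharsets.UTF_8);

        System.out.println(wrong); // {"a":1}XXXX
        System.out.println(right); // {"a":1}
    }
}
```

If the stringified value only looks wrong sometimes (and looks fine after a restart, as reported earlier in this thread), this kind of buffer-slicing issue is one plausible thing to rule out.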

On Wed, Aug 30, 2017 at 5:12 AM, Sean Busbey <bus...@apache.org> wrote:

> what are you using to decode / inspect the JSONObject in each case? or
> are you just looking at the bytes for the string representation?
>
> On Tue, Aug 29, 2017 at 3:34 AM, Lalit Jadhav
> <lalit.jad...@nciportal.com> wrote:
> > Thank you for responding,
> >
> > No that what I meant, I am storing a JSONObject in Value. The row I scan
> in
> > Shell shows different JSONOject than one I got in Scan of Java API.
> >
> >
> > On Tue, Aug 29, 2017 at 9:38 AM, Ted Yu <yuzhih...@gmail.com> wrote:
> >
> >> Typo: If the value-filter on scan was not applied in shell command
> >>
> >> On Mon, Aug 28, 2017 at 9:01 PM, Ted Yu <yuzhih...@gmail.com> wrote:
> >>
> >> > If the value-filter on scan was applied in shell command, that
> explains
> >> > the difference.
> >> >
> >> > Cheers
> >> >
> >> > On Mon, Aug 28, 2017 at 8:33 PM, Lalit Jadhav <
> >> lalit.jad...@nciportal.com>
> >> > wrote:
> >> >
> >> >> When I query on *hbase shell *and try to scan similar records using
> >> *Java
> >> >> API* I get different results.
> >> >>
> >> >> But again when I restart HBase then both the results matches. Can any
> >> body
> >> >> explain why do I get different result for same rowkey.
> >> >>
> >> >> *Note* : I have applied some Value-filter on scan in Java API. If I
> >> remove
> >> >> this filter then also both the results matches.
> >> >>
> >> >> --
> >> >> Regards,
> >> >> Lalit Jadhav
> >> >> Network Component Private Limited.
> >> >>
> >> >
> >> >
> >>
> >
> >
> >
> > --
> > Regards,
> > Lalit Jadhav
> > Network Component Private Limited.
>



-- 
Regards,
Lalit Jadhav
Network Component Private Limited.


Re: Difference in hbase shell and java API

2017-08-29 Thread Lalit Jadhav
Thank you for responding.

No, that's not what I meant. I am storing a JSONObject in the value. The row I scan in the shell shows a different JSONObject than the one I get from a Scan in the Java API.


On Tue, Aug 29, 2017 at 9:38 AM, Ted Yu <yuzhih...@gmail.com> wrote:

> Typo: If the value-filter on scan was not applied in shell command
>
> On Mon, Aug 28, 2017 at 9:01 PM, Ted Yu <yuzhih...@gmail.com> wrote:
>
> > If the value-filter on scan was applied in shell command, that explains
> > the difference.
> >
> > Cheers
> >
> > On Mon, Aug 28, 2017 at 8:33 PM, Lalit Jadhav <
> lalit.jad...@nciportal.com>
> > wrote:
> >
> >> When I query on *hbase shell *and try to scan similar records using
> *Java
> >> API* I get different results.
> >>
> >> But again when I restart HBase then both the results matches. Can any
> body
> >> explain why do I get different result for same rowkey.
> >>
> >> *Note* : I have applied some Value-filter on scan in Java API. If I
> remove
> >> this filter then also both the results matches.
> >>
> >> --
> >> Regards,
> >> Lalit Jadhav
> >> Network Component Private Limited.
> >>
> >
> >
>



-- 
Regards,
Lalit Jadhav
Network Component Private Limited.


Difference in hbase shell and java API

2017-08-28 Thread Lalit Jadhav
When I query in the *hbase shell* and then try to scan the same records using the *Java API*, I get different results.

But when I restart HBase, both results match. Can anybody explain why I get different results for the same rowkey?

*Note*: I have applied a value filter on the scan in the Java API. If I remove this filter, both results also match.
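To make that *Note* concrete: a ValueFilter is evaluated per cell on the server side, so a filtered Java API scan can legitimately return fewer cells than an unfiltered shell `scan` of the same rows. A toy model in plain Java — not the HBase API, just the filtering semantics, with made-up row and value data:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class ValueFilterToy {
    // Mimics ValueFilter(EQUAL, SubstringComparator(substr)):
    // keep only cells whose value contains the substring.
    static Map<String, String> scanWithValueFilter(Map<String, String> cells,
                                                   String substr) {
        return cells.entrySet().stream()
                .filter(e -> e.getValue().contains(substr))
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue,
                        (a, b) -> a, LinkedHashMap::new));
    }

    public static void main(String[] args) {
        Map<String, String> cells = new LinkedHashMap<>();
        cells.put("row1/cf:q", "{\"status\":\"open\"}");
        cells.put("row2/cf:q", "{\"status\":\"closed\"}");

        // Unfiltered (what a plain shell scan shows): both cells.
        System.out.println(cells.size()); // 2
        // Filtered (Java API scan with a value filter): only matching cells.
        System.out.println(scanWithValueFilter(cells, "open").size()); // 1
    }
}
```

So the first thing to verify is whether the shell scan and the Java scan really carry the same filter; the report that results match once the filter is removed points the same way. Why a restart changes the filtered result is a separate question, and cached or stale data would be worth ruling out there.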

-- 
Regards,
Lalit Jadhav
Network Component Private Limited.