Sometimes we change the time zone for our product, and then HBase doesn't work well.
can we add a switch that uses UTC time?
Hi all,
I am getting the below error when trying to create a new HTable object (see
end of email)...
I believe the error is occurring in the function below, because the
getClassLoader() function is returning null. Because of the way my application
is set up, I have to add all hbase, hadoop, and zookee
Hi All,
The following is a session in hbase shell.
You can see that VERSIONS = 1, yet when querying by timestamp I can still
retrieve versions 2, 3, and 4, even after I triggered a major compaction.
Does "major_compact" from the shell not trigger a maj
> It's part of the mindset shift you have to go through coming from a database
>world to a NoSQL world
This is useful. If you have more insights like this, Ian, and care to share them,
I think we would be really interested to hear them.
Best regards,
- Andy
Problems worthy of attack prove
and in the gc.log of the region server we get CMS failures that cause full
gc (that fails to free memory):
11867.254: [Full GC 11867.254: [CMS: 3712638K->3712638K(3712640K), 4.7779250
secs] 4032614K->4032392K(4057664K), [CMS Perm : 20062K->19883K(33548K)]
icms_dc=100 , 4.7780440 secs] [Times: user
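The "CMS: 3712638K->3712638K(3712640K)" line above means the old generation was full before the concurrent collection could finish (a concurrent mode failure), which forces a stop-the-world full GC that, as noted, frees almost nothing. A common mitigation is to start CMS collections earlier so the heap never fills completely. The flags below are standard HotSpot options, but the 70% threshold is only an illustrative starting point, not a tested recommendation for this cluster:

```
# hbase-env.sh -- illustrative CMS tuning, adjust to heap size and workload
export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC \
  -XX:CMSInitiatingOccupancyFraction=70 \
  -XX:+UseCMSInitiatingOccupancyOnly"
```

If full GCs persist even with an earlier trigger, the heap is simply too small for the live set (memstores plus block cache), and tuning thresholds won't help.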
Hi,
cluster details:
hbase 0.90.2. 10 machines. 1GB switch.
use-case
M/R job that inserts about 10 million rows to hbase in the reducer, followed
by M/R that works with hdfs files.
When the maps of the first job finish and the maps of the second job start, the
region server crashes.
Please note that when running
"I am slightly confused now. Time to live is used in networking: after n
hops, drop this packet. It is also used in memcached: expire this data n seconds
after insert. I do not know of any specific TTL features in an RDBMS, so I do not
understand why someone would expect a TTL to be permanently durable."
E
On Sunday, August 14, 2011, Ian Varley wrote:
>> "I don't think anyone is well served by that kind of shallow analysis."
>
>
> You're right, Andy; sorry if it came off sounding flip. My point was
simply that the idea of a persistent data store with a configuration setting
that makes the most curre
I wouldn't do it... Some of the other committers can comment more on
this, but there is state cached in HTable instances when scanning.
E.g.,...
protected class ClientScanner implements ResultScanner {
private final Log CLIENT_LOG = LogFactory.getLog(this.getClass());
// HEADSUP: The
> "I don't think anyone is well served by that kind of shallow analysis."
You're right, Andy; sorry if it came off sounding flip. My point was simply
that the idea of a persistent data store with a configuration setting that
makes the most current version of your data disappear without an expli
From the javadoc of HTable:
"This class is not thread safe for updates; the underlying write buffer
can be corrupted if multiple threads contend over a single HTable instance."
Does that mean HTable is thread safe if we only use it to get rows?
Thanks,
Yi
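Since HTable shares mutable state (write buffer, cached scanner state) across all callers, the usual answer is thread confinement: give each thread its own instance rather than reasoning about which operations happen to be safe. The sketch below illustrates the pattern with a hypothetical FakeTable standing in for HTable (so it runs without a cluster); the ThreadLocal guarantees each worker thread lazily builds its own unshared client.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadConfinedTables {
    static final AtomicInteger created = new AtomicInteger();

    // Hypothetical stand-in for org.apache.hadoop.hbase.client.HTable:
    // holds per-instance mutable state, so it must not be shared.
    static class FakeTable {
        FakeTable() { created.incrementAndGet(); }
        String get(String row) { return "value-for-" + row; }
    }

    // Thread confinement: each thread lazily creates its own instance,
    // so no write buffer or scanner state is ever shared between threads.
    static final ThreadLocal<FakeTable> TABLES =
            ThreadLocal.withInitial(FakeTable::new);

    public static void main(String[] args) throws Exception {
        int threads = 4;
        CyclicBarrier allRunning = new CyclicBarrier(threads);
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < threads; i++) {
            pool.submit(() -> {
                allRunning.await();            // force 4 distinct threads
                return TABLES.get().get("row1"); // each thread builds its own
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("instances created: " + created.get()); // prints 4
    }
}
```

The same idea applies to a real HTable: construct one per thread (or pool them), never hand one instance to an ExecutorService shared by many tasks.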
Check out the setting user limits section at
https://ccp.cloudera.com/display/CDHDOC/HBase+Installation#HBaseInstallation-SettingUserLimitsforHBase
.
On Aug 14, 2011 11:56 AM, "Mayuresh" wrote:
> I was able to solve the issue by setting the appropriate system limits in
> the system files. I don't r
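For anyone hitting the same wall: the limits in question are the per-user open-file and process caps. A region server with many regions holds many store files and sockets open, and the common default of 1024 file descriptors leads to "Too many open files" failures. The /etc/security/limits.conf fragment below shows the standard syntax; the values are illustrative, and the exact numbers should follow the Cloudera page linked above:

```
# /etc/security/limits.conf -- illustrative values for the hbase user
hbase  -  nofile  32768
hbase  -  nproc   32000
```

Note this only takes effect for new login sessions (via pam_limits), so the region server process must be restarted from a fresh session.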