Can you give the error you encountered?
>
> If you look at the example from:
>
> http://hbase.apache.org/book.html#_put_2
>
> there is no delimiter: family and qualifier are two parameters to add()
> method.
>
> On Tue, Sep 19, 2017 at 6:10 AM, Sachin Jain <sachinjain...@gma
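For reference, the no-delimiter point above can be sketched with the 1.x client API (a sketch assuming hbase-client on the classpath and an open Connection; in 1.x the method is addColumn(), the successor of add(); table/row/value names are placeholders):

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class PutSketch {
    // family and qualifier are separate byte[] arguments, so no ":" delimiter
    // ever appears in the API call; the colon is only a convention of the
    // string form "family:qualifier" used by the shell.
    static void writeCell(Connection conn) throws Exception {
        try (Table table = conn.getTable(TableName.valueOf("my_table"))) {
            Put put = new Put(Bytes.toBytes("row1"));
            put.addColumn(Bytes.toBytes("cf"),           // column family
                          Bytes.toBytes("my_qualifier"), // qualifier
                          Bytes.toBytes("value"));
            table.put(put);
        }
    }
}
```

Since the API keeps the two coordinates apart, no underscore-vs-colon delimiter setting is needed at the Put level; a delimiter only comes into play in string forms such as shell commands.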
Hi,
I am using HBase in a system which does not allow using a colon between
the column name and column family. Is there any configuration where we can
tell HBase to use an underscore (_) as the delimiter instead of a colon (:)
between name and family?
Thanks
-Sachin
> >
> > flushedCellsCount
> > flushedCellsSize
> >
> > FlushMemstoreSize_num_ops
> >
> > For Q2, there is no client side support for knowing where the data comes
> > from.
> >
> > On Wed, Jun 28, 2017 at 8:15 PM, Sachin Jain <sachinjain...@g
Hi,
I have used TableInputFormat and newAPIHadoopRDD defined on sparkContext to
do a full table scan and get an RDD from it.
Partial piece of code looks like this:
sparkContext.newAPIHadoopRDD(
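The call above is clipped in the archive; for reference, the standard TableInputFormat wiring looks roughly like this (a Java API sketch assuming spark-core and hbase-client/hbase-server on the classpath; "my_table" and jsc are placeholders):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class FullScanSketch {
    static JavaPairRDD<ImmutableBytesWritable, Result> fullScan(JavaSparkContext jsc) {
        Configuration hbaseConf = HBaseConfiguration.create();
        hbaseConf.set(TableInputFormat.INPUT_TABLE, "my_table"); // table to scan
        return jsc.newAPIHadoopRDD(hbaseConf,
                TableInputFormat.class,        // one Spark partition per region
                ImmutableBytesWritable.class,  // key: the row key
                Result.class);                 // value: the row's cells
    }
}
```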
it.
Jerry
On Mon, Jun 12, 2017 at 9:35 PM, Sachin Jain <sachinjain...@gmail.com>
wrote:
> Thanks Allan,
>
> This is what I understood initially that further calls will be serial if a
> request is already pending on some RS. I am running hbase 1.3.1
> Is "hbase.client
there is only one socket to each RS, and the calls written to this
> socket are synchronized (or queued using another thread called CallSender).
> But usually, this won't become a bottleneck. If this is a problem for you,
> you can tune "hbase.client.ipc.pool.size".
>
>
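The tuning mentioned above lives in the client-side hbase-site.xml; a sketch with illustrative values (RoundRobinPool is the pool type the size setting applies to):

```xml
<!-- client-side hbase-site.xml: keep a pool of sockets per RegionServer -->
<property>
  <name>hbase.client.ipc.pool.type</name>
  <value>RoundRobinPool</value>
</property>
<property>
  <name>hbase.client.ipc.pool.size</name>
  <value>5</value><!-- default is 1: a single shared socket per RS -->
</property>
```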
On 12-Jun-2017 7:31 PM, "Allan Yang" <allan...@apache.org> wrote:
Connection is thread safe. You can use it across different threads, and
requests made by different threads are handled in parallel regardless of
whether the keys are in the same region or not.
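The sharing pattern described above can be sketched as follows (assuming hbase-client 1.x; note that Table, unlike Connection, is not thread-safe, so each thread opens its own lightweight instance; names are placeholders):

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class SharedConnectionSketch {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        // Heavyweight: create once per process and share across threads.
        final Connection conn = ConnectionFactory.createConnection(conf);

        Runnable worker = () -> {
            // Lightweight: one Table per thread, closed when done.
            try (Table table = conn.getTable(TableName.valueOf("my_table"))) {
                table.get(new Get(Bytes.toBytes("some_row")));
            } catch (IOException e) {
                e.printStackTrace();
            }
        };
        new Thread(worker).start();
        new Thread(worker).start();
    }
}
```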
2017-06-12 20:44 GMT+08:00 Sachin
Just to add @Ted Yu's answer, you can confirm this by looking at your
HMaster UI and see the regions and their boundaries.
On Tue, Jun 6, 2017 at 3:50 PM, Ted Yu wrote:
> Looks like your table has only one region.
>
> > On Jun 6, 2017, at 3:14 AM, Rajeshkumar J
when data ingestion continues but flush is delayed, the memstore size might
> exceed the upper limit and thus throw RegionTooBusyException
>
> Hope this information helps.
>
> Best Regards,
> Yu
>
> On 6 June 2017 at 13:39, Sachin Jain <sachinjain...@gmail.com> wrote:
>
> > Hi,
>
ot preserved anymore in HBase
> > 2. if you miscalculate your keyspace size by a lot, you are stuck with
> the
> > hash function and range you selected even if you later get more regions
> > unless you're willing to do complete migration to a new table
> >
> > Hope
[0]: http://hbase.apache.org/book.html#schema.versions
On Tue, Nov 29, 2016 at 4:07 PM, Sachin Jain <sachinjain...@gmail.com>
wrote:
> Hi,
>
> I am curious to understand the impact of having large number of versions
> in HBase. Suppose I want to maintain previous 100 versi
Hi,
I am curious to understand the impact of having a large number of versions in
HBase. Suppose I want to maintain the previous 100 versions for a row/cell.
My thoughts are:
Having a large number of versions means a larger number of HFiles.
A larger number of HFiles can increase the lookup time of a rowKey.
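For context, the version limit is configured per column family; a sketch assuming the HBase 1.x admin API (table and family names are placeholders):

```java
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class VersionsSketch {
    static void createVersionedTable(Admin admin) throws Exception {
        HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("t1"));
        HColumnDescriptor cf = new HColumnDescriptor("cf");
        cf.setMaxVersions(100);  // keep up to 100 versions per cell
        desc.addFamily(cf);
        admin.createTable(desc);
    }
}
```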
> On Mon, Nov 28, 2016 at 12:42 AM, Sachin Jain <sachinjain...@gmail.com>
> wrote:
>
> > Hi,
> >
> > I was going through the article on pre-splitting a table [0] and it is
> > mentioned that it is generally best practice to presplit your table. But don't we
> >
Hi,
I was going through the article on pre-splitting a table [0] and it is
mentioned that it is generally best practice to presplit your table. But
don't we need to know the data in advance in order to presplit it?
Question: What should be the best practice when we don't know what data is
going to be
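When the keyspace is unknown up front, one common workaround is to hash-prefix row keys and pre-split on the hash range instead of on the raw data; a sketch with a hypothetical 16-region hex scheme (only hexSplits() is runnable as-is; the createTable() usage in the comment assumes hbase-client):

```java
public class PresplitSketch {
    // n-1 single-byte split points carve a hex-prefixed keyspace into n
    // regions (n up to 16 with one hex character of prefix).
    static byte[][] hexSplits(int numRegions) {
        String hex = "0123456789abcdef";
        byte[][] splits = new byte[numRegions - 1][];
        for (int i = 1; i < numRegions; i++) {
            splits[i - 1] = new byte[] { (byte) hex.charAt(i) };
        }
        return splits;
    }

    public static void main(String[] args) {
        byte[][] s = hexSplits(16);
        System.out.println(s.length + " split points");
        // Against a real cluster these feed straight into:
        //   admin.createTable(tableDescriptor, hexSplits(16));
    }
}
```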
about this in
> https://issues.apache.org/jira/browse/HBASE-16973 recently, you can get
> more details there.
>
> Small world, isn't it? (Smile)
>
> Best Regards,
> Yu
>
> On 1 November 2016 at 13:10, Sachin Jain <sachinjain...@gmail.com> wrote:
>
> > Hi,
>
Hi,
I am using HBase v1.1.2. I have a few questions regarding full table scans:
1. When we instantiate a Scanner and do not set any caching on it, what
value does it pick by default?
- By looking at the code, I have found the following:
From the documentation at the top of the Scan.java class
* To
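Whatever the default resolves to (in 1.x it comes from hbase.client.scanner.caching, bounded in practice by hbase.client.scanner.max.result.size), it can be overridden per scan; a sketch assuming the 1.x client API:

```java
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class ScanCachingSketch {
    static void scanAll(Table table) throws Exception {
        Scan scan = new Scan();
        scan.setCaching(500); // rows fetched per RPC; overrides the client default
        try (ResultScanner scanner = table.getScanner(scan)) {
            for (Result r : scanner) {
                // process each row's Result here
            }
        }
    }
}
```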
If you take my code then it should work. I have tested it on HBase 1.2.1.
On Aug 29, 2016 12:21 PM, "spats" wrote:
> Thanks Sachin.
>
> So it won't work with hbase 1.2.0 even if we use your code from shc branch?
>
Hi Sudhir,
There is a connection leak problem with the Hortonworks HBase connector if you
use HBase 1.2.0.
I tried to use Hortonworks' connector and ran into the same problem.
Have a look at the HBase issue HBASE-16017 [0]. The fix for this was
backported to 1.3.0, 1.4.0 and 2.0.0.
I have raised a
>
> FYI
>
> On Wed, Jul 20, 2016 at 11:28 PM, Sachin Jain <sachinjain...@gmail.com>
> wrote:
>
> > *Context*
> > I am using Spark (1.5.1) with HBase (1.1.2) to dump the output of Spark
> > Jobs into HBase which will be further available as lookups from
*Context*
I am using Spark (1.5.1) with HBase (1.1.2) to dump the output of Spark
Jobs into HBase which will be further available as lookups from HBase
Table. BaseRelation extends HadoopFSRelation and is used to read and write
to HBase. Spark Default Source API is used.
*Use Case*
Now, whenever I