Thanks for the reply.
On Wed, Sep 2, 2015 at 12:48 AM, Jean-Marc Spaggiari <
jean-m...@spaggiari.org> wrote:
> copyTable will start an MR job and will do the copy in parallel, which is
> good. But it's still going to do a lot of puts on the destination cluster,
> which will trigger flushes and compactions
OK, from the beginning:
1. RegionTooBusy is thrown when the memstore size exceeds region flush size ×
flush multiplier. THIS is a sign of a great imbalance on the write path:
either some regions are much hotter than others, or compaction cannot keep up
with the load, you hit the blocking store count, and flushes get delayed
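For reference, a minimal sketch of the settings involved, assuming the standard hbase-site.xml property names (defaults vary by version and deployment): for example, with a 128 MB flush size and a block multiplier of 2, a region starts rejecting writes once its memstore holds 256 MB.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RegionTooBusyKnobs {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // A memstore is flushed once it reaches this size (second arg is an assumed default).
        long flushSize = conf.getLong("hbase.hregion.memstore.flush.size", 128L * 1024 * 1024);
        // Writes are rejected (RegionTooBusyException) at flushSize * multiplier.
        int multiplier = conf.getInt("hbase.hregion.memstore.block.multiplier", 2);
        // Flushes themselves stall once a store accumulates this many files,
        // which is how a lagging compaction backs up the whole write path.
        int blockingFiles = conf.getInt("hbase.hstore.blockingStoreFiles", 10);
        System.out.printf("Writes block at %d bytes per region; flushes stall at %d store files%n",
            flushSize * multiplier, blockingFiles);
    }
}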
Ralph,
A couple of questions:
Do you have Phoenix stats enabled?
Can you send us a stack trace of the RegionTooBusy exception? Looking at the
HBase code, it is thrown in a few places. It would be good to check where the
resource crunch is occurring.
On Tue, Sep 1, 2015 at 2:26 PM, Perko, Ralph J wrote:
OK, I hit this one:
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1.2/bk_releasenotes_hdp_2.1/content/ch_relnotes-hdpch_relnotes-hdp-2.1.1-knownissues-phoenix.html
Problem is solved.
2015-09-01 23:51 GMT+02:00 Serega Sheypak:
> Hm... If I pass quorum as:
> node01:2181,node04:2181,node05
>
Hm... If I pass quorum as:
node01:2181,node04:2181,node05
The exception starts to look like something valid, but there is still no success.
2015-09-02 00:50:25,472 ERROR [catalina-exec-73] r.d.p.j.TableCreator
[TableCreator.java:43] Error while creating table
java.sql.SQLException: ERROR 102 (08001): Malforme
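For what it's worth, the Phoenix JDBC URL takes a plain comma-separated host list followed by a single port, not a port per host; a small sketch of the difference (node names taken from the thread):

// Phoenix parses "jdbc:phoenix:<host1,host2,host3>:<port>".
String goodUrl = "jdbc:phoenix:node01,node04,node05:2181";
// A port per host makes the URL malformed and yields ERROR 102 (08001):
String badUrl = "jdbc:phoenix:node01:2181,node04:2181,node05";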
Hi, I wrote a ninja application (ninjaframework.org) with Phoenix. I used my
custom testing utility to test my app. When I deployed my app to the server, I
got this exception:
java.sql.SQLException: No suitable driver found for
jdbc:phoenix:node01,node04,node05:2181
at java.sql.DriverManager.getConnection(Dri
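"No suitable driver" usually means the Phoenix driver never registered itself with DriverManager in that deployment. A minimal sketch, assuming the phoenix-client jar is actually on the webapp's classpath:

import java.sql.Connection;
import java.sql.DriverManager;

public class PhoenixConnect {
    public static void main(String[] args) throws Exception {
        // Force-load the driver so it registers with DriverManager; some servlet
        // containers don't pick up JDBC drivers via the ServiceLoader mechanism.
        Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
        try (Connection conn =
                DriverManager.getConnection("jdbc:phoenix:node01,node04,node05:2181")) {
            System.out.println("connected: " + !conn.isClosed());
        }
    }
}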
Thanks, I'll try.
It's a template query; it works 100% through JDBC.
2015-09-01 23:26 GMT+02:00 Michael McAllister:
> I think you need a comma between your column definition and your
> constraint definition.
>
>
> On Sep 1, 2015, at 2:54 PM, Serega Sheypak
> wrote:
>
Hi, I wrote an integration test
Hi, I have run into an issue several times now and could really use some help
diagnosing the problem.
Environment:
Phoenix 4.4
HBase 0.98
34-node cluster
Tables are defined with 40 salt buckets
We are continuously loading large bz2 CSV files into Phoenix via Pig.
The data is in the hundreds of TB.
I think you need a comma between your column definition and your constraint
definition.
On Sep 1, 2015, at 2:54 PM, Serega Sheypak
<serega.shey...@gmail.com> wrote:
Hi, I wrote an integration test that uses the HBase testing utility and Phoenix.
The test creates a table and inserts data. It works fine.
Hi, I wrote an integration test that uses the HBase testing utility and Phoenix.
The test creates a table and inserts data. It works fine.
I'm trying to run
CREATE TABLE IF NOT EXISTS cross_id_attributes
(
crossId VARCHAR NOT NULL
CONSTRAINT cross_id_reference_pk PRIMARY KEY (crossId)
) SALT_BUCKE
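For reference, the corrected statement Michael is suggesting would look like the sketch below; note the comma after the column definition. The SALT_BUCKETS value is a placeholder, since the original message is cut off.

CREATE TABLE IF NOT EXISTS cross_id_attributes
(
    crossId VARCHAR NOT NULL,
    CONSTRAINT cross_id_reference_pk PRIMARY KEY (crossId)
) SALT_BUCKETS = 4; -- placeholder bucket count; the original is truncated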
copyTable will start an MR job and will do the copy in parallel, which is
good. But it's still going to do a lot of puts on the destination cluster,
which will trigger flushes and compactions. If it's easy for you to send
your CSV file there, I think it will be more efficient, even if copyTable
can so
In this case, is it better to use HBase's copyTable command, or to transfer
the CSV file to the other side and bulk load from there? Which one is better
in terms of performance?
On Wed, Sep 2, 2015 at 12:23 AM, Jean-Marc Spaggiari <
jean-m...@spaggiari.org> wrote:
> Hi Gaurav,
>
> bulk load bypasses the WAL, th
Hi Gaurav,
bulk load bypasses the WAL, that's correct. It's true for Phoenix, and it's
true for HBase (outside of Phoenix).
If you have replication activated, you will have to bulk load the data into
both clusters. Transfer your CSV files to the other side too and bulk load
from there.
JM
2015-09-01
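As a rough sketch of the "bulk load into both clusters" idea, using Phoenix's CSV bulk load tool once per cluster (the table name, input path, and quorum addresses below are placeholders):

import org.apache.hadoop.util.ToolRunner;
import org.apache.phoenix.mapreduce.CsvBulkLoadTool;

public class DualClusterBulkLoad {
    public static void main(String[] args) throws Exception {
        // Bulk-loaded HFiles bypass the WAL, so replication never ships them;
        // the same load has to run against each cluster's ZooKeeper quorum.
        for (String quorum : new String[] {"zk-cluster-a:2181", "zk-cluster-b:2181"}) {
            int exit = ToolRunner.run(new CsvBulkLoadTool(), new String[] {
                "--table", "MY_TABLE",         // placeholder table name
                "--input", "/data/myfile.csv", // placeholder HDFS path
                "--zookeeper", quorum
            });
            if (exit != 0) {
                throw new IllegalStateException("Bulk load failed for " + quorum);
            }
        }
    }
}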
Hello,
We are using the Phoenix MapReduce CSV uploader to load data into HBase. I
read the documentation on the Phoenix site: it will only create HFiles, and
no WAL logs will be created. Please confirm whether this understanding is
correct.
We have to use HBase replication across clusters for a master-master scenario.
Will
Hi Satish,
This was reported and fixed as part of
https://issues.apache.org/jira/browse/PHOENIX-2181. For a quick turnaround,
you can do this.
STORE c INTO 'hbase://checks/enterprise_id,business_date' USING
org.apache.phoenix.pig.PhoenixHBaseStorage('zkquorum', '-batchSize 5000');
Prim
Good Morning,
I am using Phoenix 4.2.2 with HBase 0.98 on Hortonworks HDP 2.2.
My Phoenix table looks like this
CREATE TABLE checks(hashed_key varchar(32), enterprise_id bigint not null,
business_date date not null, location_id bigint not null,
cost_center_code varchar, cost_center_name varchar,
Found it.
Never knew all those helpful metrics were there in the GUI!
Looks like we are right at the 2x threshold on our spilled records vs map
output records ratio.
I will play with this this week.
Thanks again!
-----Original Message-----
From: Gabriel Reid [mailto:gabriel.r...@gmail.com]
S
On Tue, Sep 1, 2015 at 11:29 AM, Riesland, Zack
wrote:
> You say I can find information about spills in the job counters. Are you
> talking about “failed” map tasks, or is there something else that will help
> me identify spill scenarios?
"Spilled records" is a counter that is available at the jo
Thanks Gabriel,
That is extremely helpful.
One clarification:
You say I can find information about spills in the job counters. Are you
talking about “failed” map tasks, or is there something else that will help me
identify spill scenarios?
From: Gabriel Reid [mailto:gabriel.r...@gmail.com]
Se
Hi,
I thought I should have explained my use case after I sent the email. This
is not for the case where your data is already in CSV format; rather, it is
for when your application has a choice between writing to HBase directly or
dumping the records to CSV and bulk loading the resulting CSV files. In my
case, my applic
Hello,
We are now using CDH 4.5.0, so updating HBase to 0.98+ may take a little
more time.
Thanks,
Chunhui
2015-09-01 14:54 GMT+08:00 James Taylor:
> Hello,
> Both 2.2.3 and 3.3.1 are no longer supported. Would it be possible for you
> to move on to our 4.x code line on top of HBase 0.98?
> Thank
On Tue, Sep 1, 2015 at 3:04 AM, Behdad Forghani wrote:
> In my experience the fastest way to load data is to write directly to HFiles.
> I have measured a performance gain of 10x. Also, if you have binary data or
> need to escape characters, the HBase bulk loader does not escape them. For
> my use