I think this issue still exists, and more help would be appreciated.
Unfortunately, this issue is still happening. I have updated all the
hadoop-env.sh files on the workers, and I'm quite certain the hbase-site.xml
has been loaded correctly,
but the timeout still says 6, which is not the number we configured.
Thanks for all your help
I think I have found the issue: all of our mappers have
HADOOP_CLASSPATH overwritten in hadoop-env.sh :(
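For what it's worth, a minimal sketch of the append-style hadoop-env.sh line that avoids this overwrite (the /etc/hbase/conf path is from this thread; /opt/other/lib is a made-up stand-in for whatever the distribution adds earlier):

```shell
# Hypothetical hadoop-env.sh fragment. Appending preserves whatever the
# caller already exported; a plain assignment would silently discard it.
HADOOP_CLASSPATH="/opt/other/lib"                      # stand-in for an earlier value
HADOOP_CLASSPATH="$HADOOP_CLASSPATH:/etc/hbase/conf"   # append, don't overwrite
echo "$HADOOP_CLASSPATH"
```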
On Wed, Feb 20, 2019 at 4:50 PM Xiaoxiao Wang wrote:
Pedro,
to answer your question, I have verified that the configuration file has
loaded correctly from the classpath.
I have 300+ mappers trying to make connections to the DB at the same time, and
it still gives me the same error, that it times out after 6 ms.
On Wed, Feb 20, 2019 at 4:45 PM Xiaoxiao Wang wrote:
Since I've confirmed that the configuration has been loaded correctly
through the classpath,
I have tested on the real application; however, it still timed out with the
same default value from the mappers:
Error: java.io.IOException:
org.apache.phoenix.exception.PhoenixIOException: Failed after att
I made this work on my toy application; getConf() is not an issue, and the
HBase conf can get the correct settings.
I'm trying it out again on the real application.
On Wed, Feb 20, 2019 at 4:13 PM William Shen wrote:
Whatever is in super.getConf() should get overridden by hbase-site.xml,
because addHbaseResources will layer hbase-site.xml on last. The
question is which one got picked up... (maybe there is another one on the
classpath, is that possible?)
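As a plain-shell analogy of that layering (stand-in variables only, not the HBase API): whichever source is applied last wins, which is why a correctly located hbase-site.xml should replace a value that came in via super.getConf():

```shell
# Analogy only: Hadoop Configuration resources behave like sequential
# assignments, where the resource loaded last overrides earlier ones.
rpc_timeout=60000    # stand-in for a value already in super.getConf()
rpc_timeout=120000   # stand-in for hbase-site.xml, layered on last
echo "$rpc_timeout"  # the last source wins
```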
On Wed, Feb 20, 2019 at 4:10 PM Xiaoxiao Wang wrote:
I'm trying it out on the MapReduce application; I made it work on my toy
application.
On Wed, Feb 20, 2019 at 4:09 PM William Shen wrote:
I don't think you need an HBaseConfiguration at all for passing the
settings to Phoenix; hbase-site.xml is loaded by default if it is on the
classpath.
Or, if needed, just do an HBaseConfiguration.create() so it loads the default
HBase file locations.
Probably, that super.getConf() is the problem.
In any case
A bit of a long shot, but do you happen to have another hbase-site.xml
bundled in your jar accidentally that might be overriding what is on the
classpath?
On Wed, Feb 20, 2019 at 3:58 PM Xiaoxiao Wang wrote:
A bit more information: I feel the classpath didn't get passed in correctly.
I did
conf = HBaseConfiguration.addHbaseResources(super.getConf());
and this conf also didn't pick up the expected properties.
On Wed, Feb 20, 2019 at 3:56 PM Xiaoxiao Wang wrote:
Pedro,
thanks for your info. Yes, I have tried both
HADOOP_CLASSPATH=/etc/hbase/conf/hbase-site.xml and
HADOOP_CLASSPATH=/etc/hbase/conf/ (without the file), and yes, I checked
hadoop-env.sh as well to make sure it did
HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/others
And also, for your second question, it is in
Your classpath variable should point to the folder containing your
hbase-site.xml, not directly to the file.
But certain distributions tend to override that envvar inside hadoop-env.sh
or hadoop.sh.
Out of curiosity, have you written a map-reduce application, and are you
querying Phoenix
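The folder-not-file point can be demonstrated with a self-contained sketch (pure shell, no Hadoop needed): resource lookup effectively joins "<classpath entry>/<resource name>", so an entry that is already a file path can never match:

```shell
# Create a throwaway conf directory holding an hbase-site.xml.
tmp=$(mktemp -d)
mkdir -p "$tmp/conf"
echo '<configuration/>' > "$tmp/conf/hbase-site.xml"

# Crude stand-in for a classloader resource search: entry + "/" + name.
lookup() {
  [ -f "$1/hbase-site.xml" ] && echo found || echo missing
}

lookup "$tmp/conf"                   # directory entry: found
lookup "$tmp/conf/hbase-site.xml"    # file entry: missing
rm -rf "$tmp"
```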
Hi Pedro,
thanks for your help. We know that we need to set the classpath for
the hadoop program, and what we tried was
HADOOP_CLASSPATH=/etc/hbase/conf/hbase-site.xml hadoop jar $test_jar, but it
didn't work.
So we are wondering if we did anything wrong?
On Wed, Feb 20, 2019 at 3:24 PM Pedro
Hi,
How many concurrent client connections are we talking about? You might be
opening more connections than the RS can handle (under these circumstances,
most of the client threads would end up exhausting their retry count). I
would bet that you've got a bottleneck in the RS keeping SYSTEM.CATALOG
Yes
I have tried this
HADOOP_CLASSPATH=/etc/hbase/conf/ hadoop jar $test_jar
On Wed, Feb 20, 2019 at 3:19 PM William Shen wrote:
Hi Xiaoxiao,
Have you tried including hbase-site.xml in your conf on classpath?
Will
On Wed, Feb 20, 2019 at 2:50 PM Xiaoxiao Wang wrote:
Hi, whoever may help:
We are running a Hadoop application that needs to use a Phoenix JDBC
connection from the workers.
The connection works, but when too many connections are established at the
same time, it throws RPC timeouts:
Error: java.io.IOException: org.apache.phoenix.exception.PhoenixIOException:
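For reference, the kind of hbase-site.xml override being attempted in this thread would look like the fragment below. The property names (hbase.rpc.timeout, phoenix.query.timeoutMs) are standard HBase/Phoenix client settings, but the values here are purely illustrative, not recommendations:

```xml
<configuration>
  <!-- HBase client RPC timeout, in milliseconds. -->
  <property>
    <name>hbase.rpc.timeout</name>
    <value>120000</value>
  </property>
  <!-- Overall Phoenix query timeout, in milliseconds. -->
  <property>
    <name>phoenix.query.timeoutMs</name>
    <value>120000</value>
  </property>
</configuration>
```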