After the update, the web interface on the Master shows that every region server is now on 1.4.7
and there are no RITs.
The cluster recovered only after we restarted all region servers 4 times...
> On 11 Sep 2018, at 04:08, Josh Elser wrote:
>
Did you update the HBase jars on all RegionServers?
Make sure that you have all of the Regions assigned (no RITs). There
could be a pretty simple explanation as to why the index can't be
written to.
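One quick way to confirm there are no regions in transition is from the command line on a cluster node. A minimal sketch, assuming the `hbase` CLI is on the PATH and pointed at the upgraded cluster:

```shell
# Sketch only: run on a node with HBase client configs in place.
# "status 'detailed'" prints master status, including any regions in transition.
echo "status 'detailed'" | hbase shell

# On HBase 1.x, hbck also reports unassigned regions and other inconsistencies.
hbase hbck
```

The Master web UI shows the same RIT information on its front page, which is often the fastest check after a rolling restart.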
On 9/9/18 3:46 PM, Batyrshin Alexander wrote:
Correct me if I'm wrong.
But it looks like if you
Lots of details missing here about how you're trying to submit these
Spark jobs, but let me try to explain how things work now:
Phoenix provides spark (for Spark 1) and spark2 jars. These JARs provide the
implementation for Spark *on top of* what the phoenix-client.jar provides. You
want to include both the phoen
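Concretely, that usually means putting both JARs on the driver and executor classpaths when submitting the job. A minimal sketch; the paths and jar names below are assumptions for illustration and should be matched to your actual install:

```shell
# Sketch only: PHOENIX_HOME and the jar names are placeholders, not a
# definitive layout; check the lib/ directory of your Phoenix distribution.
PHOENIX_HOME=/usr/lib/phoenix

spark-submit \
  --jars "$PHOENIX_HOME/phoenix-client.jar,$PHOENIX_HOME/phoenix-spark2.jar" \
  --conf spark.driver.extraClassPath="$PHOENIX_HOME/phoenix-client.jar" \
  --conf spark.executor.extraClassPath="$PHOENIX_HOME/phoenix-client.jar" \
  your-job.jar
```

Passing the client jar via `extraClassPath` as well as `--jars` is a common workaround when the Phoenix JDBC driver must be visible to the executor's system classloader.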
Hello folks,
We have a requirement for salting based on partial, rather than full,
rowkeys. My colleague Mike Polcari has identified the requirement and
proposed an approach.
I found an already-open JIRA ticket for the same issue:
https://issues.apache.org/jira/browse/PHOENIX-4757. I can provide
In the end, the root cause could not be determined from the log information. The index
might have been corrupted, and it seems the server-abort action still
continues due to the index handler failure policy.
Yun Zhang
Best regards!
Batyrshin Alexander <0x62...@gm
Hello!
I wonder if there is any way to get Phoenix 4.13 or 4.14 working with
Spark 2.1.0.
In production we used Spark SQL DataFrames to load data from and write data to
HBase with Apache Phoenix (Spark 1.6 and Phoenix 4.7), and it worked well.
After the upgrade, we faced issues with loading and wr