Hi all,
I am trying to read data from kafka_2.11-0.10.2.0, and I get this error:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/flume/Context
        at org.apache.phoenix.kafka.consumer.PhoenixConsumer.prepareContext(PhoenixConsumer.java:140)
        at
At postOpen, the location of the Lucene directory to be used for the region
is set using the value of "h_region.getRegionInfo().getEncodedName();", so
whenever prePut is called the index of the column is stored in the
directory that was set during postOpen. So basically the Lucene operations
are
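For anyone following along, here is a rough sketch of what those two hooks could look like with the HBase 1.x RegionObserver API. The class name, the index path layout, and the Lucene wiring are illustrative guesses, not the actual code from this thread:

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.Durability;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
    import org.apache.hadoop.hbase.coprocessor.ObserverContext;
    import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
    import org.apache.hadoop.hbase.regionserver.wal.WALEdit;

    public class LuceneIndexObserver extends BaseRegionObserver {

        private volatile String indexDir;

        @Override
        public void postOpen(ObserverContext<RegionCoprocessorEnvironment> ctx) {
            // Remember a per-region index location keyed by the region's encoded name.
            String encodedName =
                ctx.getEnvironment().getRegion().getRegionInfo().getEncodedName();
            indexDir = "hdfs://localhost:9000/hbase/lucene/" + encodedName; // illustrative path
        }

        @Override
        public void prePut(ObserverContext<RegionCoprocessorEnvironment> ctx, Put put,
                           WALEdit edit, Durability durability) throws IOException {
            // Index the relevant column(s) of this Put into the directory chosen in postOpen
            // (Lucene IndexWriter plumbing omitted here).
        }
    }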
How do you handle HBase region splits and merges with such an architecture?
Thanks,
Sergey
On Wed, Apr 19, 2017 at 9:22 AM, Cheyenne Forbes <
cheyenne.osanu.for...@gmail.com> wrote:
> I created an HBase co-processor that stores/deletes text indexes with
> Lucene; the indexes are stored on HDFS (for
I'm guessing that you're using a version of HDP? If you're using those
versions from Apache, please update as they're dreadfully out of date.
What is the DDL of the table you're reading from? Do you have any
secondary indexes on this table (if so, on what columns)? What kind of
query are you
Reid wouldn't have seen the successful-login message he mentioned in his
original email if that were the case.
Try adding in "-Dsun.security.krb5.debug" to your PHOENIX_OPTS (I think
that is present in that version of Phoenix). It should give you a lot
more debug information, providing
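For reference, setting that flag would look along these lines, assuming your sqlline.py launch picks up PHOENIX_OPTS as described above (note the explicit =true; the bare property name is generally not enough for the JDK to enable the Kerberos debug output):

    export PHOENIX_OPTS="$PHOENIX_OPTS -Dsun.security.krb5.debug=true"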
I created an HBase co-processor that stores/deletes text indexes with
Lucene; the indexes are stored on HDFS (for backup, replication, etc.).
The indexes "mirror" the regions, so if a region is at
"hdfs://localhost:9000/hbase/region_name", its index is stored at
Can you describe the functionality you're after at a high level in terms of
a use case (rather than an implementation idea/detail) and we can discuss
any options wrt potential new features?
On Wed, Apr 19, 2017 at 8:53 AM Cheyenne Forbes <
cheyenne.osanu.for...@gmail.com> wrote:
> I'd still need
I'd still need "HRegion MyVar;", because I'd still need the name of the
region where the row of the id passed to the UDF is located, and the value
returned by "getFilesystem()" of "HRegion". What do you recommend
that I do?
Regards,
Cheyenne O. Forbes
On Tue, Apr 18, 2017 at 6:27 PM,
Hi all,
currently I am struggling with a performance issue in my REST API. The API
receives loads of requests from the frontend in parallel and makes SQL
queries through the Phoenix JDBC driver to fetch data from HBase. For each
request, the API makes only one query to Phoenix/HBase.
I found out that
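For context, the per-request query path being described would look roughly like the following; the ZooKeeper quorum, table, and column names here are placeholders, not details from the original message:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class PhoenixQueryExample {
        public static void main(String[] args) throws SQLException {
            // One Phoenix JDBC query per API request, as described above.
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
                 PreparedStatement ps = conn.prepareStatement(
                         "SELECT col1, col2 FROM my_table WHERE id = ?")) {
                ps.setLong(1, 42L);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("col1") + " " + rs.getString("col2"));
                    }
                }
            }
        }
    }

Phoenix connections are intended to be cheap to create, so opening one per request rather than pooling is the commonly recommended pattern.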
Hi Reid,
Then the most probable thing is that the provided keytab is not suitable
for the principal:
phoenix/hadoop-offline032.dx.momo.com@MOMO.OFFLINE
/opt/hadoop/etc/hadoop/security/phoenix.keytab
Can you do a
kinit -kt /opt/hadoop/etc/hadoop/security/phoenix.keytab phoenix/hadoop-
Hi rafa,
I followed the guide on the site:
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_command-line-installation/content/configuring-phoenix-to-run-in-a-secure-cluster.html
and linked those configuration files under the Phoenix bin directory.
But the problem remains.
Best regards,
---R
Hi Reid,
Take a look at:
https://lists.apache.org/thread.html/5c0bf3199f8864421e8b7b2f5b2aa4c509691cb2fa82b1894f97a5a1@%3Cuser.phoenix.apache.org%3E
(second mail)
I reproduced a very similar problem when I did not have a correct
core-site.xml on the client machine.
Do you have a copy of the
Version information: Phoenix phoenix-4.10.0-HBase-1.2, HBase hbase-1.2.4
Hi group,
I used the following command to execute sqlline.py:
bin/sqlline.py
hadoop-offline034.dx.momo.com,hadoop-offline035.dx.momo.com,hadoop-offline036.dx.momo.com:2181:/hbase:phoenix/hadoop-offline032.dx.momo.com@MOMO.OFFLINE:/opt/hadoop/etc/hadoop/security/phoenix.keytab
from the debug,
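For readers unfamiliar with that syntax, the sqlline.py argument above follows the documented secure-connection form:

    bin/sqlline.py <zookeeper quorum>:<port>:<znode parent>:<kerberos principal>:<keytab file>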