Seeing the actual error you get would also be rather helpful.
James Taylor wrote:
Hi Cheyenne,
Are you referring to joins through the query server?
Thanks,
James
On Thu, Sep 15, 2016 at 1:37 PM, Cheyenne Forbes
<cheyenne.osanu.for...@gmail.com> wrote:
I was using phoenix 4.4 then I switched to 4.
Hi Xindian,
A couple of initial things that come to mind...
* Make sure that you're using HDP "bits" (jars) everywhere to remove any
possibility that there's an issue between what Hortonworks ships and
what's in Apache.
* Make sure that your Java application/Spark job has the correct
hbase-si
o tried to run command line applications directly from the worker
nodes and it works, But inside the Spark Executor it doesn't...
2016-09-15 13:07 GMT-04:00 Josh Elser <josh.el...@gmail.com>:
How do you expect JDBC on Spark Kerberos authentication to work? Are you
using the principal+keytab options in the Phoenix JDBC URL or is Spark
itself obtaining a ticket for you (via some "magic")?
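For reference, Phoenix's JDBC URL can carry the principal and keytab as extra colon-separated fields after the ZooKeeper quorum, port, and root node. A minimal sketch of assembling such a URL in Python; the quorum, realm, and keytab path below are placeholders, not values from this thread:

```python
def phoenix_kerberos_url(quorum, port, root_node, principal, keytab):
    """Build a Phoenix JDBC URL with embedded Kerberos credentials.

    Format: jdbc:phoenix:<quorum>:<port>:<root>:<principal>:<keytab>
    """
    return ":".join(["jdbc:phoenix", quorum, str(port), root_node, principal, keytab])

url = phoenix_kerberos_url("zk1,zk2,zk3", 2181, "/hbase-secure",
                           "user@EXAMPLE.COM", "/etc/security/keytabs/user.keytab")
print(url)
```

With a URL in this form the driver performs the keytab login itself, instead of relying on a ticket already present in the process.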
Jean-Marc Spaggiari wrote:
Hi,
I tried to build a small app all under Kerberos.
JDBC to Phoeni
phoenix-4.8.0-HBase-1.1-client.jar is the jar which should be used. The
phoenix-4.8.0-HBase-1.1-hive.jar is to be used with the Hive integration.
dalin.qin wrote:
[root@namenode phoenix]# findjar . org.apache.phoenix.jdbc.PhoenixDriver
Starting search for JAR files from directory .
Looking for
Hi,
The trailing semi-colon on the URL seems odd, but I do not think it
would cause issues in parsing when inspecting the logic in
PhoenixEmbeddedDriver#acceptsURL(String).
Does the Class.forName(..) call succeed? You have Phoenix properly on
the classpath for your mappers?
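As a rough illustration of why a trailing semicolon is harmless: a JDBC driver's acceptsURL typically only checks the URL's scheme prefix. A simplified Python sketch of that logic (this is an assumption about the general pattern, not Phoenix's actual implementation):

```python
PHOENIX_PREFIX = "jdbc:phoenix"

def accepts_url(url):
    # A driver usually only inspects the scheme prefix at this stage;
    # trailing characters such as ';' are tolerated and parsed later.
    return url.startswith(PHOENIX_PREFIX)

print(accepts_url("jdbc:phoenix:zknode:2181:/hbase;"))  # True
print(accepts_url("jdbc:mysql://host/db"))              # False
```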
Dong-iL, Kim wr
Puneeth -- One extra thing to add to Francis' great explanation; the
response message told you what you did wrong:
"missingStatement":true
This is telling you that the server does not have a statement with the
ID 12345 as you provided.
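A client can detect this flag and recover by recreating the statement. A hedged Python sketch of inspecting an Avatica JSON response for it (the field name follows the message quoted above; the recovery step is an assumption about a typical client, not prescribed API behavior):

```python
def needs_new_statement(response):
    # Avatica error responses can carry "missingStatement": true when
    # the server no longer knows the statement id the client sent.
    return bool(response.get("missingStatement"))

resp = {"response": "error", "missingStatement": True}
if needs_new_statement(resp):
    print("recreate the statement via createStatement, then retry")
```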
F21 wrote:
Hey,
You mentioned that you sent a PrepareA
Yup, Francis got it right. There are POJOs in Avatica which Jackson
(un)marshals the JSON in-to/out-of and logic which constructs the POJOs
from Protobuf and vice versa.
In some hot-code paths, there are implementations in the server which
can use protobuf objects directly (to avoid extra dese
I was going to say that
https://issues.apache.org/jira/browse/PHOENIX-3223 might be related,
but it looks like the HADOOP_CONF_DIR is already put on the classpath.
Glad to see you got this working :)
On Thu, Sep 8, 2016 at 5:56 AM, F21 wrote:
> Glad you got it working! :)
>
> Cheers,
> Francis
>
(-cc other lists)
Hi Afshin,
The release notes you referenced are more meant to alert users about any
issues in the new release that you may run into over previous releases.
"Release notes provide details on issues and their fixes which may have
an impact on prior Phoenix behavior"
- Josh
es at runtime.
Hi Youngwoo,
The inclusion of hadoop-common is probably the source of most of the
bloat. We really only needed the UserGroupInformation code, but Hadoop
doesn't provide a proper artifact with just that dependency for us to
use downstream.
What dependency issues are you running into? There wa
Hi Ankit,
Assuming you provide some condition such as `WHERE ROWKEY_COLUMN like
"9898989898_@#$%"` in your query, I believe Phoenix will automatically
execute the query via a bounded range scan over that rowKey prefix.
You can verify this is happening by using the `EXPLAIN` command.
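To check the plan, you would run something like `EXPLAIN SELECT ... WHERE ROWKEY_COLUMN LIKE 'prefix%'` and look for a RANGE SCAN rather than a FULL SCAN in the output. A small Python helper that builds such a statement; the table and column names are placeholders, and the quote escaping is a basic sketch:

```python
def explain_prefix_scan(table, column, prefix):
    """Build an EXPLAIN statement for a rowkey-prefix LIKE query.

    A trailing '%' turns the LIKE into a prefix match, which Phoenix
    can serve with a bounded range scan over the rowkey.
    """
    escaped = prefix.replace("'", "''")  # basic SQL single-quote escaping
    return f"EXPLAIN SELECT * FROM {table} WHERE {column} LIKE '{escaped}%'"

print(explain_prefix_scan("MY_TABLE", "ROWKEY_COLUMN", "9898989898"))
```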
ankit b
Did you read James' response in PHOENIX-2271? [1]
Restating for you: as a work-around, you could try to use the recent
transaction support which was added via Apache Tephra to prevent
multiple clients from modifying a cell. This would be much less
efficient than the "native" checkAndPut API ca
You can check the dev list for the VOTE thread which contains a link to
the release candidate but it is not an official Apache Phoenix release yet.
Vasanth Bhat wrote:
Thanks a lot Ankit.
where do I download this from? I am looking at
http://mirror.fibergrid.in/apache/phoenix/ don't seem
It sounds like whatever query you were running was just causing the
error to happen again locally. Like you said, if you launched a new
instance of sqlline.py, you would have a new JVM and thus a new
ThreadPool (and backing queue).
vishnu rao wrote:
hi
i was using the "sqlline.py" client ..
Looking into this on the HDP side. Please feel free to reach out via HDP
channels instead of Apache channels.
Thanks for letting us know as well.
Josh Mahonin wrote:
Hi Robert,
I recommend following up with HDP on this issue.
The underlying problem is that the 'phoenix-spark-4.4.0.2.4.0.0-16
Can you share the error that your RegionServers report in the log before
they crash? It's hard to give an explanation without knowing the error
you're facing.
Thanks.
kevin wrote:
hi,all
I have a test with HBase running on top of Alluxio. In my HBase there is
a table created by Phoenix an ha
Hi,
I was just made aware of a neat little .NET driver for Avatica
(specifically, the authors were focused on Phoenix's use of Avatica in
the Phoenix Query Server).
https://www.nuget.org/packages/Microsoft.Phoenix.Client/1.0.0-preview
I'll have to try it out at some point, but would love to
Hi Tongzhou,
Maybe you can try `ALTER INDEX index ON table DISABLE`. And then the
same command with USABLE after you update the index. Are you attempting
to do this incrementally? Like, a bulk load of data then a bulk load of
index data, repeat?
Regarding the TTL, I assume so, but I'm not ce
Negative, sorry :\
I'm not really sure how this all is supposed to work in Groovy. I'm a
bit out of my element.
Brian Jeltema wrote:
Any luck with this?
On Jun 9, 2016, at 10:07 PM, Josh Elser <josh.el...@gmail.com> wrote:
FWIW, I've also reproduced this with Groovy 2.4.3, Oracle Java 1.7.0_79
and Apache Phoenix 4.8.0-SNAPSHOT locally.
Will dig some more.
Brian Jeltema wrote:
Groovy 2.4.3
JDK 1.8
On Jun 8, 2016, at 11:26 AM, Josh Elser <josh.el...@gmail.com> wrote:
Thanks for the info, B
Koert,
Apache Phoenix goes through a lot of work to provide multiple versions
of Phoenix for various versions of Apache HBase (0.98, 1.1, and 1.2
presently). The builds for each of these branches are tested against
those specific versions of HBase, so I doubt that there are issues
between Apa
/hdp/current/phoenix-client/phoenix-client.jar
and run the following groovy script, assuming zookeeper is running on
zknode:
import groovy.sql.Sql
Sql.newInstance("jdbc:phoenix:zknode:/hbase-unsecure",
'foo',
'bar',
"org.apache.phoenix.jdbc.PhoenixDriver")
Looks like you're knocking up against Hadoop (in o.a.h.c.Configuration).
Have you checked search results without Phoenix specifically?
I haven't run into anything like this before, but I'm also not a big
Groovy aficionado. If you can share your environment (or some sample
project that can exhi
Hi Mariana,
You could try defining an array of whatever type you need.
See https://phoenix.apache.org/array_type.html for more details.
- Josh
Mariana Medeiros wrote:
Hello :)
I have a Java class Example with a String and an ArrayList fields.
I am using Apache phoenix to insert and read data
Hi Naveen,
The Protocol Buffer dependency on 2.5 is very unlikely to change in
Phoenix as that is directly inherited from HBase (as you can imagine,
these need to be kept in sync).
There are efforts, in both HBase and Phoenix, underway to provide
shaded-jars for each project which would allo
cute",
"connectionId": 8,
"statementId": 20,
"sql": "SELECT * FROM us_population",
"maxRowCount": -1
}
And this is the commit command response (if it can give you more
insights)
{
"response": "resultSet",
"connec
Nope, you shouldn't need to do this.
"statements" that you create using the CreateStatementRequest are very
similarly treated to the JDBC Statement interface (they essentially
refer to an instance of a PhoenixStatement inside PQS, actually).
You should be able to create one statement and just
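In other words, one statement id can be reused across requests, just like a JDBC Statement object. A sketch of the request sequence as Python dicts mirroring the Avatica JSON shapes quoted elsewhere in this thread (the connection and statement ids are arbitrary placeholders):

```python
import json

connection_id = "6"
statement_id = 20  # returned by the server in the createStatement response

create_once = {"request": "createStatement", "connectionId": connection_id}

def prepare_and_execute(sql):
    # Reuse the same statementId for each query, rather than creating
    # a fresh statement per execution.
    return {
        "request": "prepareAndExecute",
        "connectionId": connection_id,
        "statementId": statement_id,
        "sql": sql,
        "maxRowCount": -1,
    }

first = prepare_and_execute("SELECT * FROM us_population")
second = prepare_and_execute("SELECT COUNT(*) FROM us_population")
print(json.dumps(first, indent=2))
```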
Also, you're using the wrong command.
You want "prepareAndExecute" not "prepareAndExecuteBatch".
Josh Elser wrote:
Thanks, will fix this.
Plamen Paskov wrote:
Ah i found the error. It should be "sqlCommands": instead of
"sqlCommands",
The docume
ulation(STATE,CITY,POPULATION) VALUES('C2','City 2',100)" ]
}
And this is the response i receive:
Error 500
HTTP ERROR: 500
Problem accessing /. Reason:
com.fasterxml.jackson.core.JsonParseException: Unexpected
character (',' (code 44)): was expecti
What version of Phoenix are you using?
Plamen Paskov wrote:
Hey folks,
I'm trying to UPSERT some data via the json api but no luck for now. My
requests looks like:
{
"request": "openConnection",
"connectionId": "6"
}
{
"request": "createStatement",
"connectionId": "6"
}
{
"request": "prepareA
For reference materials: definitely check out
https://calcite.apache.org/avatica/
While JSON is easy to get started with, there are zero guarantees on
compatibility between versions. If you use protobuf, we should be able
to hide all schema drift from you as a client (e.g. applications you
wr
(-cc dev@phoenix)
Deepak,
As the name suggests, that release is targeted for HBase-0.98.x release
lines. Any compatibility of an older release of HBase than 0.98 is
likely circumstantial.
I can't speak on behalf of the HBase community, but I feel relatively
confident in suggesting that it w
Hi Jared,
This is just a bad error message on PQS' part. Sorry about that. IIRC,
it was something obtuse like not finding the server-endpoint for the
JSON message you sent.
If you want to do a POST and use the body, you can just put the bytes
for your JSON blob in there and that should be su
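A sketch of building such a POST with the JSON blob in the body, using Python's standard library; the PQS host and port are placeholders, and the request is only constructed here, not sent:

```python
import json
import urllib.request

payload = {"request": "openConnection", "connectionId": "6"}
body = json.dumps(payload).encode("utf-8")

req = urllib.request.Request(
    "http://pqs-host:8765/",  # placeholder PQS endpoint
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would actually send it; omitted here.
print(req.get_method(), len(req.data))
```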
If you invoked a commit on PQS, it should have flushed any cached values
to HBase. The general messages you described in your initial post look
correct at a glance.
If you have an end-to-end example of this that I can play with, I can
help explain what's happening inside of PQS. If you want to
Hi Pierre,
1.1.2.2.4 is not a version of Apache HBase. Might you be needing to
contact a vendor for specific information?
Either way, the phoenix shaded client and server (targeted for HBase
server) are not attached to the Maven build which means that they are
not deployed via Maven as a par
Let's think back to before transactions were added to Phoenix.
With autoCommit=false, updates to HBase will be batched in the Phoenix
driver, eventually flushing on their own or whenever you invoked
commit() on the connection.
With autoCommit=true, updates to HBase are flushed with every exe
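The visibility rule described above can be illustrated generically. The sketch below uses Python's sqlite3 as a stand-in for Phoenix (the batching and flush mechanics differ, but the principle is the same: with autoCommit off, other connections only see your writes after commit()):

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path)  # implicit transaction mode, like autoCommit=false
reader = sqlite3.connect(path)

writer.execute("CREATE TABLE t (k INTEGER)")
writer.commit()

writer.execute("INSERT INTO t VALUES (1)")  # buffered in the writer's transaction
before = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]  # not yet visible

writer.commit()  # flush: the row becomes visible to other connections
after = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(before, after)
```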
Hi Jared,
Sounds like https://issues.apache.org/jira/browse/CALCITE-780
That version of Phoenix (probably) is using Calcite-1.2.0-incubating.
You could ask the vendor to update to a newer version, or use Phoenix
4.7.0 (directly from Apache) which is using Calcite-1.6.0.
Jared Katz wrote:
Th
Also, setting -Dsun.security.krb5.debug=true when you launch your Java
application will give you lots of very helpful information about what is
happening "under the hood".
Sanooj Padmakumar wrote:
Thanks Josh and everyone else .. Shall try this suggestion
On 22 Mar 2016 09:36, &
Correct, James:
Phoenix-4.7.0 uses Calcite-1.6.0. This included lots of goodies, including
commit/rollback support. Phoenix-4.6.0 used Calcite-1.3.0. In general,
if you want to use the QueryServer, I'd strongly recommend trying to go
with Phoenix-4.7.0. You'll inherit *lots* of bugfixes/improvem
Keytab-based logins do not automatically spawn a renewal thread in
Hadoop's UserGroupInformation library, IIRC. HBase's RPC implementation
does try to automatically re-login, but if you are not actively making
RPCs, you may miss the window in which you are allowed to perform a renewal.
Commonl
Yeah, I don't think the inclusion of Python code should be viewed as a
barrier to inclusion (maybe just a hurdle). I've seen other projects
(Ambari, iirc) which have tons of Python code and lots of integration.
The serialization for PQS can be changed via a single configuration
property in hba
I only wired up commit/rollback in Calcite/Avatica in Calcite-1.6.0 [1],
so Phoenix-4.6 isn't going to have that in the binaries that you can
download (Phoenix-4.6 is using 1.3.0-incubating). This should be
included in the upcoming Phoenix-4.7.0.
Sadly, I'm not sure why autoCommit=true would
Hi Steve,
Sorry for the delayed response.
Putting the "payload" (json or protobuf) into the POST instead of the
header should be the 'recommended' way forward to avoid the limit as you
ran into [1]. I think Phoenix >=4.6 was using Calcite-1.4, but my memory
might be failing me.
Regarding th
Ns G wrote:
Hi All,
I have written a simple class to access phoenix.
I am able to establish connection. But when executing below line i get
the error.
conn = DriverManager.getConnection(dbUrl);
I am facing below exception when accessing phoenix through JDBC from
eclipse.
INFO - Call except
Created https://issues.apache.org/jira/browse/PHOENIX-2539
James Taylor wrote:
Another good contribution would be to add this question to our FAQ.
On Tue, Dec 15, 2015 at 2:20 PM, Samarth Jain <sama...@apache.org> wrote:
Kannan,
See my response here:
https://mail-archives.