A less resource-intensive approach would be to use APPROX_COUNT_DISTINCT -
https://phoenix.apache.org/language/functions.html#approx_count_distinct
You would still need the secondary index though, as James suggested, if you
want it to run fast.
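For example, a minimal JDBC sketch (the connection URL, table, and column
names here are made up):

import java.sql.*;

try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
     Statement stmt = conn.createStatement();
     // approximate distinct count is far cheaper than an exact COUNT(DISTINCT ...)
     ResultSet rs = stmt.executeQuery(
         "SELECT APPROX_COUNT_DISTINCT(user_id) FROM web_events")) {
    if (rs.next()) {
        System.out.println("approx distinct users: " + rs.getLong(1));
    }
}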
On Fri, Mar 16, 2018 at 10:26 AM Flavio Pompermaier
Hi Noam,
Can you pass on the DDL statements for the table and index and the query
you are executing, please?
Thanks!
On Sun, Oct 1, 2017 at 2:01 AM, Bulvik, Noam wrote:
> Hi
>
>
>
> I have create a table and used the row timestamp mapping functionality.
> The key of the table is + column. I al
Column mapping is enabled by default. See details on various configs and
table properties here - http://phoenix.apache.org/columnencoding.html
On Sun, Jun 11, 2017 at 11:55 PM, Udbhav Agarwal wrote:
> Hi,
>
> I am using apache Phoenix 4.10 on Hbase. I want to use column mapping
> feature. Do I
Cheyene, with Phoenix 4.10, the column mapping feature is enabled by default,
which means the column names declared in the Phoenix schema are going to be
different from the column qualifiers in HBase. If you would like to disable
column mapping, set the COLUMN_ENCODED_BYTES=NONE property in your DDL.
On M
Yes, Phoenix will take care of mapping the column name to the HBase column
qualifier. Before using the column mapping feature (which is on by default),
make sure that the limits on the number of columns, as highlighted on the
website, work for you.
On Tue, May 30, 2017 at 7:21 PM Ash N <742...@gmail.com
st case - upsert into TMP_SNACKS(k, c1, "page_title"
varchar) values(2,'a','c');
If you would like the original behavior, then you would need to turn off
column encoding for your table like I mentioned in the previous email. For
more details on this feature - go to
ht
Thanks for reporting the issue, Dave. This has to do with the new column
mapping feature that we rolled out in 4.10. To disable it for your table,
please create your table like this:
create table TMP_SNACKS(k bigint primary key, c1 varchar)
COLUMN_ENCODED_BYTES=0;
I will file a JIRA and get a fix
This is because you are using now() for created. If you used a different
date, then with TEST_ROW_TIMESTAMP1 the cell timestamp would be that date,
whereas with TEST_ROW_TIMESTAMP2 it would be the server-side time.
Also, which examples are broken on the page?
On Thu, Mar 9, 2017 at 11:28 AM, Baty
Thanks for reporting this, Jonathan. Would you mind filing a JIRA
preferably with the object tree that you are seeing in the leak. Also, what
version of hbase and phoenix are you using?
On Mon, Dec 5, 2016 at 9:53 AM Jonathan Leech wrote:
> Looks like PHOENIX-2357 introduced a memory leak, at le
Patrick,
Do you have multiple 4.8.1 clients connecting to the cluster at the same
time?
On Thu, Oct 6, 2016 at 8:11 AM, Patrick FICHE
wrote:
> Hi,
>
> I upgraded Phoenix server from 4.7.0 to 4.8.1 on HDP cluster.
>
> Now, when I try to connect to my server using sqlline.py from 4.8.1, I get
> t
Kumar,
Can you try with the 4.8 release?
On Mon, Sep 19, 2016 at 2:54 PM, Kumar Palaniappan <
kpalaniap...@marinsoftware.com> wrote:
>
> Any one had faced this issue?
>
> https://issues.apache.org/jira/browse/PHOENIX-3297
>
> And this one gives no rows
>
> SELECT * FROM TEST.RVC_TEST WHERE (CO
Ryan,
Can you tell us what the explain plan says for the select count(*) query.
- Samarth
On Tue, Aug 9, 2016 at 12:58 PM, Ryan Templeton
wrote:
> I am working on a project that will be consuming sensor data. The “fact”
> table is defined as:
>
> CREATE TABLE historian.data (
> assetid unsign
Best bet is to upgrade your Cloudera version to CDH 5.7. It supports
Phoenix 4.7. See -
http://community.cloudera.com/t5/Cloudera-Labs/ANNOUNCE-Third-installment-of-Cloudera-Labs-packaging-of-Apache/m-p/42351#U42351
On Tuesday, August 2, 2016, anupama agarwal wrote:
> Hi,
>
> We need to insert
+user@phoenix
Larry, which version of HBase and Phoenix are you using? Starting from 4.7,
Phoenix takes care of automatically renewing scanner leases, which should
prevent such timeouts. To take advantage of that feature, you would need an
HBase version as recent as 0.98.17 if you are usin
Please look at this tuning guide: https://phoenix.apache.org/tuning.html
You probably would want to adjust these client side properties to deal with
your workload: phoenix.query.threadPoolSize and phoenix.query.queueSize.
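For example, both can be passed programmatically when the first connection to
the cluster is opened (a sketch; the values are illustrative only, and some
properties are read just once per JVM rather than per connection):

import java.sql.*;
import java.util.Properties;

Properties props = new Properties();
props.setProperty("phoenix.query.threadPoolSize", "256");  // client thread pool for parallel scans
props.setProperty("phoenix.query.queueSize", "10000");     // queue feeding that pool
try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181", props)) {
    // run the heavy workload on this connection
}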
On Wed, Jun 22, 2016 at 9:34 AM, 金砖 wrote:
> 16 regionservers, 1500+ reg
IL"
> + " FROM user.SESSION_EXPIRATION "
> + " WHERE NEXT_CHECK <= CURRENT_TIME()"
> + " LIMIT " + batchSize
> + " ) AS TSE"
> + " LEFT OUTER JOIN user.SESSION TS1"
> + " ON TS1.
<= CURRENT_TIME()"
> + " LIMIT " + batchSize
> + " ) AS TSE"
> + " LEFT OUTER JOIN user.SESSION TS1"
> + " ON TS1.CLIENT_ID = TSE.CLIENT_ID"
> + " AND TS1.BRAND_ID = TSE.BRAND_ID"
>
> never seem to be cleaned up after each query. Is there any work-around?
>
>
> ------
> *From:* Samarth Jain
> *To:* "user@phoenix.apache.org"
> *Sent:* Friday, 15 April 2016, 17:00
> *Subject:* Re: Getting swamped with Phoenix *.tmp files o
Arun,
Older phoenix views, created pre-4.6, shouldn't have the ROW_TIMESTAMP
column. Was the upgrade done correctly i.e. the server jar upgraded before
the client jar? Is it possible to get the complete stack trace? Would be
great if you could come up with a test case here to understand better whe
server/jboss-datasource/using-try-with-resources-to-close-database-connections
>> ,
>> https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html).
>>
>>
>> --
>> *From:* Samarth Jain
>> *Sent:* Friday,
What version of phoenix are you using? Is the application properly closing
statements and result sets?
On Friday, April 15, 2016, wrote:
> I am running into an issue where a huge number temporary files are being
> created in my C:\Users\myuser\AppData\Local\Temp folder, they are around
> 20MB bi
Srinivas,
Are you trying to create a phoenix view over an existing HBase table?
On Wed, Apr 13, 2016 at 11:47 AM, Pindi, Srinivas <
srinivas.pi...@epsilon.com> wrote:
> *Problem* *Statement*:
>
> While we are trying to create a phoenix view and we are getting the
> following exception.
>
>
>
>
Saurabh, another option for you would be to upgrade your phoenix to our
just released 4.7 version. It is possible that you might be hitting a bug
that has been fixed now. Worth a try.
On Fri, Mar 11, 2016 at 4:07 PM, Sergey Soldatov
wrote:
> Hi Saurabh,
> It seems that your SYSTEM.CATALOG got co
Hi Simon,
phoenix.query.timeoutMs is a client-side Phoenix property. You can set it
in the client-side hbase-site.xml for a global setting, or set it
programmatically per JDBC statement via stmt.setQueryTimeout(int seconds).
There are a couple other hbase level timeouts that are in play:
hbase
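As an illustration, the per-statement variant looks like this (a sketch; the
URL and query are made up):

import java.sql.*;

try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
     Statement stmt = conn.createStatement()) {
    stmt.setQueryTimeout(60); // overrides phoenix.query.timeoutMs for this statement only, in seconds
    try (ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM my_table")) {
        if (rs.next()) {
            System.out.println(rs.getLong(1));
        }
    }
}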
Also, are you using the open source version or a vendor supplied distro?
On Fri, Mar 4, 2016 at 10:44 AM, Samarth Jain wrote:
> Rafit,
>
> Changing TTL the way you are doing it should work. Do you have any
> concurrent requests going on that are issuing some kind of ALTER TABLE
Rafit,
Changing TTL the way you are doing it should work. Do you have any
concurrent requests going on that are issuing some kind of ALTER TABLE
statements? Also, would you mind posting the DDL statement for your table?
- Samarth
On Fri, Mar 4, 2016 at 9:20 AM, Rafit Izhak-Ratzin
wrote:
> Hi a
This likely has to do with hbase scanners running into lease expiration.
Try overriding the value of hbase.client.scanner.timeout.period in the
server side hbase-site.xml to a large value.
We have a feature coming out in Phoenix 4.7 (soon to be released) that will
take care of automatically renewi
Kannan,
See my response here:
https://mail-archives.apache.org/mod_mbox/phoenix-user/201509.mbox/%3CCAMfSBK+WKzd5EscXLJcn9nVpDYd66dH=nL=devdc9n_skww...@mail.gmail.com%3E
There is a JIRA in place https://issues.apache.org/jira/browse/PHOENIX-2388
to help pooling of phoenix connections. Would be a
Hi Roc,
FWIW, looking at your schema, it doesn't look like you are using the
ROW_TIMESTAMP feature. The constraint part of your DDL needs to be changed
like this:
CONSTRAINT my_pk PRIMARY KEY (
server_timestamp ROW_TIMESTAMP,
app_id, client_ip,
cluster_id, host_id, api
)
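Put together, a complete DDL along those lines might look like this
(a sketch with simplified columns and assumed types):

import java.sql.*;

try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
     Statement stmt = conn.createStatement()) {
    // server_timestamp, declared ROW_TIMESTAMP, maps straight to the HBase cell timestamp
    stmt.execute(
        "CREATE TABLE IF NOT EXISTS events (" +
        "    server_timestamp DATE NOT NULL," +
        "    app_id VARCHAR NOT NULL," +
        "    payload VARCHAR" +
        "    CONSTRAINT my_pk PRIMARY KEY (server_timestamp ROW_TIMESTAMP, app_id))");
}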
For the issue of getting
io to make it faster?
>
>
>
> Right now, I’m only averaging about 5 queries/second, even though I’m
> querying by the primary key.
>
>
>
> Before I upgraded, I was getting a lot closer to 100.
>
>
>
> Thanks!
>
>
>
>
>
>
>
> *From:* Samarth Jain [
Zack,
These stats are collected continuously and at the global client level. So
collecting them only when the query takes more than 1 second won't work. A
better alternative for you would be to report stats at a request level. You
could then conditionally report the metrics for queries that exceed
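The simplest form of that is plain wall-clock timing around the request (a
sketch; the query and the 1-second threshold are placeholders, and the
request-level metrics API gives far richer data than elapsed time):

import java.sql.*;

long start = System.nanoTime();
try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
     Statement stmt = conn.createStatement();
     ResultSet rs = stmt.executeQuery("SELECT * FROM my_table LIMIT 100")) {
    while (rs.next()) {
        // consume the rows
    }
}
long elapsedMs = (System.nanoTime() - start) / 1_000_000;
if (elapsedMs > 1000) {
    // only slow requests get reported; pull the request-level metrics here
    System.err.println("slow query took " + elapsedMs + " ms");
}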
Zack,
What version of HBase are you running? And which version of Phoenix
(specifically 4.6-0.98 version or 4.6-1.x version)? FWIW, I don't see the
MetaRegionTracker.java file in HBase branches 1.x and master. Maybe you
don't have the right hbase-client jar in place?
- Samarth
On Wed, Dec 9, 201
Pierre,
Thanks for reporting this. Do you mind filing a JIRA? Also, as a
workaround, can you check if changing the data type from UNSIGNED_LONG to
BIGINT resolves the issue?
-Samarth
On Friday, December 4, 2015, pierre lacave wrote:
>
> Hi,
>
> I am trying to use the ROW_TIMESTAMP mapping feat
Hi Zack,
One simple way to expose the number of open phoenix connections would be
via global client metrics that Phoenix exposes at the client JVM level. I
have filed https://issues.apache.org/jira/browse/PHOENIX-2485.
The client side metrics capability of Phoenix needs to be documented. I
have f
Josh,
One step worth trying would be to register the PhoenixDriver instance
and see if that helps. Something like this:
DriverManager.registerDriver(PhoenixDriver.INSTANCE)
Connection con = DriverManager.getConnection("jdbc:phoenix:localhost:2181")
- Samarth
On Wed, Dec 2, 2015 at 3:41 PM,
>
>
>
> On Sat, Nov 7, 2015 at 6:53 PM, James Taylor
> wrote:
>
>> If you have time-series data for which you'd like to improve query
>> performance, take a look at this[1] blog written by Samarth Jain on a new
>> feature in our 4.6 release:
>>
>> https://blogs.apache.org/phoenix/entry/new_optimization_for_time_series
>>
>> Enjoy!
>>
>> James
>>
>
>
The Apache Phoenix team is pleased to announce the immediate availability
of the 4.6 release with support for HBase 0.98/1.0/1.1.
Some of the highlights of this release include:
Support for surfacing HBase native timestamp [1]
Support for correlate variable [2]
Alpha version of a web-app for visu
Alok,
Please answer the below questions to help us figure out what might be going
on:
1) How many region servers are on the cluster?
2) What is the value configured for hbase.regionserver.handler.count?
3) What kind of queries is your test executing - point look up / range /
aggregate/ full tab
Thanks for reporting back, Mark. I have checked in the updated patch that
should fix the error you are running into. After the fix (
https://github.com/apache/phoenix/commit/416c860f7d9f3490d46169fa74656994b4fe27a8),
I was able to successfully upgrade from 4.4 to 4.6 version of Phoenix.
On Fri, Oc
er has to be able to hold multiple salt
> buckets. Is that correct?
> 3. Where does Phoenix maintain the mapping of salt buckets to region
> server given that the two are orthogonal to each other?
>
> Best regards,
> Sumit
>
> --
> *From:* Sa
- The default value of phoenix.query.rowKeyOrderSaltedTable is true, and that
ensures that the LIMIT clause returns data in row key order
This is no longer the case starting Phoenix 4.4. You need to provide an
explicit ORDER BY on row key columns if you need the rows to be returned in
row key order.
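For example (a sketch against a hypothetical salted table; the point is the
explicit ORDER BY on the leading key columns):

import java.sql.*;

try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
     Statement stmt = conn.createStatement();
     // without the ORDER BY, a salted table may return rows in salt-bucket order
     ResultSet rs = stmt.executeQuery(
         "SELECT host, created_date FROM metrics ORDER BY host, created_date LIMIT 100")) {
    while (rs.next()) {
        System.out.println(rs.getString(1) + " " + rs.getDate(2));
    }
}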
On Wed,
Serega, any chance you have other queries concurrently executing on the
client? What version of Phoenix are you on?
On Tuesday, October 6, 2015, Serega Sheypak
wrote:
> Hi, found smth similar here:
>
> http://mail-archives.apache.org/mod_mbox/phoenix-user/201501.mbox/%3CCAAF1Jdg-E4=54e5dC3WazL=m
To add to what Jesse said, you can override the default scanner fetch size
programmatically via Phoenix by calling statement.setFetchSize(int).
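For example (a sketch; the URL and table name are made up):

import java.sql.*;

try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
     Statement stmt = conn.createStatement()) {
    stmt.setFetchSize(500); // rows fetched per RPC, instead of the hbase.client.scanner.caching default
    try (ResultSet rs = stmt.executeQuery("SELECT * FROM big_table")) {
        while (rs.next()) {
            // rows arrive from the server in chunks of roughly 500
        }
    }
}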
On Tuesday, October 6, 2015, Jesse Yates wrote:
> So HBase (and by extension, Phoenix) does not do true "streaming" of rows
> - rows are copied into memo
Hi Konstantinos,
Can you tell us what versions of Phoenix and HBase are you using?
- Samarth
On Wed, Sep 30, 2015 at 1:46 PM, anil gupta wrote:
> As per the stack trace, that looks like a bug to me.
>
> On Wed, Sep 30, 2015 at 7:27 AM, Konstantinos Kougios <
> kostas.koug...@googlemail.com>
ported, or point me to some docs that reiterate
> that, it would help my case to refactor all the scripts to fit the
> supported format.
>
>
>
> Thanks,
>
> Jeff
>
>
>
> *From:* Samarth Jain [mailto:sama...@apache.org]
> *Sent:* September-14-15 7:32 PM
> *To:* user@ph
Hi Sumit,
Phoenix doesn't cache query plans as of yet. Once we move over to Calcite
parser and optimizer (which is a work in progress), we will hopefully start
doing that which is when your suggested approach of using PreparedStatement
with bind params would be beneficial.
- Samarth
On Mon, Sep
MO. Thanks in advance for offering to look into it, Samarth.
>
> On Sat, Sep 12, 2015 at 11:22 AM, Samarth Jain wrote:
>
>> Jeffrey,
>>
>> I will look into this and get back to you.
>>
>> - Samarth
>>
>> On Thu, Sep 10, 2015 at 8:44 AM, Jeffr
Jeffrey,
I will look into this and get back to you.
- Samarth
On Thu, Sep 10, 2015 at 8:44 AM, Jeffrey Lyons
wrote:
> Hey all,
>
>
>
> I have recently tried upgrading my Phoenix version from 4.4-HBase-0.98 to
> build 835 on 4.x-HBase-0.98 to get some of the new changes. After the
> upgrade it
me (therefore, a lot of connections) ??
> Also, can you expand a little bit more on the implications of having a
> pooling mechanism for Phoenix connections?
> Thanks in advance!
> -Jaime
>
> On Thu, Sep 3, 2015 at 3:35 PM, Samarth Jain
> wrote:
>
>> Yes. PhoenixConnectio
Zack,
The configs that you overrode do not apply when establishing a connection to
HBase via Phoenix.
You might want to muck around with hbase.client.retries.number and
zookeeper.recovery.retry
to see if you can get a faster response if HBase is down. I am not an
expert in that area though. Someone
n, right?
>
> 2015-09-03 21:26 GMT+02:00 Samarth Jain :
>
>> Your pattern is correct.
>>
>> Phoenix doesn't cache connections. You shouldn't pool them and you
>> shouldn't share them with multiple threads.
>>
>> For batching upserts, you
Your pattern is correct.
Phoenix doesn't cache connections. You shouldn't pool them and you
shouldn't share them with multiple threads.
For batching upserts, you could do something like this:
try (Connection conn = DriverManager.getConne
Ralph,
Couple of questions.
Do you have phoenix stats enabled?
Can you send us a stacktrace of RegionTooBusy exception? Looking at HBase
code it is thrown in a few places. Would be good to check where the
resource crunch is occurring at.
On Tue, Sep 1, 2015 at 2:26 PM, Perko, Ralph J wrote:
, 2015 at 6:02 AM, Sunil B wrote:
> Hi Samarth,
>
> The patch definitely solves the issue. The query "select * from
> table" retrieves all the records. Thanks for the patch.
>
> Thanks,
> Sunil
>
> On Tue, Aug 25, 2015 at 1:21 PM, Samarth Jain
> wrote:
ng for the
> past 5 hours now. Will update the thread with success or failure.
> Code Analysis: ScanPlan.newIterator function uses SerialIterators
> instead of ParallelIterators if there is an "order by" in the query.
>
> Thanks,
> Sunil
>
> On Mon, Aug 24, 2015 at
Sunil,
Can you tell us a little bit more about the table -
1) How many regions are there?
2) Do you have phoenix stats enabled?
http://phoenix.apache.org/update_statistics.html
3) Is the table salted?
4) Do you have any overrides for scanner caching (
hbase.client.scanner.caching) or result s
Hi Sun,
There is no custom HBase filter that phoenix uses to scan specific time
range rows.
Having said that, I am currently working on
https://issues.apache.org/jira/browse/PHOENIX-914 that is going to provide
the capability of having a column directly map to HBase cell level
timestamp. By speci
ter
>> batching)
>>
>> On Wed, Aug 19, 2015 at 7:11 PM, Samarth Jain
>> wrote:
>>
>>> You can do this via phoenix by doing something like this:
>>>
>>> try (Connection conn = DriverManager.getConnection(url)) {
>>> conn.setAutoCommit(f
Yiannis,
Can you please provide a reproducible test case (schema, minimum data to
reproduce the error) along with the phoenix and hbase versions so we can
take a look at it further.
Thanks,
Samarth
On Thu, Aug 20, 2015 at 2:09 PM, Yiannis Gkoufas
wrote:
> Hi there,
>
> I am getting an error whi
You can do this via phoenix by doing something like this:
try (Connection conn = DriverManager.getConnection(url)) {
    conn.setAutoCommit(false);
    int batchSize = 0;
    int commitSize = 1000; // rows to commit per batch. Change this value according to your needs.
    while (moreRecordsToUpsert()) { // placeholder for your own "records left?" check
        conn.prepareStatement(upsertSql).executeUpdate(); // upsertSql: your UPSERT statement
        batchSize++;
        if (batchSize % commitSize == 0) {
            conn.commit();
        }
    }
    conn.commit(); // commit whatever is left in the final partial batch
}
eta.prepareAndExecute(RemoteMeta.java:157)
> at
> org.apache.calcite.avatica.AvaticaConnection.prepareAndExecuteInternal(AvaticaConnection.java:474)
> at
> org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:108)
> ... 7 more
>
>
> The versi
Changing the email group to user@phoenix.apache.org. Please don't use
phoenix-hbase-u...@googlegroups.com as that group is deprecated.
Can you try upgrading your HBase version? I see that the way HBase
configuration is being loaded has been changed in the later releases. I am
not seeing the issue
Hi Serega,
Do you know if auto-commit is on for the connection returned by
getJdbcFacade().createConnection()?
If not, you need to call connection.commit() after executeUpdate().
-Samarth
On Tuesday, June 23, 2015, Serega Sheypak wrote:
> Hi, I'm testing dummy code:
>
> int result = getJdbcFaca
Prasanth,
To help us answer you better, please let us know your table schema. Also,
what does EXPLAIN select * from <your table> limit 1000; tell you?
-Samarth
On Friday, May 15, 2015, Chagarlamudi, Prasanth <
prasanth.chagarlam...@epsilon.com> wrote:
> Hello,
> I would appreciate if someone could help with
Looking at the exception java.lang.RuntimeException:
org.apache.phoenix.exception.PhoenixIOException:
org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36,
exceptions:
Thu Apr 09 16:49:33 CEST 2015, null,
java.net.SocketTimeoutException: callTimeout=6, callDuration=62366
The Apache Phoenix team is pleased to announce the immediate
availability of the 4.3.1 release. Highlights of the release being:
- Global client side resource metrics
- SQL command to turn Phoenix tracing ON/OFF
- SQL command to allow setting tracing sampling frequency
- Capability to pass guide p
I think that page needs to be updated. Sorry about that, Ralph. We ran into
problems with HBase 0.98.4 and local indexes where a similar (but not the
same) error was thrown:
Coprocessor.CoprocessorHost: the coprocessor …LocalIndexSplitter threw an
exception
NoSuchMethodError hbase.regionserver.Reg
If you are using sqlline.py, then by default autocommit should be on. To
confirm, can you run !autocommit and see what the output is?
On Mon, Mar 30, 2015 at 9:05 PM, 梁鹏程 wrote:
> hi,
> phoenix.connection.autoCommit in the hbase-site.xml is server side or
> client side?
> thanks.
>
> Regards,
Hi Matt,
Any chance your hint is going to the next line when using squirrel? Just to
be sure, make sure you do this:
Navigate to File --> New Session Properties --> Tab SQL and uncheck the
"Remove multi line comment (/.../) from SQL before sending it to database"
so that hints you include in quer
You also want to make sure that you are using compatible versions of client
and server jars:
phoenix-core version 4.3.0 and phoenix-server.jar version 4.2.3 are
*NOT* compatible.
The server side jar version should always be the same or newer (in version)
than the client side jar. In general
Hi Naga,
Can you try
create table mt3 (
tenant_id varchar NOT NULL,
tenant_name varchar
constraint mt_pk primary key (tenant_id, tenant_name)
) multi_tenant=true;
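Once the table is multi-tenant, a tenant connects with the TenantId
connection property and sees only its own rows (a sketch; the tenant id
value is made up):

import java.sql.*;
import java.util.Properties;

Properties props = new Properties();
props.setProperty("TenantId", "acme"); // scopes every statement on this connection to one tenant
try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181", props);
     Statement stmt = conn.createStatement();
     // tenant_id is implicit here; only rows with tenant_id = 'acme' are visible
     ResultSet rs = stmt.executeQuery("SELECT tenant_name FROM mt3")) {
    while (rs.next()) {
        System.out.println(rs.getString(1));
    }
}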
- Samarth
On Wednesday, March 11, 2015, Naga Vijayapuram
wrote:
> I am on HDP 2.2 ; it uses Phoenix 4.2 ; Is this fixed
You would likely need to modify the source code if you want 1.6. We use the
try-with-resources construct in a few places now and that is 1.7+ only.
On Sun, Mar 1, 2015 at 4:43 PM, James Taylor wrote:
> Hi Noam,
> Java 1.6 was end of life more than a year ago, so Phoenix binaries no
> longer support i
+1 to what Ralph said.
FWIW, starting with 4.3 (soon to be out) we allow setting HBase properties
like TTL through ALTER TABLE. However, you can't have different TTL for
different column families.
On Wednesday, February 11, 2015, Perko, Ralph J
wrote:
> That is a great point. Setting the TTL
Hey Sun,
I believe you are running into an HTrace bug. See this:
https://issues.apache.org/jira/browse/HTRACE-92
With tracing on, we end up publishing many more metrics records than we
should. These records find their way to the tracing table, which ends up
causing an infinite loop which ends
Vijay,
Is there a reason why you are doing PhoenixResultSet.string()? Is it for
logging purposes?
Regarding your question regarding increase in object creation time, that
doesn't seem like it is phoenix related. Are you seeing an increase in time
for resultset.next() or are you seeing an increase
There is only one HConnection per cluster. Every phoenix connection to a
cluster shares the same underlying HConnection. See
ConnectionQueryServicesImpl#init() where we make sure that there is only
one HConnection to the cluster. The HConnection is established the first
time that the phoe
The value of timestamp provided by CURRENT_SCN_ATTRIBUTE has to be greater
than the table timestamp. So it really is any arbitrary value >= table
create timestamp. Providing timestamps on connections helps us with
executing point in time or snapshot queries. In other words, it's a way of
surfacing
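Concretely, a point-in-time connection looks something like this (a sketch;
the SCN value is an arbitrary epoch-millis timestamp chosen for illustration):

import java.sql.*;
import java.util.Properties;

Properties props = new Properties();
// CurrentSCN pins every read on this connection to this point in time
props.setProperty("CurrentSCN", Long.toString(1417597200000L));
try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181", props);
     Statement stmt = conn.createStatement();
     ResultSet rs = stmt.executeQuery("SELECT * FROM MYTEST1")) {
    while (rs.next()) {
        // rows exactly as they existed at the given timestamp
    }
}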
Is there a reason why you are using CURRENT_SCN_ATTRIBUTE while you are
getting a phoenix connection? Is it because you want to query data at a
point of time? If yes, you probably want to check that the create
timestamp of the table MYTEST1 <= 141759720L. If you don't want any
snapshot-like qu
Hi Vijay,
One of the closing parentheses is wrongly placed.
Try:
create *view* "events" (
"cid" UNSIGNED_LONG,
"timestamp" UNSIGNED_LONG,
"category" CHAR(128),
D."pc" VARCHAR,
D."un" VARCHAR,
D."ug" VARCHAR
CONSTRAINT pk PR
Supporting limit/offset-like queries is inherently inefficient for HBase.
Alternatively, you could use Phoenix's support for row value constructors to
page through query results. See http://phoenix.apache.org/paged.html for
details.
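A paging query along those lines (a sketch with a hypothetical two-column
key; the bind values come from the last row of the previous page):

import java.sql.*;

Date lastDate = Date.valueOf("2014-11-23"); // key of the last row already seen (made up)
long lastEventId = 12345L;                  // second key column of that row (made up)
try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
     PreparedStatement ps = conn.prepareStatement(
         // the row value constructor seeks straight past the previous page
         "SELECT * FROM events WHERE (created_date, event_id) > (?, ?) " +
         "ORDER BY created_date, event_id LIMIT 100")) {
    ps.setDate(1, lastDate);
    ps.setLong(2, lastEventId);
    try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
            // next page of up to 100 rows
        }
    }
}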
On Sunday, November 23, 2014, su...@certusnet.com.cn
wrote:
>
10 lac is 1 million.
Siddharth, please let us know the schema and the query you are executing
too. Thanks!
On Wednesday, November 12, 2014, Vladimir Rodionov
wrote:
> What does RS log file say, Siddharth?
>
> Btw, what does 'lac' stand for? In '10 lac'?
>
> -Vladimir
>
> On Wed, Nov 12, 2014 at
es bytes and makes an int from
>> the
>> > bytes...
>> >
>> >
>> https://github.com/apache/phoenix/blob/f99e5d8d609d326fb3571255cd8f47961b1c6860/phoenix-hadoop-compat/src/main/java/org/apache/phoenix/trace/TracingCompat.java#L56
>> >
>> >
>>
Hi Russell,
I am not a Scala guy, but do you know if calling
classOf[com.salesforce.phoenix.jdbc.PhoenixDriver]
ends up loading the java class and hence executing the static block? If it
doesn't you might want to try DriverManager.registerDriver(
com.salesforce.phoenix.jdbc.PhoenixDriver) and then
Dan,
Can you tell me how you are running your tests? Do you have the test class
annotated with the right category annotation - @Category(
HBaseManagedTimeTest.class). Also, can you send over your test class to see
what might be causing problems?
Thanks,
Samarth
On Thu, Aug 28, 2014 at 10:34 AM,