inconsistent commit behavior when using JDBC

2019-05-22 Thread M. Aaron Bossert
I am using Phoenix 5 as shipped with Hortonworks HDP 3.1. I am storing 3+ million file names in a table and then using the table to keep track of which files I have processed using a Storm topology. I have been doing some testing to make sure that everything is working correctly and as part of
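For context on the commit behavior being tested: plain Phoenix JDBC connections generally start with autoCommit off, so UPSERTs are not visible to other clients until an explicit commit. A minimal sketch of the pattern, assuming a hypothetical ZooKeeper quorum, table, and column names (none taken from the thread):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class CommitSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical quorum address; adjust for your cluster.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181")) {
            conn.setAutoCommit(false); // be explicit rather than relying on the default
            try (PreparedStatement ps = conn.prepareStatement(
                    "UPSERT INTO FILE_TRACKING (FILE_NAME, PROCESSED) VALUES (?, TRUE)")) {
                ps.setString(1, "some-file.dat");
                ps.executeUpdate();
            }
            conn.commit(); // nothing is visible to other readers until this point
        }
    }
}
```

If a Storm bolt skips the explicit commit (or each worker assumes a different autoCommit setting), reads can appear inconsistent even though every write "succeeded".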

Re: Query logging - PHOENIX-2715

2019-04-19 Thread M. Aaron Bossert
Sorry if I have missed something obvious, but I saw that this was implemented (according to JIRA) in 4.14 and 5.0.0. I need to set this up to log each query in that SYSTEM:LOG table, but cannot seem to find the official instructions for how to configure this through LOG4J settings or whatever it
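Query logging is configured through Phoenix properties rather than LOG4J. A client-side hbase-site.xml fragment along these lines should enable it; the property name and values assume the PHOENIX-2715 implementation, so verify them against your release:

```xml
<!-- Enable Phoenix query logging into the SYSTEM.LOG table -->
<property>
  <name>phoenix.log.level</name>
  <!-- one of OFF, INFO, DEBUG, TRACE -->
  <value>DEBUG</value>
</property>
```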

unexpected behavior...MIN vs ORDER BY and LIMIT 1

2019-01-15 Thread M. Aaron Bossert
I have a table (~724M rows) with a secondary index on the "TIME" column. When I run a MIN function on the table, the query takes ~290 sec to complete, but by selecting TIME and ordering by TIME with LIMIT 1, the query runs in about 0.04 sec. Here is the explain output for both queries...I totally
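For reference, the two query shapes being compared would look roughly like this (the table name is hypothetical; only the TIME column comes from the post). The reported timings suggest the aggregate form scans the whole table, while the ORDER BY form reads a single leading row from the secondary index:

```sql
-- Aggregate form: ~290 sec on ~724M rows
SELECT MIN(TIME) FROM MY_TABLE;

-- Index-friendly rewrite: ~0.04 sec
SELECT TIME FROM MY_TABLE ORDER BY TIME ASC LIMIT 1;
```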

Re: client does not have phoenix.schema.isNamespaceMappingEnabled

2018-11-30 Thread M. Aaron Bossert
> Realistically, you might only need to provide HBASE_CONF_DIR to the > HADOOP_CLASSPATH env variable, so that your mappers and reducers also > get it on their classpath. The rest of the Java classes would be > automatically localized via `hadoop jar`. > > On 11/29/18 1:27 PM,

Re: client does not have phoenix.schema.isNamespaceMappingEnabled

2018-11-29 Thread M. Aaron Bossert
ib/hbase-protocol.jar:/etc/hbase/ > 3.0.1.0-187/0/ > > Most times, including the output of `hbase mapredcp` is sufficient ala > > HADOOP_CLASSPATH="$(hbase mapredcp)" hadoop jar ... > > On 11/27/18 10:48 AM, M. Aaron Bossert wrote: > > Folks, > > > > I
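The quoted advice boils down to a single invocation; the jar and driver class below are placeholders, and the config path assumes a typical HBASE_CONF_DIR location:

```shell
# Put the HBase MapReduce dependencies and HBase config on the job classpath
HADOOP_CLASSPATH="$(hbase mapredcp):/etc/hbase/conf" \
  hadoop jar my-phoenix-job.jar com.example.MyDriver
```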

Re: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8689

2017-07-06 Thread M. Aaron Bossert
I can't be definitive, but I have had a very similar issue in the past. The root cause was that my NTP server had died and a couple of nodes in the cluster got wildly out of sync. Check your HDFS health, and if there are under-replicated blocks, this "could" be your issue (though root cause
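The two checks suggested above can be run from any node; the exact time-sync tool varies by distribution:

```shell
# Look for under-replicated or corrupt blocks in HDFS
hdfs fsck / | grep -iE 'under.?replicated|corrupt'

# Check clock synchronization status (chrony vs. ntpd depends on the distro)
ntpstat || chronyc tracking
```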

Re: Create View of Existing HBase table

2017-06-19 Thread M. Aaron Bossert
ue expected in HBase, which would be a future time stamp > for HBase. > > Randy > > On Sat, Jun 17, 2017 at 8:40 PM, M. Aaron Bossert [via Apache Phoenix User > List] <ml+s1124778n3687...@n5.nabble.com> wrote: > >> One potential difference might be resolution.

Re: Create View of Existing HBase table

2017-06-17 Thread M. Aaron Bossert
m/tutorials/system-currentTimeMillis.html#unix-timestamp > > For testing purpose, maybe you can insert some cells without explicit > timestamp to confirm whether timestamp is the issue. > > Randy > > On Sat, Jun 17, 2017 at 6:21 PM, M. Aaron Bossert [via Apache Phoenix User > L

Re: Create View of Existing HBase table

2017-06-17 Thread M. Aaron Bossert
For testing purpose, maybe you can insert some cells without explicit > timestamp to confirm whether timestamp is the issue. > > Randy > > On Sat, Jun 17, 2017 at 6:21 PM, M. Aaron Bossert [via Apache Phoenix User > List] <ml+s1124778n3684...@n5.nabble.com> wrote: > >

Re: Create View of Existing HBase table

2017-06-17 Thread M. Aaron Bossert
the values were persisted. > Here is the time stamp issue I discovered with view. The solution (work > around) is in the last post: > > http://apache-phoenix-user-list.1124778.n5.nabble.com/View-timestamp-on-existing-table-potential-defect-td3475.html > > Randy > > On Fri, Ju
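The experiment suggested in the quoted reply can be run from the HBase shell: write one cell without an explicit timestamp (the server assigns current time) and one with an explicit, far-future timestamp, then check which is visible through the Phoenix view. Table, family, and values here are hypothetical:

```
hbase> put 't1', 'row1', 'f1:q1', 'server-assigned-ts'
hbase> put 't1', 'row2', 'f1:q1', 'explicit-future-ts', 9999999999999
hbase> scan 't1'
```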

Re: Phoenix and Tableau

2016-01-28 Thread Aaron Bossert
Sorry for butting in, but do you mean that Tableau supports JDBC drivers? I have wanted to connect Phoenix to Tableau for some time now as well, but have not seen any documentation from Tableau to suggest that they now support JDBC drivers. Just references to using a JDBC-ODBC bridge driver,

Re: yet another question...perhaps dumb...JOIN with two conditions

2015-09-11 Thread M. Aaron Bossert
s, I know. That timeout was because Phoenix was doing CROSS JOIN which > made progressing with each row very slow. > Even if it could succeed, it would take a long time to complete. > > Thanks, > Maryann > > On Fri, Sep 11, 2015 at 11:58 AM, M. Aaron Bossert <maboss...@gmail.

Re: yet another question...perhaps dumb...JOIN with two conditions

2015-09-11 Thread Aaron Bossert
stands, you're trying to construct 250K*270M pairs > before filtering them. That's 67.5 trillion. You will need a quantum computer. > > I think you will be better off restructuring... > > James > >> On 11 Sep 2015 5:34 pm, "M. Aaron Bossert" <maboss...@g
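The back-of-the-envelope number in the reply checks out:

```python
# A cross join of 250K rows against 270M rows materializes every pair
# before any filtering can happen.
pairs = 250_000 * 270_000_000
print(pairs)         # -> 67500000000000
print(pairs / 1e12)  # -> 67.5 (trillion)
```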

yet another question...perhaps dumb...JOIN with two conditions

2015-09-10 Thread M. Aaron Bossert
I am trying to execute the following query, but get an error...is there another way to achieve the same result by restructuring the query? QUERY: SELECT * FROM NG.AKAMAI_FORCEFIELD AS FORC INNER JOIN NG.IPV4RANGES AS IPV4 ON FORC.SOURCE_IP >= IPV4.IPSTART AND FORC.SOURCE_IP <= IPV4.IPEND;
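One common way to restructure a pure range join like this is to add a coarse equality key to both tables (for example, the first octet of the IP) so the planner can hash-join on it, then apply the range predicate as a post-join filter. The SOURCE_OCTET1 and START_OCTET1 columns below are hypothetical additions, not columns from the original schema:

```sql
SELECT *
FROM NG.AKAMAI_FORCEFIELD AS FORC
INNER JOIN NG.IPV4RANGES AS IPV4
  ON FORC.SOURCE_OCTET1 = IPV4.START_OCTET1
WHERE FORC.SOURCE_IP >= IPV4.IPSTART
  AND FORC.SOURCE_IP <= IPV4.IPEND;
```

Note that ranges spanning an octet boundary would need to be split into one row per bucket for this sketch to return complete results.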

Re: JOIN issue, getting errors

2015-09-09 Thread Aaron Bossert
? Also, > what versions of HBase / Phoenix are you using? > >> On Tue, Sep 8, 2015 at 12:33 PM, M. Aaron Bossert <maboss...@gmail.com> >> wrote: >> All, >> >> Any help would be greatly appreciated... >> >> I have two tables with the followi

JOIN issue, getting errors

2015-09-08 Thread M. Aaron Bossert
All, Any help would be greatly appreciated... I have two tables with the following structure: CREATE TABLE NG.BARS_Cnc_Details_Hist ( ip varchar(30) not null, last_active date not null, cnc_type varchar(5), cnc_value varchar(50), pull_date date CONSTRAINT cnc_pk PRIMARY
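The preview cuts off mid-constraint. A plausible completion is shown below; the primary-key column list is an assumption inferred from the NOT NULL columns, not taken from the original message:

```sql
CREATE TABLE NG.BARS_CNC_DETAILS_HIST (
    ip          VARCHAR(30) NOT NULL,
    last_active DATE        NOT NULL,
    cnc_type    VARCHAR(5),
    cnc_value   VARCHAR(50),
    pull_date   DATE
    CONSTRAINT cnc_pk PRIMARY KEY (ip, last_active)  -- hypothetical PK columns
);
```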