[jira] [Commented] (PHOENIX-2179) Trace output contains extremely large number of readBlock rows

2015-08-31 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14724846#comment-14724846
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-2179:
--

Since this tracing happens inside HBase, disabling it would require a code 
change in HBase.
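
In the meantime, a possible workaround on the query side is to exclude the 
noisy spans when analyzing a trace. A sketch against the SYSTEM.TRACING_STATS 
schema shown in the issue description (TRACE_ID=X is a placeholder, as in the 
original query):

{code}
-- Hypothetical workaround: aggregate a trace while skipping the readBlock spans
SELECT count(*), description
FROM SYSTEM.TRACING_STATS
WHERE trace_id = X
  AND description != 'HFileReaderV2.readBlock'
GROUP BY description;
{code}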

> Trace output contains extremely large number of readBlock rows
> --
>
> Key: PHOENIX-2179
> URL: https://issues.apache.org/jira/browse/PHOENIX-2179
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Mujtaba Chohan
>Priority: Minor
>  Labels: Tracing
>
> As an example, tracing a count(*) query over a 10M-row table produces over 
> 250K rows for HFileReaderV2.readBlock.
> {code}
> select count(*), description from SYSTEM.TRACING_STATS
> WHERE TRACE_ID=X group by description;
> +----------+---------------------------+
> | COUNT(1) | DESCRIPTION               |
> +----------+---------------------------+
> | 3        | ClientService.Scan        |
> | 253879   | HFileReaderV2.readBlock   |
> | 1        | Scanner opened on server  |
> +----------+---------------------------+
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2208) Navigation to trace information in tracing UI should be driven off of query instead of trace ID

2015-08-31 Thread Nishani (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14724611#comment-14724611
 ] 

Nishani  commented on PHOENIX-2208:
---

For the regex query search, could we use a SQL LIKE ('%...%') query? Yes, it is 
possible to have top headers for each feature.
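
A rough sketch of what a LIKE-based search over the tracing table could look 
like (the '%count%' pattern is illustrative, and it is an assumption that the 
query text is available in the DESCRIPTION column of top-level spans):

{code}
-- Hypothetical: find traces whose description matches a pattern
SELECT DISTINCT trace_id, description
FROM SYSTEM.TRACING_STATS
WHERE description LIKE '%count%';
{code}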

> Navigation to trace information in tracing UI should be driven off of query 
> instead of trace ID
> ---
>
> Key: PHOENIX-2208
> URL: https://issues.apache.org/jira/browse/PHOENIX-2208
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Nishani 
>
> Instead of driving the trace UI based on the trace ID, we should drive it off 
> of the query string: something like a drop-down list showing the query strings 
> of the last N queries, which can be selected from, with a search box for a 
> regex query string (and perhaps a time range) that would look up the trace ID 
> under the covers. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2219) Tracing UI - Add page for E2E tracing

2015-08-31 Thread Nishani (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14724608#comment-14724608
 ] 

Nishani  commented on PHOENIX-2219:
---

Does the SQL need to run on the web page or at the Phoenix prompt? Do you have 
a code sample where tracing is turned on/off?
Are the charts drawn only for that particular query?
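
For reference, a minimal sketch of tracing a single statement from the Phoenix 
prompt, assuming the TRACE ON/OFF statements available in recent Phoenix 
releases (my_table is a placeholder):

{code}
TRACE ON;                        -- start tracing; prints the new trace id
SELECT count(*) FROM my_table;   -- the statement to trace
TRACE OFF;                       -- stop tracing for this connection
{code}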

> Tracing UI - Add page for E2E tracing
> -
>
> Key: PHOENIX-2219
> URL: https://issues.apache.org/jira/browse/PHOENIX-2219
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Mujtaba Chohan
>Assignee: Nishani 
>Priority: Minor
>  Labels: tracing
>
> Create a page which, under the covers, turns on tracing, runs a specified SQL 
> statement, gets its trace ID, and then plots the relevant charts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2218) Tracing UI - List page should display last x number of top level traces

2015-08-31 Thread Nishani (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14724605#comment-14724605
 ] 

Nishani  commented on PHOENIX-2218:
---

Hi,

By top-level traces, do you mean the parent traces? Is there any sample SQL for 
this?

> Tracing UI - List page should display last x number of top level traces
> ---
>
> Key: PHOENIX-2218
> URL: https://issues.apache.org/jira/browse/PHOENIX-2218
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Mujtaba Chohan
>Assignee: Nishani 
>Priority: Minor
>  Labels: tracing
>
> Rather than displaying all traces, the trace list page should only display the 
> last few traces, or those from the last x hours; this number can be chosen 
> from a drop-down list. Also, only top-level traces and their descriptions 
> should be shown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-2149) MAX Value of Sequences not honored when closing Connection between calls to NEXT VALUE FOR

2015-08-31 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva resolved PHOENIX-2149.
-
Resolution: Fixed

> MAX Value of Sequences not honored when closing Connection between calls to 
> NEXT VALUE FOR
> --
>
> Key: PHOENIX-2149
> URL: https://issues.apache.org/jira/browse/PHOENIX-2149
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0
>Reporter: Jan Fernando
>Assignee: Jan Fernando
>  Labels: SFDC
> Fix For: 4.5.1
>
> Attachments: PHOENIX-2149-v2.patch, PHOENIX-2149.patch
>
>
> There appears to be an issue related to closing connections between calls to 
> NEXT VALUE FOR that causes the MAX sequence value to be ignored. I have found 
> scenarios, when allocating sequence values near the MAX, whereby the MAX is 
> not honored and values greater than the max are returned by NEXT VALUE FOR.
> It appears to be related to the logic that returns all sequences on connection 
> close. If you close the connection between each invocation, then when you hit 
> the max value, sequence values continue to be doled out instead of the 
> expected error being thrown. For some reason the limit_reached_flag is not 
> being set correctly on the SYSTEM.SEQUENCE table for the sequence in this 
> case.
> I added the test below to SequenceBulkAllocationIT that repros the issue.
> If I either a) remove the nextConnection() call that keeps recycling 
> connections in the test below, or b) comment out the code in 
> PhoenixConnection.close() that calls services.removeConnection(), the test 
> below starts to pass.
> I wasn't able to repro in SQuirreL, I guess because it doesn't recycle 
> connections.
> {code}
> @Test
> public void testNextValuesForSequenceClosingConnections() throws Exception {
>     final SequenceProperties props = new SequenceProperties.Builder()
>             .incrementBy(1).startsWith(4990).cacheSize(10)
>             .minValue(4990).maxValue(5000).numAllocated(4989).build();
> 
>     // Create sequence
>     nextConnection();
>     createSequenceWithMinMax(props);
>     nextConnection();
> 
>     // Try and get next value
>     try {
>         long val = 0L;
>         for (int i = 0; i <= 11; i++) {
>             ResultSet rs = conn.createStatement().executeQuery(
>                     String.format(SELECT_NEXT_VALUE_SQL, "bulkalloc.alpha"));
>             rs.next();
>             val = rs.getLong(1);
>             nextConnection();
>         }
>         fail("Expect to fail as this value is greater than seq max " + val);
>     } catch (SQLException e) {
>         assertEquals(SQLExceptionCode.SEQUENCE_VAL_REACHED_MAX_VALUE.getErrorCode(),
>                 e.getErrorCode());
>         assertTrue(e.getNextException() == null);
>     }
> }
> {code}
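
For reference, the sequence exercised by the test above corresponds roughly to 
the following DDL and usage (a sketch assembled from the SequenceProperties 
builder values, not taken from the patch itself):

{code}
CREATE SEQUENCE bulkalloc.alpha
    START WITH 4990 INCREMENT BY 1
    MINVALUE 4990 MAXVALUE 5000
    CACHE 10;

-- Each call hands out the next value; requesting a value past MAXVALUE (5000)
-- should raise SEQUENCE_VAL_REACHED_MAX_VALUE
SELECT NEXT VALUE FOR bulkalloc.alpha;
{code}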



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-2214) ORDER BY optimization incorrect for queries over views containing WHERE clause

2015-08-31 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-2214.
---
Resolution: Duplicate

> ORDER BY optimization incorrect for queries over views containing WHERE clause
> --
>
> Key: PHOENIX-2214
> URL: https://issues.apache.org/jira/browse/PHOENIX-2214
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.1
>Reporter: Eli Levine
> Attachments: 
> 0001-Test-case-to-outline-issue-with-view-ORDER-BY-optimi.patch
>
>
> Phoenix optimizes away ORDER BY clauses if they specify the same order as the 
> default PK order. However, this optimization is not done correctly for views 
> (tenant-specific and regular) if the view has been created with a WHERE 
> clause.
> See the attached patch for a repro, in which the last assertEquals() fails 
> because the ORDER BY is not optimized away as expected.
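
A minimal sketch of the shape of the repro (table, view, and column names are 
hypothetical; the attached patch contains the authoritative test case):

{code}
CREATE TABLE t (tenant_id VARCHAR NOT NULL, k VARCHAR NOT NULL, val VARCHAR
    CONSTRAINT pk PRIMARY KEY (tenant_id, k));
CREATE VIEW my_view AS SELECT * FROM t WHERE val = 'a';

-- The ORDER BY matches the default PK order, so it should be optimized away,
-- but reportedly is not when the view carries a WHERE clause
SELECT * FROM my_view ORDER BY tenant_id, k;
{code}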



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2214) ORDER BY optimization incorrect for queries over views containing WHERE clause

2015-08-31 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14724309#comment-14724309
 ] 

James Taylor commented on PHOENIX-2214:
---

Actually, this is a duplicate of PHOENIX-2194, so let me close this one.

> ORDER BY optimization incorrect for queries over views containing WHERE clause
> --
>
> Key: PHOENIX-2214
> URL: https://issues.apache.org/jira/browse/PHOENIX-2214
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.1
>Reporter: Eli Levine
> Attachments: 
> 0001-Test-case-to-outline-issue-with-view-ORDER-BY-optimi.patch
>
>
> Phoenix optimizes away ORDER BY clauses if they specify the same order as the 
> default PK order. However, this optimization is not done correctly for views 
> (tenant-specific and regular) if the view has been created with a WHERE 
> clause.
> See the attached patch for a repro, in which the last assertEquals() fails 
> because the ORDER BY is not optimized away as expected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2221) Option to make data regions not writable when index regions are not available

2015-08-31 Thread Devaraj Das (JIRA)
Devaraj Das created PHOENIX-2221:


 Summary: Option to make data regions not writable when index 
regions are not available
 Key: PHOENIX-2221
 URL: https://issues.apache.org/jira/browse/PHOENIX-2221
 Project: Phoenix
  Issue Type: Improvement
Reporter: Devaraj Das


In one use case, it was deemed better not to accept writes when the index 
regions are unavailable for any reason (as opposed to disabling the index and 
having queries fall back to bigger data-table scans).
The idea is that the index regions are kept consistent with the data regions, 
so that when a query runs against the index regions, one can be reasonably sure 
it ran with the most recent data in the data regions. When the index regions 
are unavailable, writes to the data table are rejected. Read queries off of the 
index regions would have deterministic performance (whereas if the index is 
disabled, read queries would have to go to the data regions until the indexes 
are rebuilt, and the queries would suffer).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (PHOENIX-2221) Option to make data regions not writable when index regions are not available

2015-08-31 Thread Alicia Ying Shu (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alicia Ying Shu reassigned PHOENIX-2221:


Assignee: Alicia Ying Shu

> Option to make data regions not writable when index regions are not available
> -
>
> Key: PHOENIX-2221
> URL: https://issues.apache.org/jira/browse/PHOENIX-2221
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Devaraj Das
>Assignee: Alicia Ying Shu
>
> In one use case, it was deemed better not to accept writes when the index 
> regions are unavailable for any reason (as opposed to disabling the index and 
> having queries fall back to bigger data-table scans).
> The idea is that the index regions are kept consistent with the data regions, 
> so that when a query runs against the index regions, one can be reasonably 
> sure it ran with the most recent data in the data regions. When the index 
> regions are unavailable, writes to the data table are rejected. Read queries 
> off of the index regions would have deterministic performance (whereas if the 
> index is disabled, read queries would have to go to the data regions until 
> the indexes are rebuilt, and the queries would suffer).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2220) Tracing UI - Instead of count, sum(endTime-startTime) should be used for group by charts

2015-08-31 Thread Mujtaba Chohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mujtaba Chohan updated PHOENIX-2220:

Labels: tracing  (was: )

> Tracing UI - Instead of count, sum(endTime-startTime) should be used for 
> group by charts
> 
>
> Key: PHOENIX-2220
> URL: https://issues.apache.org/jira/browse/PHOENIX-2220
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Mujtaba Chohan
>Assignee: Nishani 
>Priority: Minor
>  Labels: tracing
>
> Currently in the tracing UI, the group-by-description pages show a count 
> aggregate, which is not a useful measure. It would be better if 
> sum(endTime - startTime) were shown on these pages, and if these pages were 
> generated based on a specified trace, as detailed in 
> https://issues.apache.org/jira/browse/PHOENIX-2208



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2220) Tracing UI - Instead of count, sum(endTime-startTime) should be used for group by charts

2015-08-31 Thread Mujtaba Chohan (JIRA)
Mujtaba Chohan created PHOENIX-2220:
---

 Summary: Tracing UI - Instead of count, sum(endTime-startTime) 
should be used for group by charts
 Key: PHOENIX-2220
 URL: https://issues.apache.org/jira/browse/PHOENIX-2220
 Project: Phoenix
  Issue Type: Improvement
Reporter: Mujtaba Chohan
Assignee: Nishani 
Priority: Minor


Currently in the tracing UI, the group-by-description pages show a count 
aggregate, which is not a useful measure. It would be better if 
sum(endTime - startTime) were shown on these pages, and if these pages were 
generated based on a specified trace, as detailed in 
https://issues.apache.org/jira/browse/PHOENIX-2208
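
A sketch of the proposed aggregate, using the endTime/startTime naming from the 
issue title (actual column names in SYSTEM.TRACING_STATS may differ):

{code}
-- Sum of span durations per description for one trace, instead of a bare count
SELECT sum(end_time - start_time) AS total_time, description
FROM SYSTEM.TRACING_STATS
WHERE trace_id = X   -- X is a placeholder trace id
GROUP BY description;
{code}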



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2219) Tracing UI - Add page for E2E tracing

2015-08-31 Thread Mujtaba Chohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mujtaba Chohan updated PHOENIX-2219:

Summary: Tracing UI - Add page for E2E tracing  (was: Tracing - Add page 
for E2E tracing)

> Tracing UI - Add page for E2E tracing
> -
>
> Key: PHOENIX-2219
> URL: https://issues.apache.org/jira/browse/PHOENIX-2219
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Mujtaba Chohan
>Assignee: Nishani 
>Priority: Minor
>  Labels: tracing
>
> Create a page which, under the covers, turns on tracing, runs a specified SQL 
> statement, gets its trace ID, and then plots the relevant charts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2208) Navigation to trace information in tracing UI should be driven off of query instead of trace ID

2015-08-31 Thread Mujtaba Chohan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14724093#comment-14724093
 ] 

Mujtaba Chohan commented on PHOENIX-2208:
-

To add to this:

1. It would be better if all pages (dependency tree, timeline, and the various 
other charts) had a consistent layout and search capability: a static top 
header that displays the input box for regex query search and the last N 
queries.

2. Some of the pages that display group-by charts are currently driven off the 
entire tracing table; this is not scalable, and the same search functionality 
should be added to these pages as well.

> Navigation to trace information in tracing UI should be driven off of query 
> instead of trace ID
> ---
>
> Key: PHOENIX-2208
> URL: https://issues.apache.org/jira/browse/PHOENIX-2208
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Nishani 
>
> Instead of driving the trace UI based on the trace ID, we should drive it off 
> of the query string: something like a drop-down list showing the query strings 
> of the last N queries, which can be selected from, with a search box for a 
> regex query string (and perhaps a time range) that would look up the trace ID 
> under the covers. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2219) Tracing - Add page for E2E tracing

2015-08-31 Thread Mujtaba Chohan (JIRA)
Mujtaba Chohan created PHOENIX-2219:
---

 Summary: Tracing - Add page for E2E tracing
 Key: PHOENIX-2219
 URL: https://issues.apache.org/jira/browse/PHOENIX-2219
 Project: Phoenix
  Issue Type: Improvement
Reporter: Mujtaba Chohan
Assignee: Nishani 
Priority: Minor


Create a page which, under the covers, turns on tracing, runs a specified SQL 
statement, gets its trace ID, and then plots the relevant charts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2218) Tracing UI - List page should display last x number of top level traces

2015-08-31 Thread Mujtaba Chohan (JIRA)
Mujtaba Chohan created PHOENIX-2218:
---

 Summary: Tracing UI - List page should display last x number of 
top level traces
 Key: PHOENIX-2218
 URL: https://issues.apache.org/jira/browse/PHOENIX-2218
 Project: Phoenix
  Issue Type: Improvement
Reporter: Mujtaba Chohan
Assignee: Nishani 
Priority: Minor


Rather than displaying all traces, the trace list page should only display the 
last few traces, or those from the last x hours; this number can be chosen from 
a drop-down list. Also, only top-level traces and their descriptions should be 
shown.
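
A sketch of the kind of query that could back such a list page (column names 
are assumed, with top-level spans identified by a null parent id and start_time 
taken as epoch milliseconds):

{code}
-- Hypothetical: last 25 top-level traces from the past hour
SELECT trace_id, description, start_time
FROM SYSTEM.TRACING_STATS
WHERE parent_id IS NULL
  AND start_time > (<now_millis> - 3600000)   -- <now_millis> is a placeholder
ORDER BY start_time DESC
LIMIT 25;
{code}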



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2179) Trace output contains extremely large number of readBlock rows

2015-08-31 Thread Mujtaba Chohan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14724057#comment-14724057
 ] 

Mujtaba Chohan commented on PHOENIX-2179:
-

But that's about a quarter of a million rows logged for doing a count on a 
small table. Is there any way to disable logging at this granularity, since it 
would be costly in disk space and resource utilization?

> Trace output contains extremely large number of readBlock rows
> --
>
> Key: PHOENIX-2179
> URL: https://issues.apache.org/jira/browse/PHOENIX-2179
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Mujtaba Chohan
>Priority: Minor
>  Labels: Tracing
>
> As an example, tracing a count(*) query over a 10M-row table produces over 
> 250K rows for HFileReaderV2.readBlock.
> {code}
> select count(*), description from SYSTEM.TRACING_STATS
> WHERE TRACE_ID=X group by description;
> +----------+---------------------------+
> | COUNT(1) | DESCRIPTION               |
> +----------+---------------------------+
> | 3        | ClientService.Scan        |
> | 253879   | HFileReaderV2.readBlock   |
> | 1        | Scanner opened on server  |
> +----------+---------------------------+
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2216) Support single mapper pass to CSV bulk load table and indexes

2015-08-31 Thread Gabriel Reid (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14723962#comment-14723962
 ] 

Gabriel Reid commented on PHOENIX-2216:
---

I looked at this a while back, but never got far enough to actually get it 
going (due to lack of time).

The idea behind HBASE-3727 is what's needed, I think, but from my previous look 
at this I'm pretty sure it wouldn't be too difficult to get it working 
via the "normal" [Hadoop 
MultipleOutputs|https://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapreduce/lib/output/MultipleOutputs.html].
 

Using [Crunch|http://crunch.apache.org] would make this trivial, but that's 
probably way too big a dependency to pull in for something like this.

I would _love_ to work on this, but I don't feel I can put in enough time to 
get it done right now. Of course I'd be happy to collaborate (at least in 
terms of reviewing/discussing), but I don't think I can put together a working 
patch for this in the really short term.

> Support single mapper pass to CSV bulk load table and indexes
> -
>
> Key: PHOENIX-2216
> URL: https://issues.apache.org/jira/browse/PHOENIX-2216
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>
> Instead of running separate MR jobs for CSV bulk load (one for the table and 
> then one for each secondary index), generate both the data table HFiles and 
> the index table HFiles in a single mapper phase.
> Not sure if we need HBASE-3727 to be implemented for this, or if we can do it 
> with existing HBase APIs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-953) Support UNNEST for ARRAY

2015-08-31 Thread Maryann Xue (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14723756#comment-14723756
 ] 

Maryann Xue edited comment on PHOENIX-953 at 8/31/15 5:40 PM:
--

Found that Teradata has an interesting extension to UNNEST: the key_expr 
(http://www.info.teradata.com/HTMLPubs/DB_TTU_15_00/index.html#page/SQL_Reference/B035_1145_015K/ARRAY_Functions.063.036.html).
 I guess it's useful for joining the UNNEST result to its original table, to 
achieve the same goal as running this query:
SELECT s.student_id, t.score
FROM score_table AS s,
     UNNEST((SELECT scores FROM score_table AS s2 WHERE s2.student_id = 
s.student_id)) AS t(score)

Without this extension, the above query could only be run as a nested-loop 
join, which is very inefficient in Phoenix.
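
Roughly, a key_expr-style UNNEST would let each element row carry the key of 
its source row, something like the following sketch (syntax approximated from 
the Teradata docs linked above; this is not valid Phoenix SQL today):

{code}
-- Hypothetical key_expr-style UNNEST: each score row carries the key of the
-- row its array came from, so it can be joined back without a nested loop
SELECT t.student_id, t.score
FROM UNNEST((SELECT student_id, scores FROM score_table)) AS t(student_id, score);
{code}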


was (Author: maryannxue):
Found that Teradata has an interesting extension to UNNEST: the key_expr. I 
guess it's useful for joining the UNNEST result to its original table, to 
achieve the same goal as running this query:
SELECT s.student_id, t.score
FROM score_table AS s,
     UNNEST((SELECT scores FROM score_table AS s2 WHERE s2.student_id = 
s.student_id)) AS t(score)

Without this extension, the above query could only be run as a nested-loop 
join, which is very inefficient in Phoenix.

> Support UNNEST for ARRAY
> 
>
> Key: PHOENIX-953
> URL: https://issues.apache.org/jira/browse/PHOENIX-953
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Dumindu Buddhika
> Fix For: 4.6
>
> Attachments: PHOENIX-953-v1.patch, PHOENIX-953-v2.patch, 
> PHOENIX-953-v3.patch, PHOENIX-953-v4.patch
>
>
> The UNNEST built-in function converts an array into a set of rows. This is 
> more than a built-in function, so it should be considered an advanced project.
> For an example, see the following Postgres documentation: 
> http://www.postgresql.org/docs/8.4/static/functions-array.html
> http://www.anicehumble.com/2011/07/postgresql-unnest-function-do-many.html
> http://tech.valgog.com/2010/05/merging-and-manipulating-arrays-in.html
> UNNEST is thus a way of converting an array to a flattened "table" which can 
> then be filtered on, ordered, grouped, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-1812) Only sync table metadata when necessary

2015-08-31 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva resolved PHOENIX-1812.
-
Resolution: Fixed

> Only sync table metadata when necessary
> ---
>
> Key: PHOENIX-1812
> URL: https://issues.apache.org/jira/browse/PHOENIX-1812
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Thomas D'Silva
> Attachments: PHOENIX-1812-v2.patch, PHOENIX-1812-v3.patch, 
> PHOENIX-1812-v4-WIP.patch, PHOENIX-1812-v5.patch, PHOENIX-1812-v6.patch, 
> PHOENIX-1812.patch, PHOENIX-1812.patch, PHOENIX-1812.patch
>
>
> With transactions, we hold the timestamp at the point when the transaction 
> was opened. We can skip the MetaDataEndpoint getTable RPC in 
> MetaDataClient.updateCache(), which checks that the client has the latest 
> table, if we've already checked at the current transaction ID timestamp. We 
> can keep track of which tables we've already updated in PhoenixConnection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-953) Support UNNEST for ARRAY

2015-08-31 Thread Maryann Xue (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14723756#comment-14723756
 ] 

Maryann Xue commented on PHOENIX-953:
-

Found that Teradata has an interesting extension to UNNEST: the key_expr. I 
guess it's useful for joining the UNNEST result to its original table, to 
achieve the same goal as running this query:
SELECT s.student_id, t.score
FROM score_table AS s,
     UNNEST((SELECT scores FROM score_table AS s2 WHERE s2.student_id = 
s.student_id)) AS t(score)

Without this extension, the above query could only be run as a nested-loop 
join, which is very inefficient in Phoenix.

> Support UNNEST for ARRAY
> 
>
> Key: PHOENIX-953
> URL: https://issues.apache.org/jira/browse/PHOENIX-953
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Dumindu Buddhika
> Fix For: 4.6
>
> Attachments: PHOENIX-953-v1.patch, PHOENIX-953-v2.patch, 
> PHOENIX-953-v3.patch, PHOENIX-953-v4.patch
>
>
> The UNNEST built-in function converts an array into a set of rows. This is 
> more than a built-in function, so it should be considered an advanced project.
> For an example, see the following Postgres documentation: 
> http://www.postgresql.org/docs/8.4/static/functions-array.html
> http://www.anicehumble.com/2011/07/postgresql-unnest-function-do-many.html
> http://tech.valgog.com/2010/05/merging-and-manipulating-arrays-in.html
> UNNEST is thus a way of converting an array to a flattened "table" which can 
> then be filtered on, ordered, grouped, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2213) ARRAY_LENGTH fails for Decimal type array

2015-08-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14723299#comment-14723299
 ] 

Hudson commented on PHOENIX-2213:
-

SUCCESS: Integrated in Phoenix-master #885 (See 
[https://builds.apache.org/job/Phoenix-master/885/])
PHOENIX-2213 ARRAY_LENGTH fails for Decimal type array (Dumindu Buddhika) 
(ramkrishna: rev f15c4dc3361c1673129042bd3be3bdd6484a4c08)
* 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/ArrayLengthFunction.java


> ARRAY_LENGTH fails for Decimal type array
> -
>
> Key: PHOENIX-2213
> URL: https://issues.apache.org/jira/browse/PHOENIX-2213
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0
> Environment: HortonWorks HDP 2.3_1 using 
> phoenix-4.4.0.2.3.0.0-2557-client.jar running under VirtualBox 4.3.30 r101601 
> on Mac OS X 10.10.5.
>Reporter: Ken Taylor
>Assignee: Dumindu Buddhika
> Fix For: 4.4.0, 4.5.0
>
> Attachments: PHOENIX-2213.patch
>
>
> ARRAY_LENGTH generates a type mismatch error if the array is of DECIMAL type.
> Sample code:
> !outputformat vertical
> DROP TABLE IF EXISTS mytable;
> CREATE TABLE IF NOT EXISTS mytable
> (id UNSIGNED_INT NOT NULL,
> b.myarray1 INTEGER ARRAY[10],
> c.myarray2 VARCHAR ARRAY[20],
> d.myarray3 DECIMAL(4,2) ARRAY[30],
> CONSTRAINT pk PRIMARY KEY (id));
> UPSERT INTO mytable(id, myarray1, myarray2, myarray3)
> VALUES(1,ARRAY[1],ARRAY['a','b'],ARRAY[1.11,2.22,3.33]);
> select * from mytable;
> SELECT ARRAY_LENGTH(myarray1) from mytable;
> SELECT ARRAY_LENGTH(myarray2) from mytable;
> SELECT ARRAY_LENGTH(myarray3) from mytable;
> Produces the following output when run under SQLline:
> 0: jdbc:phoenix:localhost:2181:/hbase> !outputformat vertical
> 0: jdbc:phoenix:localhost:2181:/hbase> DROP TABLE IF EXISTS mytable;
> No rows affected (3.876 seconds)
> 0: jdbc:phoenix:localhost:2181:/hbase> CREATE TABLE IF NOT EXISTS mytable
> . . . . . . . . . . . . . . . . . . .> (id UNSIGNED_INT NOT NULL,
> . . . . . . . . . . . . . . . . . . .> b.myarray1 INTEGER ARRAY[10],
> . . . . . . . . . . . . . . . . . . .> c.myarray2 VARCHAR ARRAY[20],
> . . . . . . . . . . . . . . . . . . .> d.myarray3 DECIMAL(4,2) ARRAY[30],
> . . . . . . . . . . . . . . . . . . .> CONSTRAINT pk PRIMARY KEY (id));
> No rows affected (1.387 seconds)
> 0: jdbc:phoenix:localhost:2181:/hbase> UPSERT INTO mytable(id, myarray1, 
> myarray2, myarray3)
> . . . . . . . . . . . . . . . . . . .> 
> VALUES(1,ARRAY[1],ARRAY['a','b'],ARRAY[1.11,2.22,3.33]);
> 1 row affected (0.027 seconds)
> 0: jdbc:phoenix:localhost:2181:/hbase> select * from mytable;
> ID        1
> MYARRAY1  [1]
> MYARRAY2  ['a', 'b']
> MYARRAY3  [1.11, 2.22, 3.33]
> 1 row selected (0.017 seconds)
> 0: jdbc:phoenix:localhost:2181:/hbase> SELECT ARRAY_LENGTH(myarray1) from 
> mytable;
> ARRAY_LENGTH(B.MYARRAY1)  1
> 1 row selected (0.015 seconds)
> 0: jdbc:phoenix:localhost:2181:/hbase> SELECT ARRAY_LENGTH(myarray2) from 
> mytable;
> ARRAY_LENGTH(C.MYARRAY2)  2
> 1 row selected (0.015 seconds)
> 0: jdbc:phoenix:localhost:2181:/hbase> SELECT ARRAY_LENGTH(myarray3) from 
> mytable;
> Error: ERROR 203 (22005): Type mismatch. expected: [BINARY ARRAY, VARBINARY] 
> but was: DECIMAL ARRAY at ARRAY_LENGTH argument 1 (state=22005,code=203)
> org.apache.phoenix.schema.ArgumentTypeMismatchException: ERROR 203 (22005): 
> Type mismatch. expected: [BINARY ARRAY, VARBINARY] but was: DECIMAL ARRAY at 
> ARRAY_LENGTH argument 1
>   at 
> org.apache.phoenix.parse.FunctionParseNode.validate(FunctionParseNode.java:200)
>   at 
> org.apache.phoenix.compile.ExpressionCompiler.visitLeave(ExpressionCompiler.java:325)
>   at 
> org.apache.phoenix.compile.ProjectionCompiler$SelectClauseVisitor.visitLeave(ProjectionCompiler.java:637)
>   at 
> org.apache.phoenix.compile.ProjectionCompiler$SelectClauseVisitor.visitLeave(ProjectionCompiler.java:538)
>   at 
> org.apache.phoenix.parse.FunctionParseNode.accept(FunctionParseNode.java:87)
>   at 
> org.apache.phoenix.compile.ProjectionCompiler.compile(ProjectionCompiler.java:396)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:542)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:493)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:205)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:162)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:364)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:338)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:246)
>   at 
> org.apache.phoen

Re: Thank you

2015-08-31 Thread Ayola Jayamaha
I also plan to do a screencast on the features of the Web App:
1. How to start the web app
2. The new features added to Phoenix

The PRs can be found here[1,2].

[1] https://github.com/apache/phoenix/pull/111
[2] https://github.com/apache/phoenix/pull/112

On Mon, Aug 31, 2015 at 11:03 AM, Ayola Jayamaha 
wrote:

> Hi All,
>
> I successfully completed GSoC 2015 (JIRA PHOENIX-1118). Thanks to all
> those who helped. Let's get it committed.
>
> Hope it will help the users and make their tasks easy.
>
> --
> Best Regards,
> Nishani Jayamaha
> http://ayolajayamaha.blogspot.com/
>
>
>


-- 
Best Regards,
Nishani Jayamaha
http://ayolajayamaha.blogspot.com/


[jira] [Resolved] (PHOENIX-2213) ARRAY_LENGTH fails for Decimal type array

2015-08-31 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan resolved PHOENIX-2213.
-
       Resolution: Fixed
    Fix Version/s: 4.5.0
                   4.4.0

Pushed to the 4.x, 4.5, and master branches. Thanks for the patch, Dumindu.

> ARRAY_LENGTH fails for Decimal type array
> -
>
> Key: PHOENIX-2213
> URL: https://issues.apache.org/jira/browse/PHOENIX-2213
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0
> Environment: HortonWorks HDP 2.3_1 using 
> phoenix-4.4.0.2.3.0.0-2557-client.jar running under VirtualBox 4.3.30 r101601 
> on Mac OS X 10.10.5.
>Reporter: Ken Taylor
>Assignee: Dumindu Buddhika
> Fix For: 4.4.0, 4.5.0
>
> Attachments: PHOENIX-2213.patch
>
>
> ARRAY_LENGTH generates a type mismatch error if the array is of DECIMAL type.
> Sample code:
> !outputformat vertical
> DROP TABLE IF EXISTS mytable;
> CREATE TABLE IF NOT EXISTS mytable
> (id UNSIGNED_INT NOT NULL,
> b.myarray1 INTEGER ARRAY[10],
> c.myarray2 VARCHAR ARRAY[20],
> d.myarray3 DECIMAL(4,2) ARRAY[30],
> CONSTRAINT pk PRIMARY KEY (id));
> UPSERT INTO mytable(id, myarray1, myarray2, myarray3)
> VALUES(1,ARRAY[1],ARRAY['a','b'],ARRAY[1.11,2.22,3.33]);
> select * from mytable;
> SELECT ARRAY_LENGTH(myarray1) from mytable;
> SELECT ARRAY_LENGTH(myarray2) from mytable;
> SELECT ARRAY_LENGTH(myarray3) from mytable;
> Produces the following output when run under SQLline:
> 0: jdbc:phoenix:localhost:2181:/hbase> !outputformat vertical
> 0: jdbc:phoenix:localhost:2181:/hbase> DROP TABLE IF EXISTS mytable;
> No rows affected (3.876 seconds)
> 0: jdbc:phoenix:localhost:2181:/hbase> CREATE TABLE IF NOT EXISTS mytable
> . . . . . . . . . . . . . . . . . . .> (id UNSIGNED_INT NOT NULL,
> . . . . . . . . . . . . . . . . . . .> b.myarray1 INTEGER ARRAY[10],
> . . . . . . . . . . . . . . . . . . .> c.myarray2 VARCHAR ARRAY[20],
> . . . . . . . . . . . . . . . . . . .> d.myarray3 DECIMAL(4,2) ARRAY[30],
> . . . . . . . . . . . . . . . . . . .> CONSTRAINT pk PRIMARY KEY (id));
> No rows affected (1.387 seconds)
> 0: jdbc:phoenix:localhost:2181:/hbase> UPSERT INTO mytable(id, myarray1, 
> myarray2, myarray3)
> . . . . . . . . . . . . . . . . . . .> 
> VALUES(1,ARRAY[1],ARRAY['a','b'],ARRAY[1.11,2.22,3.33]);
> 1 row affected (0.027 seconds)
> 0: jdbc:phoenix:localhost:2181:/hbase> select * from mytable;
> ID        1
> MYARRAY1  [1]
> MYARRAY2  ['a', 'b']
> MYARRAY3  [1.11, 2.22, 3.33]
> 1 row selected (0.017 seconds)
> 0: jdbc:phoenix:localhost:2181:/hbase> SELECT ARRAY_LENGTH(myarray1) from 
> mytable;
> ARRAY_LENGTH(B.MYARRAY1)  1
> 1 row selected (0.015 seconds)
> 0: jdbc:phoenix:localhost:2181:/hbase> SELECT ARRAY_LENGTH(myarray2) from 
> mytable;
> ARRAY_LENGTH(C.MYARRAY2)  2
> 1 row selected (0.015 seconds)
> 0: jdbc:phoenix:localhost:2181:/hbase> SELECT ARRAY_LENGTH(myarray3) from 
> mytable;
> Error: ERROR 203 (22005): Type mismatch. expected: [BINARY ARRAY, VARBINARY] 
> but was: DECIMAL ARRAY at ARRAY_LENGTH argument 1 (state=22005,code=203)
> org.apache.phoenix.schema.ArgumentTypeMismatchException: ERROR 203 (22005): 
> Type mismatch. expected: [BINARY ARRAY, VARBINARY] but was: DECIMAL ARRAY at 
> ARRAY_LENGTH argument 1
>   at 
> org.apache.phoenix.parse.FunctionParseNode.validate(FunctionParseNode.java:200)
>   at 
> org.apache.phoenix.compile.ExpressionCompiler.visitLeave(ExpressionCompiler.java:325)
>   at 
> org.apache.phoenix.compile.ProjectionCompiler$SelectClauseVisitor.visitLeave(ProjectionCompiler.java:637)
>   at 
> org.apache.phoenix.compile.ProjectionCompiler$SelectClauseVisitor.visitLeave(ProjectionCompiler.java:538)
>   at 
> org.apache.phoenix.parse.FunctionParseNode.accept(FunctionParseNode.java:87)
>   at 
> org.apache.phoenix.compile.ProjectionCompiler.compile(ProjectionCompiler.java:396)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:542)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:493)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:205)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:162)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:364)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:338)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:246)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:241)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixState

[jira] [Resolved] (PHOENIX-1977) Always getting No FileSystem for scheme: hdfs when exported HBASE_CONF_PATH

2015-08-31 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla resolved PHOENIX-1977.
--
Resolution: Duplicate

Already fixed as part of another issue; hence closing as a duplicate.

> Always getting No FileSystem for scheme: hdfs when exported HBASE_CONF_PATH
> ---
>
> Key: PHOENIX-1977
> URL: https://issues.apache.org/jira/browse/PHOENIX-1977
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
> Attachments: PHOENIX-1977.patch
>
>
> Always getting this exception when HBASE_CONF_PATH is exported with the 
> configuration directory. Connection creation always expects the hadoop-hdfs 
> jar to be present in that case. I think we can check for and load the hdfs 
> jar from any of HADOOP_HOME, HBASE_HOME/lib, or the current directory.
> For UDFs, the hadoop-common and hadoop-hdfs jars are compulsory.
> {code}
> java.io.IOException: No FileSystem for scheme: hdfs
>   at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2579)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2586)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2625)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2607)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>   at 
> org.apache.hadoop.hbase.util.DynamicClassLoader.(DynamicClassLoader.java:104)
>   at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.(ProtobufUtil.java:229)
>   at org.apache.hadoop.hbase.ClusterId.parseFrom(ClusterId.java:64)
>   at 
> org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:75)
>   at 
> org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:86)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:833)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.(ConnectionManager.java:623)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>   at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:410)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager.createConnectionInternal(ConnectionManager.java:319)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:144)
>   at 
> org.apache.phoenix.query.HConnectionFactory$HConnectionFactoryImpl.createConnection(HConnectionFactory.java:47)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.openConnection(ConnectionQueryServicesImpl.java:286)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.access$300(ConnectionQueryServicesImpl.java:171)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1881)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1860)
>   at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:77)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:1860)
>   at 
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:162)
>   at 
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.connect(PhoenixEmbeddedDriver.java:131)
>   at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:133)
>   at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
>   at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
>   at sqlline.Commands.connect(Commands.java:1064)
>   at sqlline.Commands.connect(Commands.java:996)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> sqlline.ReflectiveCommandHandler.

[jira] [Updated] (PHOENIX-2022) BaseRegionScanner.next should be abstract

2015-08-31 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-2022:
-
Fix Version/s: 4.4.1

> BaseRegionScanner.next should be abstract
> -
>
> Key: PHOENIX-2022
> URL: https://issues.apache.org/jira/browse/PHOENIX-2022
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Gabriel Reid
>Assignee: Gabriel Reid
> Fix For: 4.5.0, 4.4.1
>
> Attachments: PHOENIX-2022.patch
>
>
> As pointed out by Yuhao Bi in a [dev list mail|http://s.apache.org/y6b], 
> BaseRegionScanner implements next as a recursive call to itself. 
> All current subclasses of BaseRegionScanner override next, but as soon as 
> there is an implementation that doesn't, it will end up with a stack 
> overflow.
> The easy fix is to make this method abstract instead of having a default 
> recursive implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1980) CsvBulkLoad cannot load hbase-site.xml from classpath

2015-08-31 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-1980:
-
Fix Version/s: 4.4.1

> CsvBulkLoad cannot load hbase-site.xml from classpath
> -
>
> Key: PHOENIX-1980
> URL: https://issues.apache.org/jira/browse/PHOENIX-1980
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Sergio Peleato
>Assignee: Nick Dimiduk
> Fix For: 4.5.0, 4.4.1
>
> Attachments: 1980.patch, PHOENIX-1980.addendum.00.patch
>
>
> When I launch a job on a distributed cluster where hbase-site.xml is provided 
> in {{HADOOP_CLASSPATH}} instead of providing --zookeeper, I see errors showing 
> "localhost:2181/hbase" as the target connection, where I would expect 
> ":2181/hbase-unsecure".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)