)
at sqlline.SqlLine.main(SqlLine.java:424)
When I drop and recreate the view, it works fine. Did anyone face a similar
issue?
Thanks,
Siva.
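For anyone hitting the same thing, the workaround described above amounts to recreating the view. A minimal sketch of that, with placeholder names (the actual view and base table aren't shown in this thread):

```sql
-- Recreate the view; names below are placeholders, not the ones from the thread
DROP VIEW IF EXISTS "my_view";
CREATE VIEW "my_view" AS SELECT * FROM "my_table";
```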
. 2GB of heap space on
master. No activity was going on in the cluster when I was running the queries.
Do you recommend any parameters to tune memory and GC for Phoenix
and HBase?
Thanks,
Siva.
On Mon, Jun 1, 2015 at 1:14 PM, Vladimir Rodionov
wrote:
> >> Is IO exception is b
boundnumbercalllog" where "dbname" !='lmguaranteedrate';
+-----------+
| COUNT(1)  |
+-----------+
| 0         |
+-----------+
Thanks,
Siva.
On Tue, Jun 2, 2015 at 11:35 AM,
+-----------+
| 13480     |
+-----------+
1 row selected (72.36 seconds)
Did anyone face a similar issue? Is the IO exception because Phoenix was
not able to read from multiple regions, given that the error was resolved after
the compaction? Or any other thoughts?
Thanks,
Siva.
. . . . . . . . . . . . .> and "dbname" ='lmguaranteedrate'
. . . . . . . . . . . . . . . . . . . . .> and rowkey like
'lmguaranteedrate%'
. . . . . . . . . . . . . . . . . . . . .> ) ldll
. . . . . . . . . . . . . . . . . . . . .> left outer join
"inboundnumbercalllog" cl on (ldll.callsid = cl."callsid" and cl."dbname"
='lmguaranteedrate' );
+-----------+
| COUNT(1)  |
+-----------+
| 426461    |
+-----------+
1 row selected (27.205 seconds)
The expected result is 426461.
Thanks,
Siva.
Thanks a lot Sun, it resolved the issue.
Thanks,
Siva.
On Sun, May 3, 2015 at 7:20 PM, Fulin Sun wrote:
> Hi, Siva
>
> Generally, the problem is thrown when your Spark driver classpath does
> not include the relevant hbase-protocol.jar
>
> Under this condition,
Any help on TO_DATE function?
Thanks
On Fri, May 1, 2015 at 2:49 AM, Siva wrote:
> Hi,
>
> Phoenix TO_DATE is truncating the time portion while converting
> the date. Do I need to change the syntax? As per the documentation, the
> syntax seems to be correct.
>
> 0:
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Thanks in advance.
Thanks,
Siva.
08:42:31.963 | 2014-04-29 |
+--++
5 rows selected (0.056 seconds)
Thanks,
Siva.
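On the TO_DATE question above: TO_DATE accepts an optional second argument giving the parse pattern, and the default pattern (controlled by phoenix.query.dateFormat) may not include a time portion on some versions. A sketch of passing an explicit pattern that keeps the time, assuming input strings like the '08:42:31.963 | 2014-04-29' values shown in the output:

```sql
-- Hedged sketch: supply an explicit SimpleDateFormat-style pattern so the
-- time portion is parsed rather than truncated
SELECT TO_DATE('2014-04-29 08:42:31', 'yyyy-MM-dd HH:mm:ss');
```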
org.apache.phoenix.mapreduce.CsvBulkLoadTool --table P_TEST_2_COLS --input
/user/sbhavanari/p_h_test_2_cols_less.csv --import-columns NAME,LEADID,D
--zookeeper 172.31.45.176:2181:/hbase
Thanks,
Siva.
varchar);
When I queried the table from Phoenix, the data is shown as NULL for the
column address.
0: jdbc:phoenix:172.31.45.176:2181:/hbase> select * from "tab_2_cf";
+------+-------+-------------+
|  PK  | name  | *address*   |
+------+-------+-------------+
| r1   | asdf  | null        |
+------+-------+-------------+
Any help on this?
Thanks,
Siva.
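One common cause of the NULL above (an assumption, since the full DDL isn't shown in this excerpt): Phoenix upper-cases unquoted identifiers, so an unquoted column address maps to the HBase qualifier ADDRESS, and data written through the HBase API under the lowercase qualifier address is invisible to Phoenix. Quoting the column name in the DDL makes the qualifiers match:

```sql
-- Quoted identifiers preserve case, so the Phoenix column maps to the exact
-- HBase qualifier "address" (column list here is a sketch of the likely DDL)
CREATE TABLE "tab_2_cf" (
    "PK"      VARCHAR PRIMARY KEY,
    "name"    VARCHAR,
    "address" VARCHAR
);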
phoenix table
on HDP 2.2, but it was showing up in Cloudera 5.2. However, the index was not
updated when data was inserted through HBase in both cases.
Thanks,
Chandu
---
I appreciate your response.
Thanks,
Siva.
On Thu, Feb 5, 2015 at 1:15 PM, Alicia Sh
We have a table that contains a NOTE column; this column contains lines of
text separated by newlines. When I export the data from .csv through the
bulk loader, Phoenix fails with an error, and HBase truncates the text when
it encounters a newline, treating the rest of NOTE as a new record.
Is there a way to
as
> those types when you load the data from Hbase.
>
> Alicia
>
> On 2/4/15, 1:23 PM, "Siva" wrote:
>
> >e the data from Hbase after
> >data load when I query it,
>
>
values.
I understand that HBase stores the data in byte format. Since I created the
table in Phoenix and loaded it through HBase, how does Phoenix interpret
the data types? Could someone throw some light on what's happening behind
the scenes?
Thanks,
Siva.
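On the byte-format question: Phoenix does not store type information in HBase cells; it serializes each SQL type to a fixed byte encoding and deserializes on read according to the column's declared type in the Phoenix catalog. VARCHAR is plain UTF-8, which is why strings written through the HBase shell still read back correctly, while numeric types use Phoenix-specific encodings and will look wrong if written as raw ASCII bytes. A sketch with placeholder names:

```sql
-- "s" (VARCHAR) round-trips with HBase shell puts, because both sides are
-- UTF-8 bytes; "n" (INTEGER) does not, because Phoenix expects its own
-- fixed-width binary encoding rather than ASCII digits
CREATE TABLE "t" (
    "PK" VARCHAR PRIMARY KEY,
    "s"  VARCHAR,
    "n"  INTEGER
);
```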
Hi Everyone,
Encountered the below error while bulk loading the data. Can you let me know
what the format for the date type is?
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.RuntimeException: Error on record,
java.text.ParseException: Unparseable date: "2
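Regarding the Unparseable date error: in the Phoenix versions I've used, the CSV loader parses DATE values with a default pattern along the lines of yyyy-MM-dd HH:mm:ss, configurable via the phoenix.query.dateFormat property; a value in any other shape fails exactly like this. A sketch of a matching table and value (names are placeholders):

```sql
-- Hypothetical table; with default settings the DATE column expects CSV
-- values shaped like 'yyyy-MM-dd HH:mm:ss'
CREATE TABLE IF NOT EXISTS EVENTS (
    ID      BIGINT NOT NULL PRIMARY KEY,
    CREATED DATE
);
-- A CSV row the loader would accept:  1,2015-06-01 13:14:00
UPSERT INTO EVENTS VALUES (1, TO_DATE('2015-06-01 13:14:00'));
```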