Sorry, I meant to say table names are case sensitive.
On Sun, Oct 23, 2016 at 9:06 AM, Ravi Kiran <maghamraviki...@gmail.com>
wrote:
> Hi Mich,
> Apparently, the tables are case sensitive. Since you enclosed the table
> name in double quotes when creating it, please pass the same quoted name
> when running the bulk load job.
Hi Mich,
Apparently, the tables are case sensitive. Since you enclosed the table
name in double quotes when creating it, please pass the same quoted name
when running the bulk load job.
HADOOP_CLASSPATH=/home/hduser/jars/hbase-protocol-1.2.3.jar:/usr/lib/hbase/conf
hadoop jar phoenix-4.8.1-HBase-1.2-client.jar
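For illustration, a minimal sketch of invoking the bulk load tool
programmatically with the quoted name preserved (the table name, input
path, and quorum below are hypothetical):

import org.apache.hadoop.util.ToolRunner;
import org.apache.phoenix.mapreduce.CsvBulkLoadTool;

public class QuotedTableBulkLoad {
    public static void main(String[] args) throws Exception {
        // Keep the double quotes around the table name so Phoenix matches
        // the case-sensitive name it was created with.
        int exit = ToolRunner.run(new CsvBulkLoadTool(), new String[] {
            "--table", "\"MyTable\"",       // hypothetical quoted name
            "--input", "/tmp/data.csv",     // hypothetical HDFS input
            "--zookeeper", "localhost:2181" // hypothetical quorum
        });
        System.exit(exit);
    }
}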
>>>
>>> On Sat, Feb 13, 2016 at 10:01 AM, James Taylor <jamestay...@apache.org>
>>> wrote:
>>>
>>>> I think the question Anil is asking is "Does Pig have support for
>>>> TinyInt (byte) and SmallInt (short)?" I do
Hi,
Unfortunately, we don't support dynamic columns within the phoenix-pig
module. Currently, the only two options for PhoenixHBaseStorage are
specifying the full table or a set of table columns. We can definitely
support dynamic columns; please feel free to create a ticket.
Regards
Ravi
Hi Anil,
We map PTinyInt and PSmallInt to Pig DataType.INTEGER:
https://github.com/apache/phoenix/blob/master/phoenix-pig/src/main/java/org/apache/phoenix/pig/util/TypeUtil.java#L94
Can you please share the error you are seeing?
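For illustration only, a simplified sketch of the shape of that mapping
(not the actual TypeUtil code; see the link above for the real table):

import org.apache.pig.data.DataType;

public class PhoenixToPigTypes {
    // Phoenix's narrower integral types all widen to Pig's 4-byte integer.
    public static byte toPigType(String phoenixTypeName) {
        switch (phoenixTypeName) {
            case "TINYINT":  // PTinyInt
            case "SMALLINT": // PSmallInt
            case "INTEGER":  // PInteger
                return DataType.INTEGER;
            default:
                return DataType.BYTEARRAY;
        }
    }
}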
HTH
Ravi.
On Sat, Feb 13, 2016 at 3:16 AM,
Hi Pierre,
Try your luck building the artifacts from
https://github.com/chiastic-security/phoenix-for-cloudera. Hopefully it
helps.
Regards
Ravi
On Tue, Feb 9, 2016 at 10:04 AM, Benjamin Kim wrote:
> Hi Pierre,
>
> I found this article about how Cloudera’s version
Hi Manya,
We are working with the Sqoop team to land our patch [1], which enables
importing data directly into Phoenix tables. In the meantime, you can
apply the patch to the Sqoop 1.4.6 source and give it a try.
Please do let us know how it goes.
[1] https://issues.apache.org/jira/browse/SQOOP-2649
Hi Zack,
The limitation of 32 HFiles comes from the configuration property
MAX_FILES_PER_REGION_PER_FAMILY, which defaults to 32 in
LoadIncrementalHFiles. You can try updating your configuration with a
larger value and see if it works.
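For example, a minimal sketch of raising the cap programmatically. I am
assuming the property key backing that constant is
hbase.mapreduce.bulkload.max.hfiles.perRegion.perFamily (true in the HBase
versions I have checked); setting the same key in hbase-site.xml works too:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RaiseHFileCap {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // 32 is the LoadIncrementalHFiles default; 128 here is arbitrary.
        conf.setInt(
            "hbase.mapreduce.bulkload.max.hfiles.perRegion.perFamily", 128);
    }
}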
Hi Rafa,
I will be working on this ticket
https://issues.apache.org/jira/browse/PHOENIX-2584. You can add yourself as
a watcher to the ticket to see the progress.
Regards
Ravi
On Wed, Dec 23, 2015 at 3:21 AM, rafa wrote:
> Hi all !!
>
> Just a quick question. I see in:
>
Hi Dmitry,
James has answered this a couple of times in earlier threads. I found
these useful. Hope they help!
https://groups.google.com/forum/#!topic/phoenix-hbase-user/lL-SVFeFpNg
https://groups.google.com/forum/#!topic/phoenix-hbase-user/U3hCUhRTZV8
Regards
Ravi
On Thu, Nov 5, 2015 at 2:21 PM,
It would be great if we could provide an API and have end users supply an
implementation for parsing each record. This way, we can move beyond bulk
loading only CSV and have JSON and other input formats bulk loaded into
Phoenix tables.
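Something along these lines, as a rough sketch (the interface name and
shape are purely hypothetical; no such API exists yet):

import java.util.List;

/** Hypothetical hook: turn one raw input record into column values. */
public interface RecordParser {
    List<Object> parse(String record) throws Exception;
}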
I can take that one up. Would it be something the
Hi Sumit,
PhoenixInputFormat derives the number of splits from the region
boundaries. However, if guideposts are configured
(https://phoenix.apache.org/update_statistics.html), you might not see a
1-to-1 mapping. @James, please correct me if I am wrong here.
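For reference, a minimal JDBC sketch (hypothetical table name and quorum)
of collecting stats so that guideposts exist, per the link above:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CollectStats {
    public static void main(String[] args) throws Exception {
        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            // Populates the guideposts PhoenixInputFormat can split on.
            stmt.execute("UPDATE STATISTICS MY_TABLE");
        }
    }
}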
You are right on the salting
Hi,
Since you have just reset HBase, see if the table name 'SYSTEM.CATALOG'
exists as a znode under the ZooKeeper path /hbase/tables/. If so, you can
remove it with rmr from the ZooKeeper shell.
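If you would rather script it, a minimal sketch with the plain ZooKeeper
Java client (leaf znodes only; the quorum and path are assumptions):

import org.apache.zookeeper.ZooKeeper;

public class DropCatalogZnode {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> { });
        String path = "/hbase/tables/SYSTEM.CATALOG";
        if (zk.exists(path, false) != null) {
            zk.delete(path, -1); // -1 matches any node version
        }
        zk.close();
    }
}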
Hope it helps.
Regards
On Mon, Sep 21, 2015 at 8:48 AM, Konstantinos Kougios <
kostas.koug...@googlemail.com>
s supported via the MultiTableOutputFormat class. This
> could be used as inspiration for Phoenix's implementation.
>
> --Asher
>
> On Wed, Sep 16, 2015 at 1:00 AM, Ns G <nsgns...@gmail.com> wrote:
>
>> Hi Ravi,
>>
>> Raised Phoenix-2266 JIRA for the same.
>>
>> Than
ing this?
>>
>> Thanks,
>> Durga Prasad
>>
>> On Mon, Aug 10, 2015 at 6:18 AM, Ravi Kiran <maghamraviki...@gmail.com>
>> wrote:
>>
>>> Hi Peeranat,
>>>
>>> With the current implementation, there isn't an option to write to
>>> multiple Phoenix tables.
Hi Satish,
This was reported and fixed as part of
https://issues.apache.org/jira/browse/PHOENIX-2181. For a quick turnaround,
you can do this.
STORE c INTO 'hbase://checks/enterprise_id,business_date' USING
org.apache.phoenix.pig.PhoenixHBaseStorage('zkquorum', '-batchSize 5000');
Hi Satya,
Unless you call NEXT VALUE FOR on a sequence *the first time*, you
wouldn't be getting the CURRENT VALUE. So, instead of getting the CURRENT
VALUE, you can get the NEXT VALUE and then pass it to your upsert query.
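For example, a minimal JDBC sketch (the quorum, sequence, and table names
are hypothetical):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class SequenceUpsert {
    public static void main(String[] args) throws Exception {
        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:localhost")) {
            long id;
            // Allocate the next value first; CURRENT VALUE only works after
            // NEXT VALUE FOR has been called on this connection.
            try (Statement stmt = conn.createStatement();
                 ResultSet rs =
                     stmt.executeQuery("SELECT NEXT VALUE FOR my_seq")) {
                rs.next();
                id = rs.getLong(1);
            }
            try (PreparedStatement ps = conn.prepareStatement(
                     "UPSERT INTO my_table (id, name) VALUES (?, ?)")) {
                ps.setLong(1, id);
                ps.setString(2, "example");
                ps.executeUpdate();
            }
            conn.commit();
        }
    }
}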
Hope this helps
On Sat, Aug 22, 2015 at 7:59 AM, Ns G
Hi Pari,
I wrote a quick test and there indeed seems to be an issue when
SALT_BUCKETS is specified on the table. Can you please raise a JIRA
ticket? In the meanwhile, can you try the following to work around the
issue:
raw = LOAD 'hbase://query/SELECT CLIENTID,EMPID,NAME FROM HIRES' USING
Hi Peeranat,
With the current implementation, there isn't an option to write to
multiple Phoenix tables.
Thanks
Ravi
On Fri, Aug 7, 2015 at 12:39 PM, Peeranat Fupongsiripan
peerana...@gmail.com wrote:
Hi
I'm new to Phoenix. I'm wondering whether it's possible to use one
map/reduce
Hi,
Since you are talking about billions of rows, why don't you try the
MapReduce route to speed up the process? You can take a look at how
IndexTool.java(
Hi Durga,
Can you share the errors you get when writing the LOAD the way I specified
above? Also, can you confirm whether the Phoenix table is ndm_17.table1
and not NDM_17.TABLE1?
Regards
Ravi
On Thu, Jul 9, 2015 at 10:09 AM, Ns G nsgns...@gmail.com wrote:
Hi Ravi Kiran,
Thanks for your response
libs
that need to be in Flume's classpath, along with the zookeeperQuorum
pointing to the appropriate cluster?
Thanks!
On Thu, Jun 11, 2015 at 12:58 PM, Ravi Kiran maghamraviki...@gmail.com
wrote:
Hi Buntu,
Apparently, the necessary classes related to Flume client are already
part
MR issue
To: Ravi Kiran maghamraviki...@gmail.com
Hi Ravi,
Thanks for taking the time. Below is my job setup code. I now use the
reducer's setup method to read the file.
I am giving only part of the code due to access restrictions:
final String selectQuery = "SELECT * FROM Table1";
for reading the data from an existing
table and no other process is using HBase so I think it's not my case.
Why wouldn't you just recreate a new scan if the old one dies?
Best,
Flavio
On Mon, Apr 13, 2015 at 6:35 PM, Ravi Kiran maghamraviki...@gmail.com
wrote:
Hi Flavio,
One good blog
Hi Flavio,
Currently, the default scanner caching value that Phoenix runs with is
1000. You can try reducing that number by updating the property
*hbase.client.scanner.caching* in your hbase-site.xml. If you are doing a
lot of processing for each record in your Mapper, you might
Hi Flavio,
If you are writing a MapReduce job, I would highly recommend using the
custom InputFormat classes that handle these cases:
http://phoenix.apache.org/phoenix_mr.html.
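A condensed sketch modeled on that page (MyWritable and the table/column
names are assumptions; mapper/reducer setup is elided):

import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;
import org.apache.phoenix.mapreduce.util.PhoenixMapReduceUtil;

public class PhoenixJobSetup {
    public static class MyWritable implements DBWritable {
        long id;
        String name;
        public void readFields(ResultSet rs) throws SQLException {
            id = rs.getLong("ID");
            name = rs.getString("NAME");
        }
        public void write(PreparedStatement ps) throws SQLException {
            ps.setLong(1, id);
            ps.setString(2, name);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(HBaseConfiguration.create(), "phoenix-mr");
        // One split per region (or per guidepost when stats are present).
        PhoenixMapReduceUtil.setInput(job, MyWritable.class, "MY_TABLE",
                "SELECT ID, NAME FROM MY_TABLE");
        PhoenixMapReduceUtil.setOutput(job, "MY_TARGET_TABLE", "ID,NAME");
        // ... set mapper/reducer classes as in the linked example ...
        job.waitForCompletion(true);
    }
}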
Regards
Ravi
On Wed, Apr 1, 2015 at 12:16 AM, Flavio Pompermaier pomperma...@okkam.it
wrote:
Any help here?
Hi Matt,
Your understanding is right. You don't need YARN services running as long
as you don't run any MapReduce jobs (CSV bulk loading, MapReduce,
phoenix-pig).
Regards
Ravi
On Mon, Mar 23, 2015 at 7:51 AM, Matthew Johnson matt.john...@algomi.com
wrote:
Hi all,
Currently when I
Hi William,
Phoenix upper-cases unquoted table names at creation time, so I believe
the table was created as PERSON, which doesn't match Person. Can you
please give it a try with the table name in upper case?
Regards
Ravi
On Sun, Mar 15, 2015 at 1:03 AM,
Hi Krishna,
I assume you have already taken a look at the example here
http://phoenix.apache.org/phoenix_mr.html
Is there a need to compute the hash byte in the MR job?
Can you please elaborate a bit more on what the hash byte is?
Are keys and values stored in a BytesWritable before doing a
Hi Geoffrey,
In the current implementation, we wouldn't be able to, as we take the
ZooKeeper quorum information from the hbase-site.xml on the classpath.
However, we can easily extend the current implementation to support this.
I have raised this
Subject: RE: Pig vs Bulk Load record count
Hello Ralph,
Check whether the Pig script produces keys that overlap (that would
explain the reduced number of rows).
Good luck,
Constantin
*From:* Ravi Kiran [mailto:maghamraviki
), but not a showstopper for the 4.3 release.
Would you mind filing a JIRA for it?
Thanks,
James
On Tue, Feb 3, 2015 at 4:31 PM, Ravi Kiran maghamraviki...@gmail.com
wrote:
Hi Ralph,
Glad it is working!!
Regards
Ravi
On Tue, Feb 3, 2015 at 3:29 PM, Perko, Ralph J ralph.pe...@pnnl.gov
wrote
Hi Ralph,
Also, can you please attach the schema to the JIRA (via DESCRIBE Z) for
the case where you don't explicitly specify the data type for the
columns?
Regards
Ravi
On Tue, Feb 3, 2015 at 4:47 PM, Ravi Kiran maghamraviki...@gmail.com
wrote:
Hi Ralph,
Also, can you please have
and there are
no duplicates
Thanks!
Ralph
__
*Ralph Perko*
Pacific Northwest National Laboratory
(509) 375-2272
ralph.pe...@pnnl.gov
From: Ravi Kiran maghamraviki...@gmail.com
Reply-To: user@phoenix.apache.org user@phoenix.apache.org
Date: Monday, February 2
email.
Thanks!
Ralph
--
*From:* Ravi Kiran [maghamraviki...@gmail.com]
*Sent:* Monday, February 02, 2015 5:03 PM
*To:* user@phoenix.apache.org
*Subject:* Re: Pig vs Bulk Load record count
Hi Ralph,
Is it possible to share the CREATE TABLE command as I
count(1) from TEST;
__
*Ralph Perko*
Pacific Northwest National Laboratory
(509) 375-2272
ralph.pe...@pnnl.gov
From: Ravi Kiran maghamraviki...@gmail.com
Reply-To: user@phoenix.apache.org user@phoenix.apache.org
Date: Monday, February 2
Hi Ralph,
That's definitely a cause for worry. Can you please share the UPSERT query
being built by Phoenix? You should see it in the logs with an entry
*Phoenix Generic Upsert Statement: ...*
Also, what do the MapReduce counters say for the job? If possible, can you
share the Pig script as
Hi,
You can read more about the MapReduce integration here:
http://phoenix.apache.org/phoenix_mr.html.
A quick, simple Spark program can be found at
https://gist.github.com/mravi/444afe7f49821819c987.
Regarding snapshot support, there is a JIRA
this
flume-ng agent -c conf -f /opt/flume/conf/apache.conf -n agent
-Dflume.root.logger=DEBUG,console
Thanks
Divya N
On Sat, Dec 20, 2014 at 2:14 AM, Ravi Kiran maghamraviki...@gmail.com
wrote:
Hi Divya,
Also, can you confirm if the regex given in the configuration matches
the access
taken to process [0] events was [3] seconds
14/12/19 12
Thanks
Divya N
On Fri, Dec 19, 2014 at 6:10 AM, Ravi Kiran maghamraviki...@gmail.com
wrote:
Hi Nagarajan,
Apparently, we do batches of 100 by default for each commit. You can
decrease that number if you would like to.
http
apache logs
https://github.com/apache/phoenix/blob/master/phoenix-flume/src/it/java/org/apache/phoenix/flume/RegexEventSerializerIT.java#testApacheLogRegex
which can help you with the regex
Happy to help!!
Regards
Ravi
On Fri, Dec 19, 2014 at 11:19 AM, Ravi Kiran maghamraviki...@gmail.com
wrote:
Hi
Hi Nagarajan,
Apparently, we do batches of 100 by default for each commit. You can
decrease that number if you would like to.
http://phoenix.apache.org/flume.html
Regards
Ravi
Hi,
We are working in Flume to store Apache logs into HBase/Phoenix
using the phoenix-flume jars. Using it.
At 2014-12-12 15:00:19, Ravi Kiran maghamraviki...@gmail.com wrote:
Hi
Apparently, the support for MR jobs was submitted a few days ago as part
of https://issues.apache.org/jira/browse/PHOENIX-1454. Would you be
willing to give it a stab by building the Phoenix artifacts from Git
Hi
Apparently, the support for MR jobs was submitted a few days ago as part of
https://issues.apache.org/jira/browse/PHOENIX-1454. Would you be willing
to give it a stab by building the Phoenix artifacts from Git yourself
(http://phoenix.apache.org/building.html), as the feature is not yet
__
*Ralph Perko*
Pacific Northwest National Laboratory
(509) 375-2272
ralph.pe...@pnnl.gov
From: Ravi Kiran maghamraviki...@gmail.com
Reply-To: user@phoenix.apache.org
Hi Ralph,
Can you please try modifying the STORE command in the script to the
following:
STORE D INTO 'hbase://$table_name/period,deployment,file_id,recnum' USING
org.apache.phoenix.pig.PhoenixHBaseStorage('$zookeeper', '-batchSize
1000');
Primarily, Phoenix generates the default UPSERT
Hi,
Can you please give it a try by downloading the binaries from
https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.1.0-rc1/bin/ and
then copying over the necessary artifacts like
phoenix-4.1.0-server-hadoop2.jar onto the RS classpath on the server.
Regards
Ravi
On Sun, Aug 31, 2014
at 11:57 PM, Ravi Kiran
maghamraviki...@gmail.com wrote:
Hi Russel,
Apparently, Phoenix 4.0.0 leverages a few API methods of
HBase 0.98.4 which aren't present in the 0.98.1 that ships
with CDH 5.1. That's the primary cause of the build issues.
Regards
Ravi
On Mon, Aug 18
...@ds-iq.com
wrote:
It looks like JIRA issue PHOENIX-898 was originally tracking this, but the
change appears to have been reverted in 4.1.0 RC0 and RC1. Can anyone from
the Phoenix group confirm this?
From: Ravi Kiran [mailto:maghamraviki...@gmail.com]
Sent: Thursday, August 21
Hi Randy,
This feature is being delivered as part of the 4.1.0 RC. Right now we are
going through the voting phase for its release. If you wish to try this
feature before the official release, please follow the instructions at
http://phoenix.apache.org/building.html to build the necessary artifacts.
Hi Jody,
Can you please let us know whether the HBase table you would like to read
from has a composite row key? If not, I believe the standard
TableMapReduceUtil API should do fine.
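For example, a minimal sketch of that standard path (the table and mapper
names are hypothetical):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class SimpleRowCount {
    public static class RowMapper extends TableMapper<Text, LongWritable> {
        @Override
        protected void map(ImmutableBytesWritable key, Result row,
                           Context ctx)
                throws java.io.IOException, InterruptedException {
            ctx.write(new Text("rows"), new LongWritable(1)); // count rows
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(HBaseConfiguration.create(), "row-count");
        TableMapReduceUtil.initTableMapperJob(
                "my_table", new Scan(), RowMapper.class,
                Text.class, LongWritable.class, job);
        job.setOutputFormatClass(NullOutputFormat.class);
        job.waitForCompletion(true);
    }
}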
However, it becomes a bit tricky when the row key is a composite one. In
this case, I am afraid you
Hi Russel,
Apparently, Phoenix 4.0.0 leverages a few API methods of HBase 0.98.4
which aren't present in the 0.98.1 that ships with CDH 5.1. That's the
primary cause of the build issues.
Regards
Ravi
On Mon, Aug 18, 2014 at 5:56 PM, Russell Jurney russell.jur...@gmail.com
wrote:
Hi Thanapool,
Though it's easy to import a SQL table into HBase tables backed by
Phoenix using Sqoop, we do notice issues when the row key of the HBase
table is a set of composite columns. This is because the delimiter used by
Phoenix differs from what Sqoop uses by default.
We
Hi Russel,
When recreating the table, does it complain of a TABLE_ALREADY_EXIST
exception?
If possible, can you please confirm whether you see the table
'DEV_HET_MEF' from the ZooKeeper client (zkCli.sh):
a) hbase zkcli -server host:port
b) ls /hbase/table/
If so, you