Re: Using Phoenix Bulk Upload CSV to upload 200GB data

2015-09-16 Thread Gaurav Kanade
Thanks for the pointers Gabriel! Will give it a shot now! On 16 September 2015 at 12:15, Gabriel Reid wrote: > Yes, there is post-processing that goes on within the driver program (i.e. > the command line tool with which you started the import job). > > The MapReduce job actually just creates HFiles, and then the post-processing simply involves telling HBase to use these HFiles. …

Re: Using Phoenix Bulk Upload CSV to upload 200GB data

2015-09-16 Thread Gabriel Reid
Yes, there is post-processing that goes on within the driver program (i.e. the command line tool with which you started the import job). The MapReduce job actually just creates HFiles, and then the post-processing simply involves telling HBase to use these HFiles. If your terminal closed while running …
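
Because that post-processing runs in the driver process, the driver JVM has to stay alive after the MapReduce job finishes. A minimal sketch of driving the bulk load programmatically (so it can run under a scheduler rather than an interactive terminal); the table name and input path are placeholders, and CsvBulkLoadTool is the same class behind the command-line tool:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.util.ToolRunner;
    import org.apache.phoenix.mapreduce.CsvBulkLoadTool;

    public class BulkLoadDriver {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            // Runs the MR job that writes the HFiles, then tells HBase to
            // adopt them; this JVM must survive until both steps finish.
            int exit = ToolRunner.run(conf, new CsvBulkLoadTool(),
                    new String[] {"--table", "EXAMPLE",
                                  "--input", "/data/example.csv"});
            System.exit(exit);
        }
    }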

Re: Using Phoenix Bulk Upload CSV to upload 200GB data

2015-09-16 Thread Gaurav Kanade
Sure, attached below are the job counter values. I checked the final status of the job and it said it succeeded. I could not see the import tool exit, exactly because I ran it overnight and my machine rebooted at some point for some updates - I wonder if there is some post-processing after the MR job which might …

Re: PhoenixMapReduceUtil multiple tables

2015-09-16 Thread Ravi Kiran
Thanks Asher. On Wed, Sep 16, 2015 at 7:10 AM, Asher Devuyst wrote: > Would be great to support both reading from multiple tables as the ticket > states as well as writing out to multiple tables. When writing to HBase > directly this is supported via the MultiTableOutputFormat class. This > could be used as inspiration for Phoenix's implementation. …

Re: Using Phoenix Bulk Upload CSV to upload 200GB data

2015-09-16 Thread Gabriel Reid
Can you view (and post) the job counter values from the import job? These should be visible in the job history server. Also, did you see the import tool exit successfully (in the terminal where you started it)? - Gabriel On Wed, Sep 16, 2015 at 6:24 PM, Gaurav Kanade wrote: > Hi guys > > I was able to get this to work after using bigger VMs for data nodes; …

Re: setting up community repo of Phoenix for CDH5?

2015-09-16 Thread Andrew Purtell
@J-M, don't worry about signing the output, it won't be an issue. On Wed, Sep 16, 2015 at 4:46 AM, Jean-Marc Spaggiari < jean-m...@spaggiari.org> wrote: > @James: I'm working on the parcel building ;) If not me I will try to find > someone to do it. Stay tuned. > @Andrew: It works for me that way, cool! I just have a signature issue where it says I have no signature. Will that be an issue? …

Re: Using Phoenix Bulk Upload CSV to upload 200GB data

2015-09-16 Thread Gaurav Kanade
Hi guys, I was able to get this to work after using bigger VMs for the data nodes; however, the bigger problem I am now facing is that after my MR job completes successfully, I am not seeing any rows loaded in my table (count shows 0 via both Phoenix and HBase). Am I missing something simple? Thanks, Gaurav

Re: timeouts for long queries

2015-09-16 Thread James Heather
Can you tell me how to set these client-side properties programmatically? I'm using JDBI, which uses JDBC; I'm building the whole application into an executable jar. It's not clear to me where I would put an hbase-site.xml, but I suspect that it is easier in any case to set the Phoenix properties …
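
One way to do this without an hbase-site.xml is to pass the settings as JDBC connection Properties; a minimal sketch (the ZooKeeper quorum is a placeholder and the timeout values are purely illustrative):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class TimeoutExample {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Phoenix client-side query timeout, in milliseconds
            props.setProperty("phoenix.query.timeoutMs", "600000");
            // Raise the HBase RPC timeout to match, or the RPC layer
            // will still abort long-running scans
            props.setProperty("hbase.rpc.timeout", "600000");
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:phoenix:zk-host:2181", props)) {
                // run long queries here
            }
        }
    }

With JDBI, the same Properties should be suppliable through whatever DataSource or connection factory is handed to JDBI, since they just flow through to the underlying driver.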

RE: Can't add views on HBase tables after upgrade

2015-09-16 Thread Jeffrey Lyons
Thanks a bunch Samarth and James, I will switch our scripts over to use the correct way! Jeff From: James Taylor [mailto:jamestay...@apache.org] Sent: September-15-15 7:22 PM To: user Subject: Re: Can't add views on HBase tables after upgrade Correct, you must use the CREATE VIEW ( ) syntax w…
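
For reference, the shape of the DDL being referred to, as a sketch executed over JDBC (the table and column names are hypothetical; the quoted identifiers must match the existing HBase table and column family exactly):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class CreateViewExample {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:phoenix:zk-host:2181")) {
                // Map a Phoenix view over an existing HBase table; the
                // columns are declared inside the parentheses
                conn.createStatement().execute(
                    "CREATE VIEW \"existing_table\" ( "
                    + "pk VARCHAR PRIMARY KEY, "
                    + "\"cf\".\"col1\" VARCHAR )");
            }
        }
    }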

Re: PhoenixMapReduceUtil multiple tables

2015-09-16 Thread Asher Devuyst
It would be great to support both reading from multiple tables, as the ticket states, as well as writing out to multiple tables. When writing to HBase directly this is supported via the MultiTableOutputFormat class. This could be used as inspiration for Phoenix's implementation. --Asher On Wed, Sep …
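
For what it's worth, a minimal sketch of the MultiTableOutputFormat pattern Asher mentions (HBase 1.x API; the input layout, table names, and column family are all hypothetical). The key idea is that the output key names the destination table per record:

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class MultiTableMapper
            extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Assumed input layout: targetTable,rowKey,value
            String[] parts = value.toString().split(",");
            Put put = new Put(Bytes.toBytes(parts[1]));
            put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"),
                    Bytes.toBytes(parts[2]));
            // The output key carries the table this Put is routed to
            context.write(new ImmutableBytesWritable(Bytes.toBytes(parts[0])), put);
        }
    }

On the job side, job.setOutputFormatClass(MultiTableOutputFormat.class) (from org.apache.hadoop.hbase.mapreduce) replaces the usual single-table output format.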

Re: setting up community repo of Phoenix for CDH5?

2015-09-16 Thread Jean-Marc Spaggiari
@James: I'm working on the parcel building ;) If not me I will try to find someone to do it. Stay tuned. @Andrew: It works for me that way, cool! I just have a signature issue where it says I have no signature. Will that be an issue? Thanks all, JM 2015-09-16 3:24 GMT-04:00 James Heather : > Great! …

Re: Question about IndexTool

2015-09-16 Thread Gabriel Reid
The call to HFileOutputFormat.configureIncrementalLoad(job, htable) in the IndexTool configures the job to use a Reducer which does the sorting of KeyValues. The KeyValues written to an HFile do indeed need to be sorted, so I would guess that you'll need to do basically the equivalent of a reduce …
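
The wiring in question looks roughly like this (a sketch against the HBase 0.98/1.x API; the index table name is hypothetical). configureIncrementalLoad installs a TotalOrderPartitioner and a sorting reducer, which together are what guarantee sorted KeyValues per HFile:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;
    import org.apache.hadoop.mapreduce.Job;

    public class IndexJobWiring {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            Job job = Job.getInstance(conf, "index-build");
            HTable htable = new HTable(conf, "MY_INDEX");  // hypothetical index table
            // Sets a TotalOrderPartitioner (one key range per region) plus a
            // sorting reducer, so each HFile receives its KeyValues in order
            HFileOutputFormat.configureIncrementalLoad(job, htable);
        }
    }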

Column comments

2015-09-16 Thread Matjaž Trtnik
Hi! Is there a way to create a table in Phoenix with column comments? Something like:

    CREATE TABLE IF NOT EXISTS us_population (
        state CHAR(2) NOT NULL,
        city VARCHAR NOT NULL,
        population BIGINT '…'
        CONSTRAINT my_pk PRIMARY KEY (state, city));

If this is possible will I be …

Re: Question about IndexTool

2015-09-16 Thread Yiannis Gkoufas
Hi Gabriel, thanks a lot for the reply. I noticed myself afterwards that it does a rollback on every upsert and then extracts the KeyValues. Basically I am trying to replicate the same job in Spark, and I cannot understand where in the existing source code of IndexTool it is guaranteed that the r…
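
The upsert-then-rollback trick Yiannis describes looks roughly like this against the Phoenix client API (a sketch; the table and values are hypothetical). Note that no sorting happens here; in the MR job that is the reducer's responsibility, so a Spark port needs its own global sort step:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Iterator;
    import java.util.List;
    import org.apache.hadoop.hbase.KeyValue;
    import org.apache.hadoop.hbase.util.Pair;
    import org.apache.phoenix.util.PhoenixRuntime;

    public class UncommittedKeyValues {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:phoenix:zk-host:2181")) {
                conn.setAutoCommit(false);
                conn.createStatement().executeUpdate(
                    "UPSERT INTO MY_TABLE VALUES ('k1', 42)");  // hypothetical table
                // Pull the KeyValues Phoenix would have written, without committing
                Iterator<Pair<byte[], List<KeyValue>>> it =
                    PhoenixRuntime.getUncommittedDataIterator(conn);
                while (it.hasNext()) {
                    for (KeyValue kv : it.next().getSecond()) {
                        System.out.println(kv);  // hand off to the Spark pipeline instead
                    }
                }
                conn.rollback();  // discard the uncommitted mutations
            }
        }
    }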

Re: setting up community repo of Phoenix for CDH5?

2015-09-16 Thread James Heather
Great! Thank you! I'd wondered about parcel building. It did look as though a parcel is just a .tgz containing the classes and a few bits of metadata, so hopefully it's doable. It would be really nice if we could provide a working 4.5 parcel. James On 16 Sep 2015 01:02, "Andrew Purtell" wrote: > I…