Hi Leone,
I recommend increasing the timeout parameter (see
http://phoenix.incubator.apache.org/tuning.html), which is controlled
by phoenix.query.timeoutMs. Also check that hbase.rpc.timeout is set
to the same value on your region servers.
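For reference, a minimal sketch of how those two settings might look in hbase-site.xml — the 600000 ms (10 minute) value is only an illustrative placeholder, not a recommendation; tune it to your cluster as described below:

```xml
<!-- hbase-site.xml: phoenix.query.timeoutMs is read on the client side;
     hbase.rpc.timeout must also be set on the region servers. -->
<property>
  <!-- How long a Phoenix query (including a CREATE INDEX build) may run -->
  <name>phoenix.query.timeoutMs</name>
  <value>600000</value>
</property>
<property>
  <!-- Keep the HBase RPC timeout in line with the Phoenix query timeout -->
  <name>hbase.rpc.timeout</name>
  <value>600000</value>
</property>
```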

When you create an index over an existing table, Phoenix needs to traverse
the existing table to create all the index rows. If the table is big, this
can take some time. You want to set your timeouts such that a region's
worth of data can be traversed and the index rows created within that time
period. The values to choose depend on your region size, cluster size,
table size, etc.
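As a concrete sketch of the DDL that triggers this traversal (the index, table, and column names here are hypothetical, not from your schema):

```sql
-- Hypothetical example: creating an index over an already-populated table
-- makes Phoenix scan every region of PAYMENT to build the index rows,
-- so the statement runs until the full table has been traversed.
CREATE INDEX payment_status_idx ON PAYMENT (cf1.status)
    INCLUDE (cf1.amount);
```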

Thanks,
James


On Mon, Apr 21, 2014 at 2:37 AM, lizhe.sun <[email protected]> wrote:

>  Hi There
>
>
> I just started to use Phoenix a couple of weeks ago. I found it is very
> difficult to create a secondary index for an existing HTable which was
> created by Phoenix DDL with sqlline.
>
> What I have done are the following steps:
>  1: uploaded the CSV file, which is approximately 200 GB, to HDFS --
> SUCCESS
>  2: created the table, which has two column families and 43
> qualifiers, in Phoenix using sqlline -- SUCCESS
>  3: then used MapReduce to do the bulk load -- SUCCESS
>     for example: hadoop jar /tmp/phoenix/phoenix-trxorder-bulkload.jar
> org.apache.phoenix.mapreduce.CsvBulkLoadTool --table PAYMENT --input
> /input/Test.del
>
>    checking the result of the bulk load:
>
>  4: then I have tried to create an index on this table several times;
>
> Partial SUCCESS; it should be 512415555 rows.
>
> So, the question is: when the primary HTable is very large, should I use
> this way (sqlline DDL) to create a secondary index on it? Do we have other
> approaches which are more efficient?
>
>
> Many thanks and looking forward to hearing from you!
>
>
>
> Leone
