Hi,

thanks for your reply. My CSV contained roughly 5 million rows, and the import
aborted after about half a million. Could it be that some throttling mechanism
is missing in the implementation of the COPY command?
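
Since the error message says previously-inserted values are kept, one
workaround I'm considering is to split the file and run COPY chunk by chunk,
so a single timeout only means re-running that chunk. A rough, untested
sketch (keyspace, table, column names and chunk size are placeholders for my
real schema):

  $ split -l 100000 -d mydata.csv chunk_    # 100k-row pieces: chunk_00, chunk_01, ...
  $ cqlsh
  cqlsh> COPY mykeyspace.mytable (id, col_a, col_b) FROM 'chunk_00';
  cqlsh> COPY mykeyspace.mytable (id, col_a, col_b) FROM 'chunk_01';
  ...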

I'll try the other way of getting data in.
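
If I go the bulk loader route instead, my reading of the docs Vivek linked
(not something I have run yet, and the hosts and paths below are just
placeholders) is that I would first generate SSTables from the CSV offline,
e.g. with the org.apache.cassandra.io.sstable.CQLSSTableWriter class that
ships with 2.0, and then stream them into the cluster with something like:

  $ sstableloader -d 10.0.0.1,10.0.0.2 /tmp/generated/mykeyspace/mytable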

/Petter


2013/10/11 Vivek Mishra <[email protected]>

> If you are not getting any exception, then the reason for "Request did not
> complete within rpc_timeout." is a socket timeout.
>
> As per http://www.datastax.com/docs/1.1/references/cql/COPY (COPY FROM CSV
> section):
>
> COPY FROM is intended for importing small datasets (a few million rows or
> less) into Cassandra. For importing larger datasets, use Cassandra Bulk
> Loader <http://www.datastax.com/docs/1.1/references/bulkloader#bulkloader>
> or the sstable2json / json2sstable
> <http://www.datastax.com/docs/1.1/references/sstable2json#sstable2json>
> utility.
>
>
> -Vivek
>
> On Fri, Oct 11, 2013 at 3:02 PM, Petter von Dolwitz (Hem) <
> [email protected]> wrote:
>
>> Hi,
>>
>> I'm trying to import CSV data using the COPY ... FROM command. After
>> importing 10% of my 2.5 GB CSV file, the operation aborts with the message:
>>
>> "Request did not complete within rpc_timeout.
>> Aborting import at record #504631 (line 504632). Previously-inserted
>> values still present."
>>
>> There are no exceptions in the log. I'm using Cassandra 2.0.1 on Ubuntu,
>> on a two-machine setup with 4 cores and 15 GB RAM per machine.
>>
>> The table design incorporates many secondary indexes (which someone has
>> discouraged me from using).
>>
>> Can anybody tell me what is going on?
>>
>> Thanks,
>> Petter
>>
>>
>>
>
