> I use MySQL - InnoDB - Memcached for NoSQL.
>
> I have to insert 10 million rows into a table.
> One option is a loop with one set per value:
> FOR (i = 0; i < 10000000; i++)
>    ClientMemcached->set("KEY_" + i, "ROW_VALUE_" + i);
> END FOR
>
> But I think this is not the best solution in terms of performance.
>
> Any ideas / suggestions?
>
> Thanks for your help.

You'll have better luck asking MySQL folks: this is the list for the
memcached project, not for whatever MySQL uses for its storage engine
interface (which may be our code, but only partially).

But, off the cuff, you can try using the binary protocol, or the ASCII
protocol with "noreply", to pack the SETs together so you're not waiting
for a round trip on each one. Whether that's possible will depend on your
client.
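
To make that concrete, here's a rough, untested sketch of what pipelining
ASCII "set ... noreply" commands over one connection looks like in Python.
The host, port, key naming and batch size are just placeholders, and a
decent client library may already do this for you:

import socket

def bulk_set_noreply(rows, host="127.0.0.1", port=11211, batch=1000):
    # rows is any iterable of string values; keys here are just "row:<i>".
    sock = socket.create_connection((host, port))
    try:
        buf = []
        for i, value in enumerate(rows):
            data = value.encode("utf-8")
            # ASCII protocol: set <key> <flags> <exptime> <bytes> noreply\r\n<data>\r\n
            buf.append(b"set row:%d 0 0 %d noreply\r\n%s\r\n" % (i, len(data), data))
            if len(buf) >= batch:
                sock.sendall(b"".join(buf))
                del buf[:]
        if buf:
            sock.sendall(b"".join(buf))
        # One last command *with* a reply, so we know the server drained the pipeline.
        sock.sendall(b"version\r\n")
        sock.recv(4096)
    finally:
        sock.close()

bulk_set_noreply("value %d" % i for i in range(10_000_000))

The trailing "version" command is only there to force a single reply at the
end, so you know the server has consumed everything you sent.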

However, I have a feeling this will never be the optimal way to insert rows
into InnoDB. Unless the plugin is doing group-commit tricks for you, you're
likely paying a transactional commit per set. It would be faster to use the
SQL interface with batched INSERTs, or LOAD DATA INFILE.
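
For comparison, here's an untested sketch of the batched-INSERT approach on
the SQL side. The table, columns, credentials, chunk size and the
mysql.connector driver are all assumptions; any driver with an
executemany-style API works the same way:

import mysql.connector

def bulk_insert(rows, chunk=10_000):
    conn = mysql.connector.connect(user="app", password="secret", database="test")
    cur = conn.cursor()
    sql = "INSERT INTO kv (k, v) VALUES (%s, %s)"
    batch = []
    for key, value in rows:
        batch.append((key, value))
        if len(batch) >= chunk:
            cur.executemany(sql, batch)   # the driver can fold these into multi-row INSERTs
            conn.commit()                 # one commit per chunk, not per row
            del batch[:]
    if batch:
        cur.executemany(sql, batch)
        conn.commit()
    cur.close()
    conn.close()

bulk_insert(("row:%d" % i, "value %d" % i) for i in range(10_000_000))

And for a one-shot bulk load, dumping the rows to a file and pointing
LOAD DATA INFILE at it will usually beat even that.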
