>
> Only just saw this. I have worked with and designed a solution using
> CosmosDb which is currently in production. I actually just released a blog
> post on client performance related to CosmosDb here:
> https://weblogs.asp.net/pglavich/cosmosdb-and-client-performance
>

I just read it all, well done. It does concentrate on bulk insert speed,
which may not be a concern for many products. It's interesting to see which
"walls" you hit along the way. Some notes:

* Was your client on your desktop or up in the cloud in the same region as
the database?

* Are you using a different Guid as the partition key for each document?
You're supposed to be more "coarse" and group large groups of documents
into partitions to improve large query performance. I'm not sure how this
would affect your bulk insert tests.

* (nitpicking) When discussing performance, don't say "significant impact"
or "greater impact", use unambiguous expressions like "improve" or
"degrade".


>  As to your question, I haven’t used it personally, but I believe the
> Table API over CosmosDb supports bulk operations (
> https://docs.microsoft.com/en-us/azure/cosmos-db/table-support ) as it is
> the same as the general Windows Azure storage API which supports bulk.
>

I'm not using Tables as the underlying storage, so I can't use the batch
operations. For the Cosmos DB SQL API, the only mention of batch or
transactional operations is in connection with JavaScript stored
procedures, but I haven't researched this yet. If you have to register
server-side JavaScript for this purpose, then I'm quite irritated, not
just because of my JS bias, but because it's a weird language mix.

I also have a (long) blog article on how moving an app suite from SQL
Server to Cosmos DB produced miraculous results:

https://gfkeogh.blogspot.com.au/2018/01/collections-database-history.html

*Greg*
