Victor,

Is it correct to assume that the users define this configuration first and
only then you spin up an Ignite cluster for them? If so, then you can
translate the user-defined number of records into a final data region size.
The formula might be as follows - "data_region_size =
provided_number_of_records * (avg_record_size +
avg_ignite_overhead_per_record) * K" where:

   - avg_record_size - you said you know it,
   - avg_ignite_overhead_per_record - the overhead is about 200 bytes per
   entry on average
   (refer to the callout in this section
   
https://apacheignite.readme.io/docs/capacity-planning#calculating-memory-usage
   )
   - K - an extra-overhead factor. It should account for the memory needed
   for indexes and backup copies of records:
   
https://apacheignite.readme.io/docs/capacity-planning#memory-capacity-planning-example


This spreadsheet calculator can be helpful with K:
https://apacheignite.readme.io/docs/capacity-planning#example
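The sizing formula above can be sketched in a few lines of Java. The record
count (1,000,000), average record size (1 KiB), and K = 1.3 below are purely
illustrative values, not numbers from this thread; the 200-byte per-entry
overhead is the average cited in the capacity-planning docs. The resulting
byte count is what you would pass to DataRegionConfiguration#setMaxSize.

```java
public class RegionSizing {

    // Average Ignite per-entry overhead, per the capacity-planning docs.
    static final long AVG_OVERHEAD_PER_RECORD = 200;

    /**
     * data_region_size = records * (avg_record_size + overhead) * K
     *
     * @param records        user-provided number of records
     * @param avgRecordBytes average payload size of one record, in bytes
     * @param k              extra-overhead factor (indexes, backups, etc.)
     * @return suggested data region size in bytes
     */
    static long dataRegionSize(long records, long avgRecordBytes, double k) {
        return (long) (records * (avgRecordBytes + AVG_OVERHEAD_PER_RECORD) * k);
    }

    public static void main(String[] args) {
        // Illustrative inputs: 1M records of ~1 KiB each, K = 1.3.
        long sizeBytes = dataRegionSize(1_000_000L, 1024L, 1.3);
        System.out.println("Suggested data region size (bytes): " + sizeBytes);
        // This value would then feed into, e.g.:
        //   new DataRegionConfiguration().setMaxSize(sizeBytes)
    }
}
```

Keeping the per-record overhead as an added term (rather than a multiplier)
matters: for small records the fixed ~200 bytes can dominate the payload size.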

-
Denis


On Thu, Jun 25, 2020 at 9:44 AM Victor <vicky...@gmail.com> wrote:

> Ilya / Denis,
>
> I am aware of the Data Region with max size as bytes. I was looking for a
> way to do this via the number of records.
>
> Anyway, seems there is no way in Ignite to do this today.
>
> I am looking at a couple of other options to achieve this; for that, I am
> looking for a way to calculate the storage size per record in a
> cache/table. Is there an API available to retrieve that?
>
> Thanks,
> Vic
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
