Re:Re: Re:Re: Re:Issue about ignite-sql limit of table quantity

2018-04-01 Thread fvyaba
Thanks D.
At 2018-04-01 23:07:25, "Dmitriy Setrakyan" wrote:

Hi Fvyaba,


In order to avoid memory overhead per table, you should create all tables as 
part of the same cache group:
https://apacheignite.readme.io/docs/cache-groups
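For tables created through SQL, the group can be set directly in the DDL: CREATE TABLE accepts a CACHE_GROUP parameter in its WITH clause. A minimal sketch that only builds the DDL string (the group name tenant_tables is a placeholder; pass the result to cache.query(new SqlFieldsQuery(...)) as in the examples below):

```java
public class CacheGroupDdl {
    // Builds a CREATE TABLE statement that places the table's backing cache
    // into a shared cache group, so tables share partition/index structures
    // instead of each paying the full per-cache overhead.
    static String createTableDdl(int i, String group) {
        return String.format(
            "CREATE TABLE TBL_%s (id BIGINT, uid VARCHAR, PRIMARY KEY(id)) "
                + "WITH \"CACHE_GROUP=%s\"", i, group);
    }

    public static void main(String[] args) {
        // e.g. cache.query(new SqlFieldsQuery(createTableDdl(1, "tenant_tables")));
        System.out.println(createTableDdl(1, "tenant_tables"));
    }
}
```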
D.


Re: Re:Re: Re:Issue about ignite-sql limit of table quantity

2018-04-01 Thread Dmitriy Setrakyan
Hi Fvyaba,

In order to avoid memory overhead per table, you should create all tables
as part of the same cache group:
https://apacheignite.readme.io/docs/cache-groups

D.

Re: Re:Re: Re:Issue about ignite-sql limit of table quantity

2018-03-26 Thread aealexsandrov
Hi Fvyaba,

I investigated your example. In your code you create a new cache every time you
create a new table, and every new cache has some memory overhead. The following
code can help you measure the average allocated memory:

try (IgniteCache<Object, Object> cache = ignite.getOrCreateCache(defaultCacheCfg)) {
    for (int i = 1; i < 100; i++) {
        cache.query(new SqlFieldsQuery(String.format(
            "CREATE TABLE TBL_%s (id BIGINT, uid VARCHAR, PRIMARY KEY(id))", i)));

        System.out.println("Count " + i + " -----");
        for (DataRegionMetrics metrics : ignite.dataRegionMetrics()) {
            System.out.println(">>> Memory Region Name: " + metrics.getName());
            System.out.println(">>> Allocation Rate: " + metrics.getAllocationRate());
            System.out.println(">>> Allocated Size Full: " + metrics.getTotalAllocatedSize());
            System.out.println(">>> Allocated Size avg: " + metrics.getTotalAllocatedSize() / i);
            System.out.println(">>> Physical Memory Size: " + metrics.getPhysicalMemorySize());
        }
    }
}

On my machine with default settings I got the following:

>>> Memory Region Name: Default_Region
>>> Allocation Rate: 3419.9666
>>> Allocated Size Full: 840491008
>>> Allocated Size avg: 8489808
>>> Physical Memory Size: 840491008

So it's about 8 MB per cache (so with 3.2 GB you can create about 400 caches).
I am not sure whether that is acceptable for you, but you can do either of the
following to avoid org.apache.ignite.IgniteCheckedException: Out of memory in
data region:
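As a sanity check on that estimate, here is the arithmetic as a plain-Java sketch (numbers taken from the metrics output above):

```java
public class RegionCapacitySketch {
    // How many caches fit in a data region, given the measured per-cache overhead.
    static long estimateCacheCount(double regionGib, long bytesPerCache) {
        long regionBytes = (long) (regionGib * 1024 * 1024 * 1024);
        return regionBytes / bytesPerCache;
    }

    public static void main(String[] args) {
        // 3.2 GiB region / ~8489808 bytes per cache -> roughly 400 caches
        System.out.println(estimateCacheCount(3.2, 8_489_808L));
    }
}
```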

1) Increase the maximum available off-heap memory:

<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="dataStorageConfiguration">
        <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            <property name="defaultDataRegionConfiguration">
                <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                    <property name="name" value="Default_Region"/>
                    <!-- HERE: raise the off-heap limit, e.g. 8 GiB -->
                    <property name="maxSize" value="#{8L * 1024 * 1024 * 1024}"/>
                </bean>
            </property>
        </bean>
    </property>
</bean>

2) Use persistence (or swap space):

<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="dataStorageConfiguration">
        <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            <property name="defaultDataRegionConfiguration">
                <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                    <property name="name" value="Default_Region"/>
                    <!-- THIS ONE: let the region overflow to disk -->
                    <property name="persistenceEnabled" value="true"/>
                </bean>
            </property>
        </bean>
    </property>
</bean>

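If you configure the node from code rather than XML, the same two options are available on the programmatic API. A sketch (the 8 GiB value is only an example; this needs ignite-core on the classpath):

```java
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class DataRegionConfigSketch {
    static IgniteConfiguration configure() {
        DataRegionConfiguration region = new DataRegionConfiguration()
            .setName("Default_Region")
            // 1) raise the off-heap cap (example value: 8 GiB)
            .setMaxSize(8L * 1024 * 1024 * 1024)
            // 2) or let the region persist to disk instead of failing with
            //    "Out of memory in data region"
            .setPersistenceEnabled(true);

        return new IgniteConfiguration().setDataStorageConfiguration(
            new DataStorageConfiguration().setDefaultDataRegionConfiguration(region));
    }
}
```

With persistence enabled, remember to activate the cluster (ignite.cluster().active(true)) as in the runnable example below.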
You can read more about it here:

https://apacheignite.readme.io/docs/distributed-persistent-store
https://apacheignite.readme.io/v1.0/docs/off-heap-memory

Please try the following code:

1) Add this to your config:

<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="dataStorageConfiguration">
        <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            <property name="defaultDataRegionConfiguration">
                <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                    <property name="name" value="Default_Region"/>
                    <property name="persistenceEnabled" value="true"/>
                </bean>
            </property>
        </bean>
    </property>
</bean>

2) Run the following:

public class Example {
    public static void main(String[] args) throws IgniteException {
        try (Ignite ignite = Ignition.start("examples/config/example-ignite.xml")) {
            ignite.cluster().active(true);

            CacheConfiguration<Object, Object> defaultCacheCfg =
                new CacheConfiguration<>("Default_cache").setSqlSchema("PUBLIC");

            defaultCacheCfg.setDataRegionName("Default_Region");

            try (IgniteCache<Object, Object> cache = ignite.getOrCreateCache(defaultCacheCfg)) {
                for (int i = 1; i < 1000; i++) {
                    // remove the old table first, just in case it already exists
                    cache.query(new SqlFieldsQuery(String.format(
                        "DROP TABLE IF EXISTS TBL_%s", i)));
                    // create the new table
                    cache.query(new SqlFieldsQuery(String.format(
                        "CREATE TABLE TBL_%s (id BIGINT, uid VARCHAR, PRIMARY KEY(id))", i)));

                    System.out.println("Count " + i + " -----");
                    for (DataRegionMetrics metrics : ignite.dataRegionMetrics()) {
                        System.out.println(">>> Memory Region Name: " + metrics.getName());
                        System.out.println(">>> Allocation Rate: " + metrics.getAllocationRate());
                        System.out.println(">>> Allocated Size Full: " + metrics.getTotalAllocatedSize());
                        System.out.println(">>> Allocated Size avg: " + metrics.getTotalAllocatedSize() / i);
                        System.out.println(">>> Physical Memory Size: " + metrics.getPhysicalMemorySize());
                    }
                }
            }

            ignite.cluster().active(false);
        }
    }
}









--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/




Re:Re: Re:Issue about ignite-sql limit of table quantity

2018-03-23 Thread fvyaba
Hi Andrei,
Thanks for your answer!
My laptop runs macOS (16 GB RAM). I ran a simple test, and it seems table
creation costs much more memory and time than data creation; we got
'IgniteOutOfMemoryException: Out of memory in data region [name=default,
initSize=256.0 MiB, maxSize=3.2 GiB, persistenceEnabled=false]' within a few
seconds, see below:
* table count < 400 (time cost: 74 s, 'java process' memory cost: 6.32 GB):

Ignite ignite = Ignition.start("config/example-ignite.xml");

try (IgniteCache<?, ?> cache = ignite.getOrCreateCache(cfg)) {
    long start = System.currentTimeMillis();

    for (int i = 0; i < 400; i++) {
        cache.query(new SqlFieldsQuery(String.format(
            "CREATE TABLE TBL_%s (id BIGINT, uid VARCHAR, PRIMARY KEY(id))", i)));
    }

    System.out.println(System.currentTimeMillis() - start);
}

* table count 400~500 or 500+: got OOME in a few seconds
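A back-of-the-envelope read of those numbers (74 s and 6.32 GB of process memory for 400 tables), as a plain-Java sketch:

```java
public class TableCostSketch {
    // Average wall-clock time spent per CREATE TABLE, in milliseconds.
    static double msPerTable(double totalSeconds, int tables) {
        return totalSeconds * 1000 / tables;
    }

    // Average process-memory growth per table, in megabytes.
    static double mbPerTable(double processGb, int tables) {
        return processGb * 1024 / tables;
    }

    public static void main(String[] args) {
        // fvyaba's run: 400 tables in 74 s, java process at 6.32 GB afterwards
        System.out.println(msPerTable(74.0, 400) + " ms per table");  // 185 ms
        System.out.println(mbPerTable(6.32, 400) + " MB per table");  // ~16.2 MB
    }
}
```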


So I guess:
an SQL table is a special thing in Ignite: both its memory cost and its creation
path are very expensive, and it is not treated as a first-class citizen the way
a plain cache is.


Is this a problem? Think about a multi-tenant scenario where a system separates
tenants at the table-name level.



At 2018-03-23 23:04:59, "aealexsandrov" wrote:
>Hi Fvyaba,
>
>There is no information about it in the documentation, but according to
>several places in the code, the table count isn't greater than int32:
>
>void ReadTableMetaVector(ignite::impl::binary::BinaryReaderImpl& reader,
>                         TableMetaVector& meta)
>{
>    int32_t metaNum = reader.ReadInt32();
>
>    meta.clear();
>    meta.reserve(static_cast<size_t>(metaNum));
>
>    for (int32_t i = 0; i < metaNum; ++i)
>    {
>        meta.push_back(TableMeta());
>
>        meta.back().Read(reader);
>    }
>}
>
>So any restriction you see will be related to the available memory on your
>nodes.
>
>Thank you,
>Andrei
>