Re: Question

2018-08-07 Thread dkarachentsev
Hi,

It is defined by the AffinityFunction [1]. By default there are 1024
partitions; the affinity function automatically calculates which nodes will
keep the required partitions and minimizes rebalancing when the topology
changes (nodes join or leave).

[1]https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/affinity/rendezvous/RendezvousAffinityFunction.html
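A minimal sketch of setting the affinity function explicitly (the cache name and partition count here are illustrative, not values from the thread):

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class AffinityConfigExample {
    public static void main(String[] args) {
        // The affinity function maps keys to partitions and partitions to nodes.
        RendezvousAffinityFunction aff = new RendezvousAffinityFunction();
        aff.setPartitions(512); // override the default of 1024

        CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");
        ccfg.setAffinity(aff);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setCacheConfiguration(ccfg);

        Ignition.start(cfg);
    }
}
```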

Thanks!
-Dmitry



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: MySQL cache load causes java.sql.SQLException: GC overhead limit exceeded

2018-08-07 Thread Orel Weinstock (ExposeBox)
I've used the web-console generated LoadCaches file. From what I
understand, looking at the source code, this is not supposed to keep it
on-heap at all (and I've supplied ample off-heap space).

It "just worked" with a HUGE memory allocation, but I will optimize it
later.

On 7 August 2018 at 17:16, Ilya Kasnacheev 
wrote:

> Hello!
>
> This is likely caused by trying to keep all the table data in memory
> during data load. Can you share your code so that we could take a look?
>
> Regards,
>
>
> --
> Ilya Kasnacheev
>
> 2018-08-06 18:00 GMT+03:00 Orel Weinstock (ExposeBox):
>
>> Hi all,
>>
>> Changing the MAIN_CLASS env variable and tweaking the default heap size
>> (2.7GB) and default data region size (8GB), I'm trying to load a small
>> (<4GB) MySQL table into the cache and get a GC overhead limit error. Should
>> I increase memory? Is there a configuration I'm missing?
>>
>> Thanks,
>> --
>>
>> --
>> *Orel Weinstock*
>> Software Engineer
>> Email:o...@exposebox.com 
>> Website: www.exposebox.com
>>
>>
>


-- 

-- 
*Orel Weinstock*
Software Engineer
Email:o...@exposebox.com 
Website: www.exposebox.com


Re: While enabling JCache, JCacheMetrics is throwing NullPointerException in getCacheManager with spring boot

2018-08-07 Thread Вячеслав Коптилин
Hi,

You can create a cache via CacheManager instance using Ignite
CacheConfiguration or MutableConfiguration:

public class ExampleNodeStartup {
    public static void main(String[] args) throws Exception {
        CachingProvider provider = Caching.getCachingProvider();
        CacheManager manager = provider.getCacheManager();

        Cache c1 = manager.createCache("my-test-cache-1", new MutableConfiguration<>());

        CacheConfiguration cfg = new CacheConfiguration<>("my-test-cache-2");
        cfg.setCacheMode(CacheMode.REPLICATED);
        cfg.setBackups(2);

        Cache c2 = manager.createCache(cfg.getName(), cfg);
    }
}

Moreover, you can provide your own Ignite configuration as well:

public class ExampleNodeStartup {
    public static void main(String[] args) throws Exception {
        URI igniteCfg = new URI("//projects//ignite//examples//config//example-cache.xml");

        CachingProvider provider = Caching.getCachingProvider();
        CacheManager manager = provider.getCacheManager(igniteCfg,
            ExampleNodeStartup.class.getClassLoader());

        Cache dfltCache = manager.getCache("default");
    }
}

All these examples work as expected, without any errors/exceptions.

So, I would suggest creating a small project that reproduces the issue you
mentioned and uploading it to GitHub so that the community can take a look
at it.

Thanks.

Tue, Aug 7, 2018 at 20:45, daya airody wrote:

> Hi slava,
>
> thanks for your comments. I am creating the cache directly using JCache API
> below:
> ---
> @Bean
> public JCacheManagerCustomizer cacheManagerCustomizer() {
> return cm -> {
> Configuration cacheConfiguration =
> createCacheConfiguration();
>
> if (cm.getCache("users") == null)
> cm.createCache("users", cacheConfiguration);
> if (cm.getCache("cannedReports") == null)
> cm.createCache("cannedReports",
> createCustomCacheConfiguration());
> };
> }
> --
> Even though I have created these caches using JCache API ( and not through
> Ignite API), when I restart my application, cache.getCacheManager() is
> returning null within JCacheMetrics constructor. This is a blocker.
>
> Any help is appreciated.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: values retrieved from the cache are wrapped with JdkDynamicAopProxy while using springboot and JCache

2018-08-07 Thread Павлухин Иван
Hi,

Looks like Spring itself wraps the result into a proxy. If you can provide a
reproducer, it will help to find the reason faster.

2018-08-07 21:09 GMT+03:00 daya airody :

> Values retrieved from cache are wrapped with JdkDynamicAopProxy.  This
> throws
> below NPEs
>
> ---
> java.lang.NullPointerException: null
> at
> org.springframework.aop.framework.AdvisedSupport.
> getInterceptorsAndDynamicInterceptionAdvice(AdvisedSupport.java:481)
> at
> org.springframework.aop.framework.JdkDynamicAopProxy.
> invoke(JdkDynamicAopProxy.java:197)
> at com.sun.proxy.$Proxy255.getEmailAddress(Unknown Source)
> at
> com.partnertap.analytics.controller.AdminCannedController.getAllReps(
> AdminCannedController.java:51)
>
> ---
> I don't understand why cached values should be wrapped with proxies.
> JdkDynamicAopProxy uses methodCache, which is null when the value is
> retrieved from the cache.
>
> This is where I am caching the java method
> 
> @CacheResult(cacheName = "cannedReports")
> public List getAllReps(@CacheKey
> String
> managerId) {
> -
> In the object calling above method, I am trying to print, but getting NPE
> instead.
>
> 
> List allReps =
> reportsService.getAllReps(managerId);
> for (ReportsRepDetailsInterface repDetail : allReps) {
> logger.info("email->", repDetail.getEmailAddress());
> }
> -
>
> please help.
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Ivan Pavlukhin


values retrieved from the cache are wrapped with JdkDynamicAopProxy while using springboot and JCache

2018-08-07 Thread daya airody
Values retrieved from cache are wrapped with JdkDynamicAopProxy.  This throws
below NPEs

---
java.lang.NullPointerException: null
at
org.springframework.aop.framework.AdvisedSupport.getInterceptorsAndDynamicInterceptionAdvice(AdvisedSupport.java:481)
at
org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:197)
at com.sun.proxy.$Proxy255.getEmailAddress(Unknown Source)
at
com.partnertap.analytics.controller.AdminCannedController.getAllReps(AdminCannedController.java:51)

---
I don't understand why cached values should be wrapped with proxies.
JdkDynamicAopProxy uses methodCache, which is null when the value is
retrieved from the cache.

This is where I am caching the java method

@CacheResult(cacheName = "cannedReports")
public List<ReportsRepDetailsInterface> getAllReps(@CacheKey String managerId) {
-
In the object calling above method, I am trying to print, but getting NPE
instead.


List<ReportsRepDetailsInterface> allReps = reportsService.getAllReps(managerId);
for (ReportsRepDetailsInterface repDetail : allReps) {
    logger.info("email -> {}", repDetail.getEmailAddress());
}
-

please help.




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: While enabling JCache, JCacheMetrics is throwing NullPointerException in getCacheManager with spring boot

2018-08-07 Thread daya airody
Hi slava,

thanks for your comments. I am creating the cache directly using JCache API
below:
---
@Bean
public JCacheManagerCustomizer cacheManagerCustomizer() {
return cm -> {
Configuration cacheConfiguration =
createCacheConfiguration();

if (cm.getCache("users") == null) 
cm.createCache("users", cacheConfiguration);
if (cm.getCache("cannedReports") == null)
cm.createCache("cannedReports",
createCustomCacheConfiguration());
};
} 
--
Even though I have created these caches using the JCache API (and not through
Ignite API), when I restart my application, cache.getCacheManager() is
returning null within JCacheMetrics constructor. This is a blocker.

Any help is appreciated.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Question

2018-08-07 Thread romil kothawade
How does Ignite decide how many partitions should be made and on which nodes
cache partitions should be stored? Can you please explain, as I am new to Ignite?


Re: remote filter of continuous query.

2018-08-07 Thread Ilya Kasnacheev
Yes, you can always use Java filters, since all Apache Ignite nodes contain
a JVM.
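As a hedged sketch (the cache name and filter condition are invented for illustration), a Java remote filter for a continuous query looks roughly like this:

```java
import javax.cache.configuration.FactoryBuilder;
import javax.cache.event.CacheEntryEvent;
import javax.cache.event.CacheEntryEventFilter;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ContinuousQuery;

public class CqFilterExample {
    // The remote filter executes on server nodes inside the JVM,
    // so it works regardless of which platform the client uses.
    public static class ImportantFilter implements CacheEntryEventFilter<Integer, String> {
        @Override public boolean evaluate(CacheEntryEvent<? extends Integer, ? extends String> evt) {
            return evt.getValue() != null && evt.getValue().startsWith("important");
        }
    }

    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        IgniteCache<Integer, String> cache = ignite.getOrCreateCache("events");

        ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();
        qry.setRemoteFilterFactory(FactoryBuilder.factoryOf(ImportantFilter.class));

        // The local listener only sees entries that passed the remote filter.
        qry.setLocalListener(evts -> evts.forEach(
            e -> System.out.println(e.getKey() + " -> " + e.getValue())));

        cache.query(qry);
    }
}
```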

Regards,

-- 
Ilya Kasnacheev

2018-08-07 17:22 GMT+03:00 Som Som <2av10...@gmail.com>:

> can i use both remote java and c# filters in this case or not?
>
Tue, Aug 7, 2018, 17:13 Ilya Kasnacheev wrote:
>
>> Hello!
>>
>> My guess is that you need to start all of your nodes (including server
>> nodes) as C# nodes in order to use remote filters or a lot of other
>> features that run code remotely.
>>
>> You can probably even use Apache.Ignite.exe for that.
>>
>> Regards,
>>
>> --
>> Ilya Kasnacheev
>>
>> 2018-08-07 17:11 GMT+03:00 Som Som <2av10...@gmail.com>:
>>
>>> Hi.
>>>
>>> i'm trying to set up a remote filter for continous query using c# client
>>> and i see an error on the console window of my server node: ...platforms
>>> are not available... What could be the reason for that?
>>>
>>
>>


Re: SqlFieldsQuery Cannot create inner bean 'org.apache.ignite.cache.query.SqlFieldsQuery

2018-08-07 Thread ilya.kasnacheev
Hello!

I'm not aware of any other settings to avoid OOME for SQL queries.

Regards,



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: OOME causing caches to be removed

2018-08-07 Thread ilya.kasnacheev
Hello!

My recommendation here is to break this query down into ranges (using LIMIT
or, better yet, a range WHERE condition) and process each range separately.
Otherwise I can see how Ignite would try to keep all the data on heap at
some point.
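A rough sketch of that range-based approach (table and column names are invented for illustration):

```java
import java.util.List;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class RangedQuery {
    // Process a large table in key ranges instead of one huge result set,
    // so only one range's rows are held on heap at a time.
    static void processAll(IgniteCache<?, ?> cache, long maxId, long step) {
        for (long lo = 0; lo < maxId; lo += step) {
            SqlFieldsQuery qry = new SqlFieldsQuery(
                "SELECT id, name FROM Person WHERE id >= ? AND id < ?")
                .setArgs(lo, lo + step);

            for (List<?> row : cache.query(qry))
                process(row);
        }
    }

    static void process(List<?> row) { /* application logic */ }
}
```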

Regards,



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Help needed with BinaryObjectException

2018-08-07 Thread ilya.kasnacheev
Hello!

You mentioned that you have client nodes in test setup. This means that data
has to be serialized to be sent from client to server.

If you only use server nodes, you can configure them in such a fashion that
they never form a cluster but only function individually, and thus you
should avoid rolling-upgrade problems.

Anyway, Apache Ignite stores data off-heap so it has to serialize data when
storing it in cache.

Hope this helps,



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Partitions distribution across nodes

2018-08-07 Thread akash shinde
Hi,
I am loading a cache in partition-aware mode. I have started four nodes.
Out of these four nodes, two are loading only backup partitions and the
other two are loading only primary partitions.

As per my understanding each node should have backup and primary partition
both.

But in my cluster distributions of partitions looks like this

Node     Primary partitions    Backup partitions
NODE 1   518                   0
NODE 2   0                     498
NODE 3   506                   0
NODE 4   0                     503

*Configuration of Cache*

CacheConfiguration<DefaultDataAffinityKey, IpV4AssetGroupData> ipv4AssetGroupDetailCacheCfg =
    new CacheConfiguration<>(CacheName.IPV4_ASSET_GROUP_DETAIL_CACHE.name());
ipv4AssetGroupDetailCacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
ipv4AssetGroupDetailCacheCfg.setWriteThrough(true);
ipv4AssetGroupDetailCacheCfg.setReadThrough(false);
ipv4AssetGroupDetailCacheCfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
ipv4AssetGroupDetailCacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
ipv4AssetGroupDetailCacheCfg.setBackups(1);
ipv4AssetGroupDetailCacheCfg.setIndexedTypes(DefaultDataAffinityKey.class, IpV4AssetGroupData.class);

Factory<IpV4AssetGroupCacheStore> storeFactory =
    FactoryBuilder.factoryOf(IpV4AssetGroupCacheStore.class);
ipv4AssetGroupDetailCacheCfg.setCacheStoreFactory(storeFactory);
ipv4AssetGroupDetailCacheCfg.setCacheStoreSessionListenerFactories(cacheStoreSessionListenerFactory());

RendezvousAffinityFunction affinityFunction = new RendezvousAffinityFunction();
affinityFunction.setExcludeNeighbors(false);
ipv4AssetGroupDetailCacheCfg.setAffinity(affinityFunction);



*Could someone please advise why the distribution of primary and backup
partitions is not balanced?*
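As a hedged diagnostic sketch, the public Affinity API can report how partitions are actually assigned per node (the report format is made up; the method names are from the API):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.cache.affinity.Affinity;
import org.apache.ignite.cluster.ClusterNode;

public class PartitionReport {
    // Print primary/backup partition counts for every server node of a cache.
    static void report(Ignite ignite, String cacheName) {
        Affinity<Object> aff = ignite.affinity(cacheName);

        for (ClusterNode node : ignite.cluster().forServers().nodes()) {
            int primary = aff.primaryPartitions(node).length;
            int backup = aff.backupPartitions(node).length;

            System.out.printf("%s primary=%d backup=%d%n",
                node.consistentId(), primary, backup);
        }
    }
}
```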

*Thanks,*
*Akash*


Re: How to do rolling updates with embedded Ignite and changing models

2018-08-07 Thread ilya.kasnacheev
Hello!

My expectation is that you can also use Binarylizable classes with some
forward compatibility and avoid this problem.

You could also focus on using SQL/binary objects. That should work, but if
it doesn't, please supply code snippets and exception traces.
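A hedged sketch of what forward compatibility with Binarylizable might look like (class and field names are invented): when an older node wrote the object without a newly added field, reading that field yields null rather than failing, so the reader can supply a default.

```java
import org.apache.ignite.binary.BinaryReader;
import org.apache.ignite.binary.BinaryWriter;
import org.apache.ignite.binary.Binarylizable;

public class Person implements Binarylizable {
    private String name;
    private String email; // added in v2; absent in data written by v1 nodes

    @Override public void writeBinary(BinaryWriter writer) {
        writer.writeString("name", name);
        writer.writeString("email", email);
    }

    @Override public void readBinary(BinaryReader reader) {
        name = reader.readString("name");

        // A field the older writer never wrote comes back as null,
        // so fall back to a default instead of breaking on old data.
        String e = reader.readString("email");
        email = e != null ? e : "unknown";
    }
}
```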

Regards,



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: remote filter of continuous query.

2018-08-07 Thread Som Som
Can I use both remote Java and C# filters in this case, or not?

Tue, Aug 7, 2018, 17:13 Ilya Kasnacheev wrote:

> Hello!
>
> My guess is that you need to start all of your nodes (including server
> nodes) as C# nodes in order to use remote filters or a lot of other
> features that run code remotely.
>
> You can probably even use Apache.Ignite.exe for that.
>
> Regards,
>
> --
> Ilya Kasnacheev
>
> 2018-08-07 17:11 GMT+03:00 Som Som <2av10...@gmail.com>:
>
>> Hi.
>>
>> i'm trying to set up a remote filter for continous query using c# client
>> and i see an error on the console window of my server node: ...platforms
>> are not available... What could be the reason for that?
>>
>
>


Re: MySQL cache load causes java.sql.SQLException: GC overhead limit exceeded

2018-08-07 Thread Ilya Kasnacheev
Hello!

This is likely caused by trying to keep all the table data in memory during
data load. Can you share your code so that we could take a look?

Regards,


-- 
Ilya Kasnacheev

2018-08-06 18:00 GMT+03:00 Orel Weinstock (ExposeBox) :

> Hi all,
>
> Changing the MAIN_CLASS env variable and tweaking the default heap size
> (2.7GB) and default data region size (8GB), I'm trying to load a small
> (<4GB) MySQL table into the cache and get a GC overhead limit error. Should
> I increase memory? Is there a configuration I'm missing?
>
> Thanks,
> --
>
> --
> *Orel Weinstock*
> Software Engineer
> Email:o...@exposebox.com 
> Website: www.exposebox.com
>
>


Re: remote filter of continuous query.

2018-08-07 Thread Ilya Kasnacheev
Hello!

My guess is that you need to start all of your nodes (including server
nodes) as C# nodes in order to use remote filters or a lot of other
features that run code remotely.

You can probably even use Apache.Ignite.exe for that.

Regards,

-- 
Ilya Kasnacheev

2018-08-07 17:11 GMT+03:00 Som Som <2av10...@gmail.com>:

> Hi.
>
> i'm trying to set up a remote filter for continous query using c# client
> and i see an error on the console window of my server node: ...platforms
> are not available... What could be the reason for that?
>


remote filter of continuous query.

2018-08-07 Thread Som Som
Hi.

I'm trying to set up a remote filter for a continuous query using the C#
client, and I see an error on the console window of my server node:
"...platforms are not available...". What could be the reason for that?


Re: S3 discovery and bridge networks

2018-08-07 Thread Ilya Kasnacheev
Hello!

Can you have a virtual network containing all of your nodes, so that their
internal addresses will work?

I have to admit I'm not a devops so I don't know any more specifics.
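For reference, the localAddress suggestion quoted below from earlier in the thread would look roughly like this (the address value is illustrative; whether it helps behind a bridge network is exactly the open question here):

```java
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;

public class LocalAddrExample {
    public static IgniteConfiguration config() {
        // Illustrative: the externally reachable address of this container/host.
        String externalAddr = "10.0.0.5";

        TcpDiscoverySpi disco = new TcpDiscoverySpi();
        disco.setLocalAddress(externalAddr);

        TcpCommunicationSpi comm = new TcpCommunicationSpi();
        comm.setLocalAddress(externalAddr);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(disco);
        cfg.setCommunicationSpi(comm);
        return cfg;
    }
}
```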

Regards,

-- 
Ilya Kasnacheev

2018-08-07 17:07 GMT+03:00 Dave Harvey :

> My understanding:  S3 discovery works because the container publishes its
> IP/port in an S3 bucket, and other nodes can read this to determine which
> nodes might be in the cluster.   When running in a container using a bridge
> network, the container does not know the external IP address that can be
> used to reach it, so it doesn't have enough information to publish its IP
> address in S3.
> I'm running in ECS, which starts containers automatically, so I have no
> means to pass in environment variables with additional information.
>
> For this to work, the container would need to be able to determine the
> external identity of its discovery port.   I can pass in the external port
> # that we map to, but not the IP address.
>
>
>
> On Mon, Aug 6, 2018 at 10:29 AM, Ilya Kasnacheev <
> ilya.kasnach...@gmail.com> wrote:
>
>> Hello!
>>
>> Have you tried to specify localAddress for communication and discovery
>> SPIs? If not, can you please elaborate with ifconfig information and stuff?
>>
>> Regards,
>>
>> --
>> Ilya Kasnacheev
>>
>> 2018-08-03 16:53 GMT+03:00 Dave Harvey :
>>
>>> I've been successfully running 2.5 on AWS ECS with host or AWSVPC
>>> networking for the Ignite containers.   Is there any way around the fact
>>> that with bridge networking, the Ignite node registers its unmapped
>>> address on S3?
>>>
>>>
>>> *Disclaimer*
>>>
>>> The information contained in this communication from the sender is
>>> confidential. It is intended solely for use by the recipient and others
>>> authorized to receive it. If you are not the recipient, you are hereby
>>> notified that any disclosure, copying, distribution or taking action in
>>> relation of the contents of this information is strictly prohibited and may
>>> be unlawful.
>>>
>>> This email has been scanned for viruses and malware, and may have been
>>> automatically archived by *Mimecast Ltd*, an innovator in Software as a
>>> Service (SaaS) for business. Providing a *safer* and *more useful*
>>> place for your human generated data. Specializing in; Security, archiving
>>> and compliance. To find out more Click Here
>>> .
>>>
>>
>>
>
>
>


Re: S3 discovery and bridge networks

2018-08-07 Thread Dave Harvey
My understanding:  S3 discovery works because the container publishes its
IP/port in an S3 bucket, and other nodes can read this to determine which
nodes might be in the cluster.   When running in a container using a bridge
network, the container does not know the external IP address that can be
used to reach it, so it doesn't have enough information to publish its IP
address in S3.
I'm running in ECS, which starts containers automatically, so I have no
means to pass in environment variables with additional information.

For this to work, the container would need to be able to determine the
external identity of its discovery port.   I can pass in the external port
# that we map to, but not the IP address.



On Mon, Aug 6, 2018 at 10:29 AM, Ilya Kasnacheev 
wrote:

> Hello!
>
> Have you tried to specify localAddress for communication and discovery
> SPIs? If not, can you please elaborate with ifconfig information and stuff?
>
> Regards,
>
> --
> Ilya Kasnacheev
>
> 2018-08-03 16:53 GMT+03:00 Dave Harvey :
>
>> I've been successfully running 2.5 on AWS ECS with host or AWSVPC
>> networking for the Ignite containers.   Is there any way around the fact
>> that with bridge networking, the Ignite node registers its unmapped
>> address on S3?
>>
>>
>>
>
>



Re: c++ build from source

2018-08-07 Thread Igor Sapego
This is OK; you can safely ignore this message.

Configure scripts generated by autoconf try to delete "core" files during
their work, as they expect that some operations can generate coredump files
on crash, so the script prints this error when it encounters the "core"
directory. More details here: [1].

[1] - https://lists.gnu.org/archive/html/autoconf/2012-12/msg1.html

Best Regards,
Igor


On Tue, Aug 7, 2018 at 4:24 PM Floris Van Nee 
wrote:

> Hi,
>
>
>
> I’m trying to build Apache Ignite C++ from source on Ubuntu. First, I
> downloaded the Ignite 2.6 source code and built the Java part. This was
> successful. Then, I went to the modules/platforms/cpp directory and ran:
>
> libtoolize && aclocal && autoheader && automake --add-missing &&
> autoreconf
>
>
>
> This completed successfully as well. After that, ./configure exited with
> the following error:
>
>
>
> config.status: executing depfiles commands
>
> config.status: executing libtool commands
>
> rm: cannot remove 'core': Is a directory
>
>
>
> (This cannot remove ‘core’ message is displayed more often in the output).
> It seems it is trying to remove the directory ‘core’, but I don’t know why.
> Has anyone seen this problem before?
>
>
>
> -Floris
>


c++ build from source

2018-08-07 Thread Floris Van Nee
Hi,

I'm trying to build Apache Ignite C++ from source on Ubuntu. First, I 
downloaded the Ignite 2.6 source code and built the Java part. This was 
successful. Then, I went to the modules/platforms/cpp directory and ran:
libtoolize && aclocal && autoheader && automake --add-missing && autoreconf

This completed successfully as well. After that, ./configure exited with the 
following error:

config.status: executing depfiles commands
config.status: executing libtool commands
rm: cannot remove 'core': Is a directory

(This cannot remove 'core' message is displayed more often in the output). It 
seems it is trying to remove the directory 'core', but I don't know why. Has 
anyone seen this problem before?

-Floris


Re: Slides of Ignite use cases

2018-08-07 Thread Mauricio Stekl
Hi all,
I have fixed the problem with the links. Now all of them are redirecting to the 
right place.


Best,
Mauricio Stekl


On Aug 7, 2018, 06:57 -0300, Ilya Kasnacheev wrote:
> Hello!
>
> https://www.imcsummit.org/2017/us/sessions/implementation-investment-book-record-ibor-using-apache-ignitegridgain
>
> This link seems to be working.
>
> + dmagda@
>
> Is it possible to correct it from our side?
>
> Regards,
>
> --
> Ilya Kasnacheev
>
> > 2018-08-07 6:45 GMT+03:00 Lijun Cao <641507...@qq.com>:
> > > Hi,
> > >
> > > I want to view some slides of ignite uses cases but I found that the 
> > > slides’ link in Proven Use Cases for Apache 
> > > Ignite™(https://ignite.apache.org/provenusecases.html)  page is not 
> > > available.
> > >
> > > How can I get the slides ?
> > >
> > > Regards,
> > >
> > > BOT
> > >
>


Re: Ignite with POJO persistency in SQLServer

2018-08-07 Thread aealexsandrov
From the Web Console sources I see the following mapping:

{"dbName": "BIT", "dbType": -7, "signed": {"javaType": "Boolean",
"primitiveType": "boolean"}},
{"dbName": "TINYINT", "dbType": -6,
"signed": {"javaType": "Byte", "primitiveType": "byte"},
"unsigned": {"javaType": "Short", "primitiveType": "short"}},
{"dbName": "SMALLINT", "dbType": 5,
"signed": {"javaType": "Short", "primitiveType": "short"},
"unsigned": {"javaType": "Integer", "primitiveType": "int"}},
{"dbName": "INTEGER", "dbType": 4,
"signed": {"javaType": "Integer", "primitiveType": "int"},
"unsigned": {"javaType": "Long", "primitiveType": "long"}},
{"dbName": "BIGINT", "dbType": -5, "signed": {"javaType": "Long",
"primitiveType": "long"}},
{"dbName": "FLOAT", "dbType": 6, "signed": {"javaType": "Float",
"primitiveType": "float"}},
{"dbName": "REAL", "dbType": 7, "signed": {"javaType": "Double",
"primitiveType": "double"}},
{"dbName": "DOUBLE", "dbType": 8, "signed": {"javaType": "Double",
"primitiveType": "double"}},
{"dbName": "NUMERIC", "dbType": 2, "signed": {"javaType":
"BigDecimal"}},
{"dbName": "DECIMAL", "dbType": 3, "signed": {"javaType":
"BigDecimal"}},
{"dbName": "CHAR", "dbType": 1, "signed": {"javaType": "String"}},
{"dbName": "VARCHAR", "dbType": 12, "signed": {"javaType": "String"}},
{"dbName": "LONGVARCHAR", "dbType": -1, "signed": {"javaType":
"String"}},
{"dbName": "DATE", "dbType": 91, "signed": {"javaType": "Date"}},
{"dbName": "TIME", "dbType": 92, "signed": {"javaType": "Time"}},
{"dbName": "TIMESTAMP", "dbType": 93, "signed": {"javaType":
"Timestamp"}},
{"dbName": "BINARY", "dbType": -2, "signed": {"javaType": "byte[]"}},
{"dbName": "VARBINARY", "dbType": -3, "signed": {"javaType": "byte[]"}},
{"dbName": "LONGVARBINARY", "dbType": -4, "signed": {"javaType":
"byte[]"}},
{"dbName": "NULL", "dbType": 0, "signed": {"javaType": "Object"}},
{"dbName": "OTHER", "dbType": 1111, "signed": {"javaType": "Object"}},
{"dbName": "JAVA_OBJECT", "dbType": 2000, "signed": {"javaType":
"Object"}},
{"dbName": "DISTINCT", "dbType": 2001, "signed": {"javaType":
"Object"}},
{"dbName": "STRUCT", "dbType": 2002, "signed": {"javaType": "Object"}},
{"dbName": "ARRAY", "dbType": 2003, "signed": {"javaType": "Object"}},
{"dbName": "BLOB", "dbType": 2004, "signed": {"javaType": "Object"}},
{"dbName": "CLOB", "dbType": 2005, "signed": {"javaType": "String"}},
{"dbName": "REF", "dbType": 2006, "signed": {"javaType": "Object"}},
{"dbName": "DATALINK", "dbType": 70, "signed": {"javaType": "Object"}},
{"dbName": "BOOLEAN", "dbType": 16, "signed": {"javaType": "Boolean",
"primitiveType": "boolean"}},
{"dbName": "ROWID", "dbType": -8, "signed": {"javaType": "Object"}},
{"dbName": "NCHAR", "dbType": -15, "signed": {"javaType": "String"}},
{"dbName": "NVARCHAR", "dbType": -9, "signed": {"javaType": "String"}},
{"dbName": "LONGNVARCHAR", "dbType": -16, "signed": {"javaType":
"String"}},
{"dbName": "NCLOB", "dbType": 2011, "signed": {"javaType": "String"}},
{"dbName": "SQLXML", "dbType": 2009, "signed": {"javaType": "Object"}}

I guess you can use these in your schemas.
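These JDBC-to-Java pairs are what CacheJdbcPojoStore mappings are built from. A hedged sketch of one such mapping (the schema, table, cache, and class names are invented for illustration):

```java
import java.sql.Types;
import org.apache.ignite.cache.store.jdbc.JdbcType;
import org.apache.ignite.cache.store.jdbc.JdbcTypeField;

public class MappingExample {
    // Declare how one DB table maps to a cache key/value class pair.
    static JdbcType personType() {
        JdbcType type = new JdbcType();
        type.setCacheName("personCache");
        type.setDatabaseSchema("dbo");
        type.setDatabaseTable("PERSON");
        type.setKeyType(Long.class);
        type.setValueType("com.example.Person");

        // Each JdbcTypeField pairs a JDBC type/column with a Java type/field.
        type.setKeyFields(new JdbcTypeField(Types.BIGINT, "ID", Long.class, "id"));
        type.setValueFields(
            new JdbcTypeField(Types.VARCHAR, "NAME", String.class, "name"),
            new JdbcTypeField(Types.BIT, "ACTIVE", Boolean.class, "active"));

        return type;
    }
}
```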





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite with POJO persistency in SQLServer

2018-08-07 Thread aealexsandrov
I am not fully sure, but according to the specification of java.sql.Types
you can try to use the following for Java objects:

https://docs.oracle.com/javase/7/docs/api/java/sql/Types.html#JAVA_OBJECT
https://docs.oracle.com/javase/7/docs/api/java/sql/Types.html#OTHER










BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite with POJO persistency in SQLServer

2018-08-07 Thread michal23849
Hi Anrei,

My goal is to map the data using CacheJdbcPojoStore and save the data that I
already have in Ignite to SQL Server.

The data model has embedded classes and I don't know how to map them.
Currently I have the following setup, which works fine, but it only covers a
subset of the data.

[The XML configuration with the CacheJdbcPojoStore type mappings was
stripped from the archive.]

Is it possible to map the firstType (class ListingCode) and the listings
(which is a collection of Listing objects)?

Thanks,
Michal



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Failed to add node to topology because it has the same hash code for partitioned affinity as one of existing nodes

2018-08-07 Thread Вячеслав Коптилин
Hello,

> Hi It is having unique consistentId across cluster.
> Node1 ConsistentId (Server 1): 127.0.0.1:48500..48520 192.168.0.4:48500..48520 192.168.0.5:48500..48520
> Node2 ConsistentId (Server 1): 127.0.0.1:48500..48520 192.168.0.4:48500..48520 192.168.0.5:48500..48520
> Node 3 (Server 2): 127.0.0.1:48500..48520 192.168.0.4:48500..48520 192.168.0.5:48500..48520

Hmm, all these strings are absolutely the same :), and it looks like this is
the root cause of the issue.
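A minimal sketch of assigning an explicit consistentId per node (the value is illustrative; each node in the cluster needs its own distinct id):

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class NodeStartup {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Must differ on every node; "node-1" is just an example value.
        cfg.setConsistentId("node-1");

        Ignition.start(cfg);
    }
}
```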

Thanks.


Tue, Aug 7, 2018 at 13:41, kvenkatramtreddy wrote:

> Hi It is having unique consistentId across cluster. All nodes running for
> some time, it is happening after some time. Please see the discoverySpi and
> consistenId details below. *Node1 ConsistentId (Server 1):*
> 127.0.0.1:48500..48520 192.168.0.4:48500..48520 192.168.0.5:48500..48520 Node2
> ConsistentId (Server 1): 127.0.0.1:48500..48520 192.168.0.4:48500..48520
> 192.168.0.5:48500..48520 Node 3(server2): 127.0.0.1:48500..48520
> 192.168.0.4:48500..48520 192.168.0.5:48500..48520
> --
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Ignite with POJO persistency in SQLServer

2018-08-07 Thread aealexsandrov
Hi,

You have a POJO class your.package.EquityClass with the following fields:

Long equityID;
private ListingCode firstCode;
private String equityName;
private String equityType;
private String equityClass;
private Set<Listing> listings;

and you are going to store it in Ignite. In this case, you can do it like
this:

[The XML configuration with the JdbcType mapping (key field equityID) was
stripped from the archive.]
After that, you can use it like this:

IgniteCache cache = ignite.getOrCreateCache("CACHE_NAME");

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Failed to add node to topology because it has the same hash code for partitioned affinity as one of existing nodes

2018-08-07 Thread kvenkatramtreddy
Hi, it has a unique consistentId across the cluster. All nodes run for some
time; the problem happens after a while. Please see the discoverySpi and
consistentId details below.
*Node1 ConsistentId (Server 1):*



 
127.0.0.1:48500..48520   
192.168.0.4:48500..48520   
192.168.0.5:48500..48520

Node2 ConsistentId (Server 1):



 
127.0.0.1:48500..48520   
192.168.0.4:48500..48520   
192.168.0.5:48500..48520

Node 3(server2):



 
127.0.0.1:48500..48520   
192.168.0.4:48500..48520   
192.168.0.5:48500..48520




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/

Re: big wal/archive file,but node file is small

2018-08-07 Thread aealexsandrov
Hi,

First of all read about the WAL:

https://apacheignite.readme.io/docs/write-ahead-log
https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Persistent+Store+-+under+the+hood#IgnitePersistentStore-underthehood-WALHistorySize

WAL work maximum used size: walSegmentSize * walSegments = 640MB (default).
That is the size of your default WAL directory. The WAL archive is big
because it contains all the information about the previous segments.

It looks like at the moment the archive can't be removed safely until the
following issue is fixed (the fix should be available in Ignite 2.7):

https://issues.apache.org/jira/browse/IGNITE-7912

However, you can try to optimize your WAL usage:

1) Disable WAL archiving:

https://apacheignite.readme.io/docs/write-ahead-log#section-disabling-wal-archiving

2) You can try to turn WAL logging off during some operations. This is
especially useful while loading data.

You can turn WAL logging off/on for a required cache using:

1)Java code
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteCluster.html#disableWal-java.lang.String-

2)SQL:

ALTER TABLE tablename NOLOGGING - turn off
ALTER TABLE tablename LOGGING - turn on
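
To illustrate option 1, WAL archiving can be disabled in the node configuration
by pointing the WAL work and archive paths at the same directory (a sketch with
placeholder paths, not a tested setup):

```
<property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
        <!-- Placeholder path: setting walPath and walArchivePath to the same
             directory disables WAL archiving, per the doc link above. -->
        <property name="walPath" value="/ignite/wal"/>
        <property name="walArchivePath" value="/ignite/wal"/>
    </bean>
</property>
```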

BR,
Andrei






Ignite with POJO persistency in SQLServer

2018-08-07 Thread michal23849
Hi,

My Ignite cluster stores complex object classes that contain not only generic
Java data types, but also other classes and collections of classes.

I managed to set up my PojoStore so that it successfully writes (write-behind)
data to a SQL Server table for all generic data types.

However, I see no examples or guidelines on how to map the embedded objects to
separate SQL Server tables (or to the same table).

Please advise.

My EquityClass has the following fields: 

Long equityID; 
private ListingCode firstCode; 
private String equityName; 
private String equityType; 
private String equityClass; 
private Set<Listing> listings; 

I have a problem mapping firstCode and listings (a collection of Listing
objects).

Is it possible? If so, could you share any examples of mapping such embedded
objects into SQL databases?

Regards, 
Michal





Re: Slides of Ignite use cases

2018-08-07 Thread Ilya Kasnacheev
Two more links are
https://www.imcsummit.org/2017/us/sessions/how-in-memory-solutions-can-assist-saas-integrations
and
https://www.imcsummit.org/2017/us/sessions/ignite-compute-grid-in-cloud

Regards,

-- 
Ilya Kasnacheev

2018-08-07 12:57 GMT+03:00 Ilya Kasnacheev :

> Hello!
>
> https://www.imcsummit.org/2017/us/sessions/implementation-investment-
> book-record-ibor-using-apache-ignitegridgain
>
> This link seems to be working.
>
> + dmagda@
>
> Is it possible to correct it from our side?
>
> Regards,
>
> --
> Ilya Kasnacheev
>
> 2018-08-07 6:45 GMT+03:00 Lijun Cao <641507...@qq.com>:
>
>> Hi,
>>
> I want to view some slides of Ignite use cases, but I found that the
> slides’ link on the *Proven Use Cases for Apache Ignite™*
> (https://ignite.apache.org/provenusecases.html) page is not available.
>>
>> How can I get the slides ?
>>
>> Regards,
>>
>> BOT
>>
>>
>


Re: Failed to add node to topology because it has the same hash code for partitioned affinity as one of existing nodes

2018-08-07 Thread Вячеслав Коптилин
Hello,

It seems that there are nodes that have the same value of consistentId
property.
Please try to set IgniteConfiguration.setConsistentId to a unique value
cluster-wide.
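
As a minimal sketch, assuming Spring XML configuration is used, the property
could be set like this (the value "node-1" is illustrative and must differ on
every node):

```
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Illustrative value; must be unique cluster-wide (node-2, node-3, ...). -->
    <property name="consistentId" value="node-1"/>
    <!-- ... the rest of the node configuration ... -->
</bean>
```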

Thanks.

вт, 7 авг. 2018 г. в 7:27, kvenkatramtreddy :

> Hi Team,
>
> I have configured my all caches replicated mode with native persistence
> enable and running it on 3 nodes. 2 nodes runs on same server and another
> node run on different server.
>
> I have configured unique consistentId for each node and unique IGNITE_HOME.
> I also configured
> 
> 
> class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
>  value="true"/>
> 
> 
>
> We receive the "Failed to add node to topology" error after some time.
> Could you please help us resolve this issue?
>
> igniteSamehashError.txt
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t1700/igniteSamehashError.txt>
>
>
> Thanks & Regards,
> Venkat
>
>
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Slides of Ignite use cases

2018-08-07 Thread Ilya Kasnacheev
Hello!

https://www.imcsummit.org/2017/us/sessions/implementation-investment-book-record-ibor-using-apache-ignitegridgain

This link seems to be working.

+ dmagda@

Is it possible to correct it from our side?

Regards,

-- 
Ilya Kasnacheev

2018-08-07 6:45 GMT+03:00 Lijun Cao <641507...@qq.com>:

> Hi,
>
> I want to view some slides of Ignite use cases, but I found that the
> slides’ link on the *Proven Use Cases for Apache Ignite™*
> (https://ignite.apache.org/provenusecases.html) page is not available.
>
> How can I get the slides ?
>
> Regards,
>
> BOT
>
>


Re: IgniteRepository Custom Config

2018-08-07 Thread Ilya Kasnacheev
Hello!

As per the example, you can specify the cache configuration when starting the
Ignite node and refer to that cache by name in the Spring Data annotation:
https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/springdata/PersonRepository.java
https://apacheignite-mix.readme.io/docs/spring-data
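
As a sketch, assuming Spring XML configuration and a repository annotated with
@RepositoryConfig(cacheName = "PersonCache") (the cache name is illustrative),
the cache could be declared as:

```
<property name="cacheConfiguration">
    <bean class="org.apache.ignite.configuration.CacheConfiguration">
        <!-- Must match the cacheName used in the @RepositoryConfig annotation. -->
        <property name="name" value="PersonCache"/>
        <property name="atomicityMode" value="TRANSACTIONAL"/>
        <property name="cacheMode" value="REPLICATED"/>
    </bean>
</property>
```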

Regards,

-- 
Ilya Kasnacheev

2018-08-07 8:39 GMT+03:00 Jack Lever :

> Hi All,
>
> Can anyone tell me if there's a way to change the atomicity mode from ATOMIC
> to TRANSACTIONAL and set the cache mode to REPLICATED for a cache created
> with the @IgniteRepository annotation? Or otherwise generally configure a
> cache started with the IgnRepo annotation?
>
> Thanks,
> Jack.
>


Re: Ignite.NET how to cancel tasks

2018-08-07 Thread Вячеслав Коптилин
Hello,

Have you checked Ignite log files? Do they contain anything suspicious?
I just checked TeamCity and it seems that CancellationTest (that I
mentioned above) is OK.

Thanks,
S.


вт, 7 авг. 2018 г. в 9:47, Maksym Ieremenko :

> Hi Slava,
>
>
>
> >> > using (var ignite = Ignition.Start())
>
> >> Is it possible that Ignite node was closed before the cancellation
> request was processed by an instance of SimpleJob? Could you please check
> that fact?
>
> No.
>
>
>
> I double checked: the main thread hangs on
>
> cts.Cancel(); // CancellationTokenSource.Cancel()
>
>
>
> so, the next lines of code will never be reached, stack:
>
>
>
>
> Apache.Ignite.Core.dll!Apache.Ignite.Core.Impl.Unmanaged.Jni.Env.CallLongMethod
>
> [Managed to Native Transition]
>
> Apache.Ignite.Core.dll!Apache.Ignite.Core.Impl.Unmanaged.Jni.Env.CallLongMethod(Apache.Ignite.Core.Impl.Unmanaged.Jni.GlobalRef
> obj, System.IntPtr methodId, long* argsPtr)
>
> Apache.Ignite.Core.dll!Apache.Ignite.Core.Impl.Unmanaged.UnmanagedUtils.TargetInLongOutLong(Apache.Ignite.Core.Impl.Unmanaged.Jni.GlobalRef
> target, int opType, long memPtr)
>
> Apache.Ignite.Core.dll!Apache.Ignite.Core.Impl.PlatformJniTarget.InLongOutLong(int
> type, long val)
>
>
> Apache.Ignite.Core.dll!Apache.Ignite.Core.Impl.Common.Future.OnTokenCancel()
>
> mscorlib.dll!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext
> executionContext, System.Threading.ContextCallback callback, object state,
> bool preserveSyncCtx)
>
> mscorlib.dll!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext
> executionContext, System.Threading.ContextCallback callback, object state,
> bool preserveSyncCtx)
>
> mscorlib.dll!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext
> executionContext, System.Threading.ContextCallback callback, object state)
>
> mscorlib.dll!System.Threading.CancellationCallbackInfo.ExecuteCallback()
>
> mscorlib.dll!System.Threading.CancellationTokenSource.ExecuteCallbackHandlers(bool
> throwOnFirstException = false)
>
> mscorlib.dll!System.Threading.CancellationTokenSource.NotifyCancellation(bool
> throwOnFirstException)
>
> mscorlib.dll!System.Threading.CancellationTokenSource.Cancel()
>
> CancellationDemo.exe!CancellationDemo.Program.RunAsync() Line 49
>
> [Resuming Async Method]
>
> mscorlib.dll!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext
> executionContext, System.Threading.ContextCallback callback, object state,
> bool preserveSyncCtx)
>
> mscorlib.dll!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext
> executionContext, System.Threading.ContextCallback callback, object state,
> bool preserveSyncCtx)
>
>
> mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.MoveNextRunner.Run()
>
>
> mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.OutputAsyncCausalityEvents.AnonymousMethod__0()
>
>
> mscorlib.dll!System.Runtime.CompilerServices.TaskAwaiter.OutputWaitEtwEvents.AnonymousMethod__0()
>
> mscorlib.dll!System.Threading.Tasks.AwaitTaskContinuation.RunOrScheduleAction(System.Action
> action, bool allowInlining, ref System.Threading.Tasks.Task currentTask =
> null)
>
> mscorlib.dll!System.Threading.Tasks.Task.FinishContinuations()
>
> mscorlib.dll!System.Threading.Tasks.Task.TrySetResult(System.Threading.Tasks.VoidTaskResult
> result)
>
> mscorlib.dll!System.Threading.Tasks.Task.DelayPromise.Complete()
>
> mscorlib.dll!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext
> executionContext, System.Threading.ContextCallback callback, object state,
> bool preserveSyncCtx)
>
> mscorlib.dll!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext
> executionContext, System.Threading.ContextCallback callback, object state,
> bool preserveSyncCtx)
>
> mscorlib.dll!System.Threading.TimerQueueTimer.CallCallback()
>
> mscorlib.dll!System.Threading.TimerQueueTimer.Fire()
>
> mscorlib.dll!System.Threading.TimerQueue.FireNextTimers()
>
>
>
> Best regards,
>
> Max
>


Re: OOM on connecting to Ignite via JDBC

2018-08-07 Thread Denis Mekhanikov
Orel,

Could you show your Ignite configuration? I'd like to make sure that you
configured the JDBC port correctly.
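
For reference, assuming the default ClientConnectorConfiguration, a thin-driver
URL would use port 10800 rather than 8080:

```
jdbc:ignite:thin://127.0.0.1:10800
```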

Denis

пн, 6 авг. 2018 г. в 17:54, Orel Weinstock (ExposeBox) :

> This is the correct port - I've set it manually. I've used the same
> configuration with the web console for inserting SQL rows and it works
> great with write-through.
>
> On 6 August 2018 at 16:30, Павлухин Иван  wrote:
>
>> Hi Orel,
>>
>> Are you sure that the correct port is used? By default, port 10800 is used
>> for JDBC connections. You have 8080 in your command line.
>>
>> The error could be caused by reading unexpected input from the server and
>> interpreting it as a very large packet size. An attempt to allocate a buffer
>> of that size could simply run into an OOME.
>>
>> 2018-08-06 11:56 GMT+03:00 Orel Weinstock (ExposeBox) > >:
>>
>>> I've followed the guide on setting up DBeaver to work with Ignite - I've
>>> set up a driver in DBeaver by selecting a class from the ignite-core jar,
>>> both version 2.6.0
>>>
My cluster is up and running now (e.g. write-through works) since I've
added the MySQL JDBC driver to the (web-console-generated) pom.xml's
dependencies, but I still can't connect to Ignite via DBeaver.
>>>
>>> On 6 August 2018 at 11:17, Denis Mekhanikov 
>>> wrote:
>>>
 Orel,

 JDBC driver fails on handshake for some reason.
 It fails with OOM when trying to allocate a byte array for the
 handshake message.
 But there is not much data transferred in it. Most probably, message
 size is read improperly.

 Do you use matching versions of JDBC driver and Ignite nodes?

 Denis


 вс, 5 авг. 2018 г. в 11:01, Orel Weinstock (ExposeBox) <
 o...@exposebox.com>:

> Hi all,
>
> Trying to get an Ignite cluster up and going for testing before taking
> it to production.
> I've set up Ignite 2.6 on a cluster with a single node on a Google
> Cloud Compute instance and I have the web console working as well.
>
> I've imported a table from MySQL and re-run the cluster with the
> resulting Docker image.
>
> Querying for the table via the web console proved fruitless, so I've
> switched to SQLLine (on the cluster itself). Still no cigar:
>
> moo@ignite:/home/moo$ /usr/share/apache-ignite/bin/sqlline.sh --verbose=true
> -u jdbc:ignite:thin://127.0.0.1:8080
> issuing: !connect jdbc:ignite:thin://127.0.0.1:8080 '' ''
> org.apache.ignite.IgniteJdbcThinDriver
> Connecting to jdbc:ignite:thin://127.0.0.1:8080
> java.lang.OutOfMemoryError: Java heap space at
> org.apache.ignite.internal.jdbc.thin.JdbcThinTcpIo.read(JdbcThinTcpIo.java:586)
> at
> org.apache.ignite.internal.jdbc.thin.JdbcThinTcpIo.read(JdbcThinTcpIo.java:575)
> at
> org.apache.ignite.internal.jdbc.thin.JdbcThinTcpIo.handshake(JdbcThinTcpIo.java:328)
> at
> org.apache.ignite.internal.jdbc.thin.JdbcThinTcpIo.start(JdbcThinTcpIo.java:223)
> at
> org.apache.ignite.internal.jdbc.thin.JdbcThinTcpIo.start(JdbcThinTcpIo.java:144)
> at
> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.ensureConnected(JdbcThinConnection.java:148)
> at
> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.(JdbcThinConnection.java:137)
> at
> org.apache.ignite.IgniteJdbcThinDriver.connect(IgniteJdbcThinDriver.java:157)
> at sqlline.DatabaseConnection.connect(DatabaseConnection.java:156) at
> sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:204)
> at sqlline.Commands.connect(Commands.java:1095) at
> sqlline.Commands.connect(Commands.java:1001) at
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498) at
> sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
> at sqlline.SqlLine.dispatch(SqlLine.java:791) at
> sqlline.SqlLine.initArgs(SqlLine.java:566) at
> sqlline.SqlLine.begin(SqlLine.java:643) at
> sqlline.SqlLine.start(SqlLine.java:373) at
> sqlline.SqlLine.main(SqlLine.java:265)
>
> Tried DBeaver - still OOM.
>
> Is there a way to get a list of all tables in the cache?
> Does anyone have any experience with this error? I can't tell if it's
> Ignite itself or just the JDBC client, though I'm leaning towards the
> client.
>
>
> --
>
> --
> *Orel Weinstock*
> Software Engineer
> Email:o...@exposebox.com 
> Website: www.exposebox.com
>
>
>>>
>>>
>>> --
>>>
>>> --
>>> *Orel Weinstock*
>>> Software Engineer
>>> Email:o...@exposebox.com 
>>> Website: www.exposebox.com
>>>
>>>
>>
>>
>> --
>> Best regards,
>> Ivan Pavlukhin
>>
>
>
>
> --
>
> --
> *Orel Weinstock*
> Software Engineer

Re: SEVERE: Failed to resolve default logging config file: config/java.util.logging.properties

2018-08-07 Thread Evgenii Zhuravlev
Can you share this project so the community can reproduce it?

2018-08-07 11:04 GMT+03:00 monstereo :

> No, I am not using Spark; it is a plain Java project.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: SEVERE: Failed to resolve default logging config file: config/java.util.logging.properties

2018-08-07 Thread monstereo
No, I am not using Spark; it is a plain Java project.





Re: SEVERE: Failed to resolve default logging config file: config/java.util.logging.properties

2018-08-07 Thread Evgenii Zhuravlev
This just means that Ignite doesn't see the file.

>when I run the jar file via:
>java -DIGNITE_HOME=somePath/toBin//apache-ignite-fabric-2.5.0-bin/ -jar
>DifferentHosts-1.0-SNAPSHOT.one-jar.jar

>give me this error:
>java.lang.ClassNotFoundException:
>org.apache.ignite.logger.java.JavaLoggerFileHandler

It looks like in this case it does see the file, so you can use this approach.
Do you use Spark?

2018-08-07 10:49 GMT+03:00 monstereo :

> Same error. When I run the jar file with
> java -DIGNITE_LOG_DIR=somePath/toBin/apache-ignite-fabric-2.5.0-bin/config
> -jar DifferentHosts-1.0-SNAPSHOT.one-jar.jar
> it gives me the same error: SEVERE: Failed to resolve default logging config
> file: config/java.util.logging.properties
> file: config/java.util.logging.properties
>
> or this
> java -DIGNITE_LOG_DIR=somePath/toBin/apache-ignite-fabric-2.
> 5.0-bin/config/
> -jar DifferentHosts-1.0-SNAPSHOT.one-jar.jar
>
> I have also copied logging.properties.config file to simple directory log
> java -DIGNITE_LOG_DIR=somePath/Desktop/log/
> -jar DifferentHosts-1.0-SNAPSHOT.one-jar.jar
>
> does not work
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: SEVERE: Failed to resolve default logging config file: config/java.util.logging.properties

2018-08-07 Thread monstereo
Same error. When I run the jar file with
java -DIGNITE_LOG_DIR=somePath/toBin/apache-ignite-fabric-2.5.0-bin/config
-jar DifferentHosts-1.0-SNAPSHOT.one-jar.jar
it gives me the same error: SEVERE: Failed to resolve default logging config
file: config/java.util.logging.properties

or this 
java -DIGNITE_LOG_DIR=somePath/toBin/apache-ignite-fabric-2.5.0-bin/config/
-jar DifferentHosts-1.0-SNAPSHOT.one-jar.jar

I have also copied the logging properties file to a plain directory, log:
java -DIGNITE_LOG_DIR=somePath/Desktop/log/
-jar DifferentHosts-1.0-SNAPSHOT.one-jar.jar

does not work





Re: SEVERE: Failed to resolve default logging config file: config/java.util.logging.properties

2018-08-07 Thread Evgenii Zhuravlev
-DIGNITE_LOG_DIR=somePath/toBin/apache-ignite-fabric-2.5.0-bin/config/java.util.logging.properties

IGNITE_LOG_DIR should be a directory, not a full path to a file.

2018-08-07 10:32 GMT+03:00 monstereo :

> no it does not work
> there is a java.util.logging.properties file in my
> apache-ignite-fabric-2.5.0-bin/config/
>
> when I run the jar file via:
> java
> -DIGNITE_LOG_DIR=somePath/toBin/apache-ignite-fabric-2.
> 5.0-bin/config/java.util.logging.properties
> -jar DifferentHosts-1.0-SNAPSHOT.one-jar.jar
> give me the same error SEVERE: Failed to resolve 
>
> when I run the jar file via:
> java -DIGNITE_HOME=somePath/toBin//apache-ignite-fabric-2.5.0-bin/ -jar
> DifferentHosts-1.0-SNAPSHOT.one-jar.jar
>
> give me this error:
> java.lang.ClassNotFoundException:
> org.apache.ignite.logger.java.JavaLoggerFileHandler
>
>
> As I said early, when I run in intellij , there is no problem
>
>
>
> ezhuravlev wrote
> > well, because you need to have config/java.util.logging.properties in
> > IGNITE_HOME directory, or you can configure path to this file explicitly
> > using IGNITE_LOG_DIR
> >
> > 2018-08-07 10:18 GMT+03:00 monstereo <
>
> > mehmetozanguven@
>
> > >:
> >
> >> thanks,
> >> I can solve this error via adding -DIGNITE_HOME in my vm option
> >> But when I convert to the jar file, can not see the -DIGNITE_HOME
> >> even if I say:
> >> java -DIGNITE_HOME=pathToBin -jar jarName
> >>
> >>
> >>
> >> --
> >> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
> >>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Public thread pool starvation detected

2018-08-07 Thread Evgenii Zhuravlev
Hi,

What kind of compute jobs do you run? Do you start new jobs inside jobs?
Can you share thread dumps?
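
If the starvation turns out to be caused by jobs that block while waiting on
other jobs in an undersized public pool, one mitigation to experiment with is
enlarging the pool; a sketch in Spring XML (the value 16 is illustrative):

```
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Default is max(8, number of CPU cores); 16 is only an illustrative value. -->
    <property name="publicThreadPoolSize" value="16"/>
</bean>
```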

Evgenii

2018-08-07 1:48 GMT+03:00 boomi :

> Hello,
>
> We are having a possible deadlock issue with Apache Ignite .NET 2.5.0. We
> have set up a cluster with 5 server nodes and 1 client node. We try to
> execute an ICompute action from the client node, and one server node's log
> contains a single line indicating some kind of thread pool starvation, shown
> below:
>
> Line 586: [22:14:34,834][WARNING][grid-timeout-worker-#23][IgniteKernal]
> Possible thread pool starvation detected (no task completed in last 30000ms,
> is public thread pool size large enough?)
>
> All nodes, including the client node, become unresponsive when this happens.
> We've been stuck with this problem, and any help resolving it would be
> appreciated.
>
> We set up the cluster on a VM with 4 CPU cores and 32 GB of memory.
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: SEVERE: Failed to resolve default logging config file: config/java.util.logging.properties

2018-08-07 Thread monstereo
No, it does not work.
There is a java.util.logging.properties file in my
apache-ignite-fabric-2.5.0-bin/config/ directory.

when I run the jar file via:
java
-DIGNITE_LOG_DIR=somePath/toBin/apache-ignite-fabric-2.5.0-bin/config/java.util.logging.properties
-jar DifferentHosts-1.0-SNAPSHOT.one-jar.jar
it gives me the same error: SEVERE: Failed to resolve 

when I run the jar file via:
java -DIGNITE_HOME=somePath/toBin//apache-ignite-fabric-2.5.0-bin/ -jar
DifferentHosts-1.0-SNAPSHOT.one-jar.jar

it gives me this error:
java.lang.ClassNotFoundException:
org.apache.ignite.logger.java.JavaLoggerFileHandler


As I said earlier, when I run it in IntelliJ there is no problem.



ezhuravlev wrote
> well, because you need to have config/java.util.logging.properties in
> IGNITE_HOME directory, or you can configure path to this file explicitly
> using IGNITE_LOG_DIR
> 
> 2018-08-07 10:18 GMT+03:00 monstereo <

> mehmetozanguven@

> >:
> 
>> thanks,
>> I can solve this error via adding -DIGNITE_HOME in my vm option
>> But when I convert to the jar file, can not see the -DIGNITE_HOME
>> even if I say:
>> java -DIGNITE_HOME=pathToBin -jar jarName
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>







Re: SEVERE: Failed to resolve default logging config file: config/java.util.logging.properties

2018-08-07 Thread Evgenii Zhuravlev
Well, that's because you need to have config/java.util.logging.properties in
the IGNITE_HOME directory; alternatively, you can configure the path to this
file explicitly using IGNITE_LOG_DIR.
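
A sketch of both options, with placeholder paths, based on the advice in this
thread:

```
# Option 1: IGNITE_HOME points at the unpacked distribution, which contains
# config/java.util.logging.properties (path is a placeholder):
java -DIGNITE_HOME=/opt/apache-ignite-fabric-2.5.0-bin \
     -jar DifferentHosts-1.0-SNAPSHOT.one-jar.jar

# Option 2 (as suggested above): IGNITE_LOG_DIR points at the directory that
# holds the logging configuration, not at the file itself:
java -DIGNITE_LOG_DIR=/opt/apache-ignite-fabric-2.5.0-bin/config \
     -jar DifferentHosts-1.0-SNAPSHOT.one-jar.jar
```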

2018-08-07 10:18 GMT+03:00 monstereo :

> thanks,
> I can solve this error via adding -DIGNITE_HOME in my vm option
> But when I convert to the jar file, can not see the -DIGNITE_HOME
> even if I say:
> java -DIGNITE_HOME=pathToBin -jar jarName
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: SEVERE: Failed to resolve default logging config file: config/java.util.logging.properties

2018-08-07 Thread monstereo
Thanks,
I can solve this error by adding -DIGNITE_HOME to my VM options.
But when I package it as a jar file, the -DIGNITE_HOME setting is not picked
up, even when I run:
java -DIGNITE_HOME=pathToBin -jar jarName





Re: Caused by: java.lang.ClassNotFoundException: org.apache.ignite.logger.slf4j.Slf4jLogger

2018-08-07 Thread Evgenii Zhuravlev
I've already answered that question too

2018-08-07 10:15 GMT+03:00 monstereo :

> I have solved this one.
> I am looking for an answer to this:
> Failed-to-resolve-default-logging-config-file-config-java-util-logging-properties-td23212.html
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Caused by: java.lang.ClassNotFoundException: org.apache.ignite.logger.slf4j.Slf4jLogger

2018-08-07 Thread monstereo
I have solved this one.
I am looking for an answer to this:
Failed-to-resolve-default-logging-config-file-config-java-util-logging-properties-td23212.html

