RE: TLS/SSL using Ignite .NET Nodes

2023-05-30 Thread satyajit.mandal.barclays.com via user
Hi Team,

The configuration below works with .NET nodes.

Thanks
Satyajit



From: Mandal, Satyajit: IT (PUN)
Sent: Tuesday, May 30, 2023 7:10 PM
To: user@ignite.apache.org
Subject: TLS/SSL using Ignite .NET Nodes

Hi Team,

Do we have any example of TLS/SSL settings for a thick server node (.NET)?
Can we set the protocol below for a .NET server node too?

var factory = new SslContextFactory();
factory.TrustStoreFilePath = "C:\\Cert\\trust.jks";
factory.KeyStoreFilePath = "C:\\Cert\\node.jks";
factory.TrustStorePassword = "xxx";
factory.KeyStorePassword = "yyy";
factory.Protocol = "TLSv1.3";
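For context, on the .NET side the factory is attached to the node configuration itself. A minimal sketch (not the poster's actual configuration, which was not included in the archive), assuming the SslContextFactory type and the IgniteConfiguration.SslContextFactory property available in recent Ignite.NET versions; paths and passwords are placeholders:

```csharp
using Apache.Ignite.Core;
using Apache.Ignite.Core.Ssl;

var cfg = new IgniteConfiguration
{
    // TLS for node-to-node discovery/communication on a thick server node.
    SslContextFactory = new SslContextFactory
    {
        KeyStoreFilePath = "C:\\Cert\\node.jks",     // this node's key pair
        KeyStorePassword = "yyy",
        TrustStoreFilePath = "C:\\Cert\\trust.jks",  // trusted CA certificates
        TrustStorePassword = "xxx",
        Protocol = "TLSv1.3"
    }
};

using var ignite = Ignition.Start(cfg);
```

Note that the keystores are JKS files even from .NET, because the thick node's SSL layer is backed by the embedded JVM.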


Regards
Satyajit

Barclays Execution Services Limited registered in England. Registered No. 
1767980. Registered office: 1 Churchill Place, London, E14 5HP

Barclays Execution Services Limited provides support and administrative 
services across Barclays group. Barclays Execution Services Limited is an 
appointed representative of Barclays Bank UK plc, Barclays Bank plc and 
Clydesdale Financial Services Limited. Barclays Bank UK plc and Barclays Bank 
plc are authorised by the Prudential Regulation Authority and regulated by the 
Financial Conduct Authority and the Prudential Regulation Authority. Clydesdale 
Financial Services Limited is authorised and regulated by the Financial Conduct 
Authority.

This email and any attachments are confidential and intended solely for the 
addressee and may also be privileged or exempt from disclosure under applicable 
law. If you are not the addressee, or have received this email in error, please 
notify the sender and immediately delete it and any attachments from your 
system. Do not copy, use, disclose or otherwise act on any part of this email 
or its attachments.

Internet communications are not guaranteed to be secure or virus-free. The 
Barclays group does not accept responsibility for any loss arising from 
unauthorised access to, or interference with, any internet communications by 
any third party, or from the transmission of any viruses. Replies to this email 
may be monitored by the Barclays group for operational or business reasons.

Any opinion or other information in this email or its attachments that does not 
relate to the business of the Barclays group is personal to the sender and is 
not given or endorsed by the Barclays group.

Unless specifically indicated, this e-mail is not an offer to buy or sell or a 
solicitation to buy or sell any securities, investment products or other 
financial product or service, an official confirmation of any transaction, or 
an official statement of Barclays.


TLS/SSL using Ignite .NET Nodes

2023-05-30 Thread satyajit.mandal.barclays.com via user
Hi Team,

Do we have any example of TLS/SSL settings for a thick server node (.NET)?
Can we set the protocol below for a .NET server node too?

var factory = new SslContextFactory();
factory.TrustStoreFilePath = "C:\\Cert\\trust.jks";
factory.KeyStoreFilePath = "C:\\Cert\\node.jks";
factory.TrustStorePassword = "xxx";
factory.KeyStorePassword = "yyy";
factory.Protocol = "TLSv1.3";


Regards
Satyajit



Re: Storage Exception using Ignite

2023-03-02 Thread Gianluca Bonetti
Hello

It might be any number of things, starting with no space available on your
disk or partition, or permissions or other constraints imposed on your
workstation by your organization.
I suggest you check this with your internal support team to rule out any
permission issue on your development machine.
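The free-space and write-permission checks can be scripted directly against the node's work directory. A minimal sketch (the directory used here is a placeholder; point it at the path from the error message, e.g. the work\db directory):

```java
import java.io.File;
import java.io.IOException;

public class WorkDirCheck {

    /** Usable space in MB on the partition holding the given directory. */
    static long usableSpaceMb(File dir) {
        return dir.getUsableSpace() / (1024 * 1024);
    }

    /** True if the current user can create a file in the given directory. */
    static boolean isWritable(File dir) {
        try {
            File probe = File.createTempFile("ignite-probe", ".bin", dir);
            probe.delete();
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Placeholder: substitute your Ignite work directory,
        // e.g. new File("D:\\Data\\apache-ignite\\work\\db").
        File workDir = new File(System.getProperty("java.io.tmpdir"));

        System.out.println("Usable space (MB): " + usableSpaceMb(workDir));
        System.out.println("Writable: " + isWritable(workDir));
    }
}
```

If either check fails, free up disk space or fix the directory permissions before restarting the node.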

I am not affiliated with Ignite/GridGain, but if you need support this
frequently and this quickly, I strongly encourage you to subscribe to a
support plan; I am confident your organization has the budget for it.

Cheers
Gianluca


On Thu, 2 Mar 2023 at 08:47, satyajit.mandal.barclays.com via user <
user@ignite.apache.org> wrote:

> Hi team,
>
>
>
> Any update on this? What might be causing this issue while
> storing data in Ignite with persistence enabled?
>
>
>
> class o.a.i.i.processors.cache.persistence.StorageException: Failed to
> initialize partition file
>
>
>
> Thanks
>
> Satyajit
>
>
>
>
>
>
>
> *From:* Mandal, Satyajit: IT (PUN)
> *Sent:* Thursday, March 2, 2023 7:00 AM
> *To:* 'user@ignite.apache.org' 
> *Subject:* Storage Exception using Ignite
>
>
>
> Hi Team,
>
>
>
> I am getting the storage exception below while loading a large table
> into the cache. I am running Ignite on Windows.
>
>
>
> 17:29:29,731][SEVERE][exchange-worker-#62][] JVM will be halted
> immediately due to the failure: [failureCtx=FailureContext
> [type=CRITICAL_ERROR, err=class
> o.a.i.i.processors.cache.persistence.StorageException: Failed to initialize
> partition file:
> D:\Data\apache-ignite\work\db\node00-382c3dd6-8cca-4469-b041-bd2f24c31ab3\cache-unittest.RuleGroup\index.bin]]
>
>
>
> Could you please suggest if any setting is missing?
>
>
>
> Thanks
>
> Satyajit
>


Re: Storage Exception using Ignite

2023-03-02 Thread Stephen Darlington
This is a community forum. If you need support with SLAs there are commercial 
options available.

I think you’ll need to share more of your stack trace for someone to determine 
what the issue is.

> On 2 Mar 2023, at 08:47, satyajit.mandal.barclays.com via user 
>  wrote:
> 
> Hi team,
>  
> Any update on this? What might be causing this issue while storing
> data in Ignite with persistence enabled?
>  
> class o.a.i.i.processors.cache.persistence.StorageException: Failed to 
> initialize partition file
>  
> Thanks
> Satyajit
>  
>  
>  
> From: Mandal, Satyajit: IT (PUN) 
> Sent: Thursday, March 2, 2023 7:00 AM
> To: 'user@ignite.apache.org' 
> Subject: Storage Exception using Ignite
>  
> Hi Team,
>  
> I am getting the storage exception below while loading a large table into
> the cache. I am running Ignite on Windows.
>  
> 17:29:29,731][SEVERE][exchange-worker-#62][] JVM will be halted immediately 
> due to the failure: [failureCtx=FailureContext [type=CRITICAL_ERROR, 
> err=class o.a.i.i.processors.cache.persistence.StorageException: Failed to 
> initialize partition file: 
> D:\Data\apache-ignite\work\db\node00-382c3dd6-8cca-4469-b041-bd2f24c31ab3\cache-unittest.RuleGroup\index.bin]]
>  
> Could you please suggest if any setting is missing?
>  
> Thanks
> Satyajit
> 
> 



RE: Storage Exception using Ignite

2023-03-02 Thread satyajit.mandal.barclays.com via user
Hi team,

Any update on this? What might be causing this issue while storing
data in Ignite with persistence enabled?

class o.a.i.i.processors.cache.persistence.StorageException: Failed to 
initialize partition file

Thanks
Satyajit



From: Mandal, Satyajit: IT (PUN)
Sent: Thursday, March 2, 2023 7:00 AM
To: 'user@ignite.apache.org' 
Subject: Storage Exception using Ignite

Hi Team,

I am getting the storage exception below while loading a large table into
the cache. I am running Ignite on Windows.

17:29:29,731][SEVERE][exchange-worker-#62][] JVM will be halted immediately due 
to the failure: [failureCtx=FailureContext [type=CRITICAL_ERROR, err=class 
o.a.i.i.processors.cache.persistence.StorageException: Failed to initialize 
partition file: 
D:\Data\apache-ignite\work\db\node00-382c3dd6-8cca-4469-b041-bd2f24c31ab3\cache-unittest.RuleGroup\index.bin]]

Could you please suggest if any setting is missing?

Thanks
Satyajit



Storage Exception using Ignite

2023-03-01 Thread satyajit.mandal.barclays.com via user
Hi Team,

I am getting the storage exception below while loading a large table into
the cache. I am running Ignite on Windows.

17:29:29,731][SEVERE][exchange-worker-#62][] JVM will be halted immediately due 
to the failure: [failureCtx=FailureContext [type=CRITICAL_ERROR, err=class 
o.a.i.i.processors.cache.persistence.StorageException: Failed to initialize 
partition file: 
D:\Data\apache-ignite\work\db\node00-382c3dd6-8cca-4469-b041-bd2f24c31ab3\cache-unittest.RuleGroup\index.bin]]

Could you please suggest if any setting is missing?

Thanks
Satyajit



Re: ClassCastException while using ignite service proxy

2022-09-04 Thread Hitesh Nandwana
Unsubscribe

On Thu, Sep 1, 2022, 1:24 PM Stephen Darlington <
stephen.darling...@gridgain.com> wrote:

> If I move the deployment to the server side, the exact same code works in
> my environment. Same if I deploy in XML rather than code.
>
> Where are you getting the exception? If you’re seeing that on the client
> side, something very weird is happening. The client shouldn’t have any need
> for the implementation. What does your node filter do? Does it work if you
> disable it?
>
> On 1 Sep 2022, at 06:34, Surinder Mehra  wrote:
>
> Hi Stephen,
> I see you are deploying service from the same client node where proxy is
> obtained.
> In my setup, I have deployed service through ignite config on server start
> and try to create a client later and hence the proxy. It works when I try
> to obtain a proxy on the server node. But when I start a client node and
> try to obtain service instance through proxy, it throws this exception
> mentioned above
>
> On Wed, Aug 31, 2022 at 6:13 PM Stephen Darlington <
> stephen.darling...@gridgain.com> wrote:
>
>> You’ll need to share more of your code and configuration. As far as I can
>> tell, it works. This is my entire code/configuration, using Ignite 2.11.1
>> and Java 11.0.16.1+1.
>>
>> var igniteConfiguration = new IgniteConfiguration()
>> .setPeerClassLoadingEnabled(true)
>> .setClientMode(true);
>> try (var ignite = Ignition.start(igniteConfiguration)) {
>> var cfg = new ServiceConfiguration()
>> .setName("MyService")
>> .setTotalCount(1)
>> .setMaxPerNodeCount(1)
>> .setNodeFilter(x -> !x.isClient())
>> .setService(new MyServiceImpl());
>> ignite.services().deploy(cfg);
>>
>>   var s = ignite.services().serviceProxy("MyService", MyService.class, 
>> false);
>> s.sayHello();
>> }
>>
>> public interface MyService {
>> public void sayHello();
>> }
>>
>> public class MyServiceImpl implements MyService, Service {
>> @Override
>> public void cancel(ServiceContext serviceContext) {
>>
>> }
>>
>> @Override
>> public void init(ServiceContext serviceContext) throws Exception {
>>
>> }
>>
>> @Override
>> public void execute(ServiceContext serviceContext) throws Exception {
>>
>> }
>>
>> @Override
>> public void sayHello() {
>> System.out.println("Hello, world.");
>> }
>> }
>>
>> On 31 Aug 2022, at 04:17, Surinder Mehra  wrote:
>>
>> Please find below
>> ignite version: apache-ignite-2.11.1
>> VM information: OpenJDK Runtime Environment 11.0.15+9-LTS
>>
>> On Wed, Aug 31, 2022 at 12:12 AM Stephen Darlington <
>> stephen.darling...@gridgain.com> wrote:
>>
>>> Which version of Ignite? Which version of Java?
>>>
>>> On 30 Aug 2022, at 13:40, Surinder Mehra  wrote:
>>>
>>>
>>> 
>>> Hi Stephen ,
>>>  yes that is implemented correctly and it's running on server nodes as
>>> well. Somehow it doesn't work when accessed through proxy
>>>
>>> On Tue, Aug 30, 2022 at 5:45 PM Stephen Darlington <
>>> stephen.darling...@gridgain.com> wrote:
>>>
>>>> Your service needs to implement org.apache.ignite.services.Service.
>>>>
>>>> > On 30 Aug 2022, at 12:40, Surinder Mehra  wrote:
>>>> >
>>>> > Hi,
>>>> > can you help me find out the reason for this exception in thick
>>>> client while getting instance of ignite service:
>>>> >
>>>> > getIgnite()
>>>> > .services()
>>>> > .serviceProxy("sampleService", SampleService.class, false)
>>>> >
>>>> > java.lang.ClassCastException: class com.sun.proxy.$Proxy148 cannot be
>>>> cast to class com.test.ignite.stuff.services.SampleServiceImpl
>>>> (com.sun.proxy.$Proxy148 and
>>>> com.test.ignite.stuff.services.SampleServiceImpl are in unnamed module of
>>>> loader 'app')
>>>> >
>>>> > interface SampleService{
>>>> >
>>>> > }
>>>> >
>>>> > class SampleServiceImpl implements SampleService{
>>>> >
>>>> > }
>>>> >
>>>> > ignite config:
>>>> >
>>>> > 
>>>> >   
>>>> > 
>>>> >   
>>>> >   
>>>> >   
>>>> >   
>>>> > >>> class="com.test.ignite.stuff.services.SampleServiceImpl"/>
>>>> >   
>>>> >   
>>>> > >>> class="com.test.ignite.stuff.node.filter.ServerNodeFilter"/>
>>>> >   
>>>> > 
>>>> >   
>>>> > 
>>>> >
>>>> >
>>>> >
>>>>
>>>>
>>
>


Re: ClassCastException while using ignite service proxy

2022-09-01 Thread Stephen Darlington
If I move the deployment to the server side, the exact same code works in my 
environment. Same if I deploy in XML rather than code.

Where are you getting the exception? If you’re seeing that on the client side, 
something very weird is happening. The client shouldn’t have any need for the 
implementation. What does your node filter do? Does it work if you disable it?
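The exception itself is standard JDK dynamic-proxy behaviour: a java.lang.reflect.Proxy instance implements only the interfaces it was created with, so it can never be cast to the implementation class. A minimal, Ignite-free sketch of the same failure (the names SampleService/SampleServiceImpl mirror the original report; the handler body is illustrative only):

```java
import java.lang.reflect.Proxy;

public class ProxyCastDemo {

    interface SampleService {
        String hello();
    }

    static class SampleServiceImpl implements SampleService {
        public String hello() { return "hello from impl"; }
    }

    /** Builds a dynamic proxy for the interface, as serviceProxy() does internally. */
    static SampleService createProxy() {
        return (SampleService) Proxy.newProxyInstance(
                SampleService.class.getClassLoader(),
                new Class<?>[] { SampleService.class },
                (proxy, method, args) -> "hello from proxy");
    }

    /** Casting the proxy to the implementation class always fails. */
    static boolean castToImplThrows(SampleService svc) {
        try {
            SampleServiceImpl impl = (SampleServiceImpl) svc; // throws here
            return impl == null; // unreachable for a dynamic proxy
        } catch (ClassCastException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        SampleService svc = createProxy();
        System.out.println(svc.hello());           // interface calls work fine
        System.out.println(castToImplThrows(svc)); // prints true
    }
}
```

So on the client side the variable (and any injection points) must be declared as the service interface, never the implementation class.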

> On 1 Sep 2022, at 06:34, Surinder Mehra  wrote:
> 
> Hi Stephen,
> I see you are deploying service from the same client node where proxy is 
> obtained.
> In my setup, I have deployed service through ignite config on server start 
> and try to create a client later and hence the proxy. It works when I try to 
> obtain a proxy on the server node. But when I start a client node and try to 
> obtain service instance through proxy, it throws this exception mentioned 
> above
> 
> On Wed, Aug 31, 2022 at 6:13 PM Stephen Darlington 
> mailto:stephen.darling...@gridgain.com>> 
> wrote:
> You’ll need to share more of your code and configuration. As far as I can 
> tell, it works. This is my entire code/configuration, using Ignite 2.11.1 and 
> Java 11.0.16.1+1.
> 
> var igniteConfiguration = new IgniteConfiguration()
> .setPeerClassLoadingEnabled(true)
> .setClientMode(true);
> try (var ignite = Ignition.start(igniteConfiguration)) {
> var cfg = new ServiceConfiguration()
> .setName("MyService")
> .setTotalCount(1)
> .setMaxPerNodeCount(1)
> .setNodeFilter(x -> !x.isClient())
> .setService(new MyServiceImpl());
> ignite.services().deploy(cfg);
> var s = ignite.services().serviceProxy("MyService", 
> MyService.class, false);
> s.sayHello();
> }
> 
> public interface MyService {
> public void sayHello();
> }
> 
> public class MyServiceImpl implements MyService, Service {
> @Override
> public void cancel(ServiceContext serviceContext) {
> 
> }
> 
> @Override
> public void init(ServiceContext serviceContext) throws Exception {
> 
> }
> 
> @Override
> public void execute(ServiceContext serviceContext) throws Exception {
> 
> }
> 
> @Override
> public void sayHello() {
> System.out.println("Hello, world.");
> }
> }
> 
>> On 31 Aug 2022, at 04:17, Surinder Mehra > <mailto:redni...@gmail.com>> wrote:
>> 
>> Please find below
>> ignite version: apache-ignite-2.11.1
>> VM information: OpenJDK Runtime Environment 11.0.15+9-LTS
>> 
>> On Wed, Aug 31, 2022 at 12:12 AM Stephen Darlington 
>> mailto:stephen.darling...@gridgain.com>> 
>> wrote:
>> Which version of Ignite? Which version of Java?
>> 
>> On 30 Aug 2022, at 13:40, Surinder Mehra > <mailto:redni...@gmail.com>> wrote:
>>> 
>>> 
>>> Hi Stephen ,
>>>  yes that is implemented correctly and it's running on server nodes as 
>>> well. Somehow it doesn't work when accessed through proxy
>>> 
>>> On Tue, Aug 30, 2022 at 5:45 PM Stephen Darlington 
>>> mailto:stephen.darling...@gridgain.com>> 
>>> wrote:
>>> Your service needs to implement org.apache.ignite.services.Service.
>>> 
>>> > On 30 Aug 2022, at 12:40, Surinder Mehra >> > <mailto:redni...@gmail.com>> wrote:
>>> > 
>>> > Hi,
>>> > can you help me find out the reason for this exception in thick client 
>>> > while getting instance of ignite service: 
>>> > 
>>> > getIgnite()
>>> > .services()
>>> > .serviceProxy("sampleService", SampleService.class, false) 
>>> > 
>>> > java.lang.ClassCastException: class com.sun.proxy.$Proxy148 cannot be 
>>> > cast to class com.test.ignite.stuff.services.SampleServiceImpl 
>>> > (com.sun.proxy.$Proxy148 and 
>>> > com.test.ignite.stuff.services.SampleServiceImpl are in unnamed module of 
>>> > loader 'app')
>>> > 
>>> > interface SampleService{
>>> > 
>>> > }
>>> > 
>>> > class SampleServiceImpl implements SampleService{
>>> > 
>>> > }
>>> > 
>>> > ignite config:
>>> > 
>>> > 
>>> >   
>>> > 
>>> >   
>>> >   
>>> >   
>>> >   
>>> > >> > class="com.test.ignite.stuff.services.SampleServiceImpl"/>
>>> >   
>>> >   
>>> > >> > class="com.test.ignite.stuff.node.filter.ServerNodeFilter"/>
>>> >   
>>> > 
>>> >   
>>> > 
>>> > 
>>> > 
>>> > 
>>> 
> 



Re: ClassCastException while using ignite service proxy

2022-08-31 Thread Surinder Mehra
Hi Stephen,
I see you are deploying the service from the same client node where the
proxy is obtained.
In my setup, I deploy the service through the Ignite config on server start
and create a client (and hence the proxy) later. It works when I obtain a
proxy on the server node, but when I start a client node and try to obtain
the service instance through a proxy, it throws the exception mentioned
above.

On Wed, Aug 31, 2022 at 6:13 PM Stephen Darlington <
stephen.darling...@gridgain.com> wrote:

> You’ll need to share more of your code and configuration. As far as I can
> tell, it works. This is my entire code/configuration, using Ignite 2.11.1
> and Java 11.0.16.1+1.
>
> var igniteConfiguration = new IgniteConfiguration()
> .setPeerClassLoadingEnabled(true)
> .setClientMode(true);
> try (var ignite = Ignition.start(igniteConfiguration)) {
> var cfg = new ServiceConfiguration()
> .setName("MyService")
> .setTotalCount(1)
> .setMaxPerNodeCount(1)
> .setNodeFilter(x -> !x.isClient())
> .setService(new MyServiceImpl());
> ignite.services().deploy(cfg);
>
>   var s = ignite.services().serviceProxy("MyService", MyService.class, false);
> s.sayHello();
> }
>
> public interface MyService {
> public void sayHello();
> }
>
> public class MyServiceImpl implements MyService, Service {
> @Override
> public void cancel(ServiceContext serviceContext) {
>
> }
>
> @Override
> public void init(ServiceContext serviceContext) throws Exception {
>
> }
>
> @Override
> public void execute(ServiceContext serviceContext) throws Exception {
>
> }
>
> @Override
> public void sayHello() {
> System.out.println("Hello, world.");
> }
> }
>
> On 31 Aug 2022, at 04:17, Surinder Mehra  wrote:
>
> Please find below
> ignite version: apache-ignite-2.11.1
> VM information: OpenJDK Runtime Environment 11.0.15+9-LTS
>
> On Wed, Aug 31, 2022 at 12:12 AM Stephen Darlington <
> stephen.darling...@gridgain.com> wrote:
>
>> Which version of Ignite? Which version of Java?
>>
>> On 30 Aug 2022, at 13:40, Surinder Mehra  wrote:
>>
>>
>> 
>> Hi Stephen ,
>>  yes that is implemented correctly and it's running on server nodes as
>> well. Somehow it doesn't work when accessed through proxy
>>
>> On Tue, Aug 30, 2022 at 5:45 PM Stephen Darlington <
>> stephen.darling...@gridgain.com> wrote:
>>
>>> Your service needs to implement org.apache.ignite.services.Service.
>>>
>>> > On 30 Aug 2022, at 12:40, Surinder Mehra  wrote:
>>> >
>>> > Hi,
>>> > can you help me find out the reason for this exception in thick client
>>> while getting instance of ignite service:
>>> >
>>> > getIgnite()
>>> > .services()
>>> > .serviceProxy("sampleService", SampleService.class, false)
>>> >
>>> > java.lang.ClassCastException: class com.sun.proxy.$Proxy148 cannot be
>>> cast to class com.test.ignite.stuff.services.SampleServiceImpl
>>> (com.sun.proxy.$Proxy148 and
>>> com.test.ignite.stuff.services.SampleServiceImpl are in unnamed module of
>>> loader 'app')
>>> >
>>> > interface SampleService{
>>> >
>>> > }
>>> >
>>> > class SampleServiceImpl implements SampleService{
>>> >
>>> > }
>>> >
>>> > ignite config:
>>> >
>>> > 
>>> >   
>>> > 
>>> >   
>>> >   
>>> >   
>>> >   
>>> > >> class="com.test.ignite.stuff.services.SampleServiceImpl"/>
>>> >   
>>> >   
>>> > >> class="com.test.ignite.stuff.node.filter.ServerNodeFilter"/>
>>> >   
>>> > 
>>> >   
>>> > 
>>> >
>>> >
>>> >
>>>
>>>
>


Re: ClassCastException while using ignite service proxy

2022-08-31 Thread Stephen Darlington
You’ll need to share more of your code and configuration. As far as I can tell, 
it works. This is my entire code/configuration, using Ignite 2.11.1 and Java 
11.0.16.1+1.

var igniteConfiguration = new IgniteConfiguration()
.setPeerClassLoadingEnabled(true)
.setClientMode(true);
try (var ignite = Ignition.start(igniteConfiguration)) {
var cfg = new ServiceConfiguration()
.setName("MyService")
.setTotalCount(1)
.setMaxPerNodeCount(1)
.setNodeFilter(x -> !x.isClient())
.setService(new MyServiceImpl());
ignite.services().deploy(cfg);
var s = ignite.services().serviceProxy("MyService", 
MyService.class, false);
s.sayHello();
}

public interface MyService {
public void sayHello();
}

public class MyServiceImpl implements MyService, Service {
@Override
public void cancel(ServiceContext serviceContext) {

}

@Override
public void init(ServiceContext serviceContext) throws Exception {

}

@Override
public void execute(ServiceContext serviceContext) throws Exception {

}

@Override
public void sayHello() {
System.out.println("Hello, world.");
}
}

> On 31 Aug 2022, at 04:17, Surinder Mehra  wrote:
> 
> Please find below
> ignite version: apache-ignite-2.11.1
> VM information: OpenJDK Runtime Environment 11.0.15+9-LTS
> 
> On Wed, Aug 31, 2022 at 12:12 AM Stephen Darlington 
> mailto:stephen.darling...@gridgain.com>> 
> wrote:
> Which version of Ignite? Which version of Java?
> 
> On 30 Aug 2022, at 13:40, Surinder Mehra  <mailto:redni...@gmail.com>> wrote:
>> 
>> 
>> Hi Stephen ,
>>  yes that is implemented correctly and it's running on server nodes as well. 
>> Somehow it doesn't work when accessed through proxy
>> 
>> On Tue, Aug 30, 2022 at 5:45 PM Stephen Darlington 
>> mailto:stephen.darling...@gridgain.com>> 
>> wrote:
>> Your service needs to implement org.apache.ignite.services.Service.
>> 
>> > On 30 Aug 2022, at 12:40, Surinder Mehra > > <mailto:redni...@gmail.com>> wrote:
>> > 
>> > Hi,
>> > can you help me find out the reason for this exception in thick client 
>> > while getting instance of ignite service: 
>> > 
>> > getIgnite()
>> > .services()
>> > .serviceProxy("sampleService", SampleService.class, false) 
>> > 
>> > java.lang.ClassCastException: class com.sun.proxy.$Proxy148 cannot be cast 
>> > to class com.test.ignite.stuff.services.SampleServiceImpl 
>> > (com.sun.proxy.$Proxy148 and 
>> > com.test.ignite.stuff.services.SampleServiceImpl are in unnamed module of 
>> > loader 'app')
>> > 
>> > interface SampleService{
>> > 
>> > }
>> > 
>> > class SampleServiceImpl implements SampleService{
>> > 
>> > }
>> > 
>> > ignite config:
>> > 
>> > 
>> >   
>> > 
>> >   
>> >   
>> >   
>> >   
>> > > > class="com.test.ignite.stuff.services.SampleServiceImpl"/>
>> >   
>> >   
>> > > > class="com.test.ignite.stuff.node.filter.ServerNodeFilter"/>
>> >   
>> > 
>> >   
>> > 
>> > 
>> > 
>> > 
>> 



Re: ClassCastException while using ignite service proxy

2022-08-30 Thread Surinder Mehra
Please find below
ignite version: apache-ignite-2.11.1
VM information: OpenJDK Runtime Environment 11.0.15+9-LTS

On Wed, Aug 31, 2022 at 12:12 AM Stephen Darlington <
stephen.darling...@gridgain.com> wrote:

> Which version of Ignite? Which version of Java?
>
> On 30 Aug 2022, at 13:40, Surinder Mehra  wrote:
>
>
> 
> Hi Stephen ,
>  yes that is implemented correctly and it's running on server nodes as
> well. Somehow it doesn't work when accessed through proxy
>
> On Tue, Aug 30, 2022 at 5:45 PM Stephen Darlington <
> stephen.darling...@gridgain.com> wrote:
>
>> Your service needs to implement org.apache.ignite.services.Service.
>>
>> > On 30 Aug 2022, at 12:40, Surinder Mehra  wrote:
>> >
>> > Hi,
>> > can you help me find out the reason for this exception in thick client
>> while getting instance of ignite service:
>> >
>> > getIgnite()
>> > .services()
>> > .serviceProxy("sampleService", SampleService.class, false)
>> >
>> > java.lang.ClassCastException: class com.sun.proxy.$Proxy148 cannot be
>> cast to class com.test.ignite.stuff.services.SampleServiceImpl
>> (com.sun.proxy.$Proxy148 and
>> com.test.ignite.stuff.services.SampleServiceImpl are in unnamed module of
>> loader 'app')
>> >
>> > interface SampleService{
>> >
>> > }
>> >
>> > class SampleServiceImpl implements SampleService{
>> >
>> > }
>> >
>> > ignite config:
>> >
>> > 
>> >   
>> > 
>> >   
>> >   
>> >   
>> >   
>> > > class="com.test.ignite.stuff.services.SampleServiceImpl"/>
>> >   
>> >   
>> > > class="com.test.ignite.stuff.node.filter.ServerNodeFilter"/>
>> >   
>> > 
>> >   
>> > 
>> >
>> >
>> >
>>
>>


Re: ClassCastException while using ignite service proxy

2022-08-30 Thread Stephen Darlington
Which version of Ignite? Which version of Java?

On 30 Aug 2022, at 13:40, Surinder Mehra  wrote:
> 
> 
> Hi Stephen ,
>  yes that is implemented correctly and it's running on server nodes as well. 
> Somehow it doesn't work when accessed through proxy
> 
>> On Tue, Aug 30, 2022 at 5:45 PM Stephen Darlington 
>>  wrote:
>> Your service needs to implement org.apache.ignite.services.Service.
>> 
>> > On 30 Aug 2022, at 12:40, Surinder Mehra  wrote:
>> > 
>> > Hi,
>> > can you help me find out the reason for this exception in thick client 
>> > while getting instance of ignite service: 
>> > 
>> > getIgnite()
>> > .services()
>> > .serviceProxy("sampleService", SampleService.class, false) 
>> > 
>> > java.lang.ClassCastException: class com.sun.proxy.$Proxy148 cannot be cast 
>> > to class com.test.ignite.stuff.services.SampleServiceImpl 
>> > (com.sun.proxy.$Proxy148 and 
>> > com.test.ignite.stuff.services.SampleServiceImpl are in unnamed module of 
>> > loader 'app')
>> > 
>> > interface SampleService{
>> > 
>> > }
>> > 
>> > class SampleServiceImpl implements SampleService{
>> > 
>> > }
>> > 
>> > ignite config:
>> > 
>> > 
>> >   
>> > 
>> >   
>> >   
>> >   
>> >   
>> > > > class="com.test.ignite.stuff.services.SampleServiceImpl"/>
>> >   
>> >   
>> > > > class="com.test.ignite.stuff.node.filter.ServerNodeFilter"/>
>> >   
>> > 
>> >   
>> > 
>> > 
>> > 
>> > 
>> 


Re: ClassCastException while using ignite service proxy

2022-08-30 Thread Surinder Mehra
Hi Stephen ,
 yes that is implemented correctly and it's running on server nodes as
well. Somehow it doesn't work when accessed through proxy

On Tue, Aug 30, 2022 at 5:45 PM Stephen Darlington <
stephen.darling...@gridgain.com> wrote:

> Your service needs to implement org.apache.ignite.services.Service.
>
> > On 30 Aug 2022, at 12:40, Surinder Mehra  wrote:
> >
> > Hi,
> > can you help me find out the reason for this exception in thick client
> while getting instance of ignite service:
> >
> > getIgnite()
> > .services()
> > .serviceProxy("sampleService", SampleService.class, false)
> >
> > java.lang.ClassCastException: class com.sun.proxy.$Proxy148 cannot be
> cast to class com.test.ignite.stuff.services.SampleServiceImpl
> (com.sun.proxy.$Proxy148 and
> com.test.ignite.stuff.services.SampleServiceImpl are in unnamed module of
> loader 'app')
> >
> > interface SampleService{
> >
> > }
> >
> > class SampleServiceImpl implements SampleService{
> >
> > }
> >
> > ignite config:
> >
> > 
> >   
> > 
> >   
> >   
> >   
> >   
> >  class="com.test.ignite.stuff.services.SampleServiceImpl"/>
> >   
> >   
> >  class="com.test.ignite.stuff.node.filter.ServerNodeFilter"/>
> >   
> > 
> >   
> > 
> >
> >
> >
>
>


Re: ClassCastException while using ignite service proxy

2022-08-30 Thread Stephen Darlington
Your service needs to implement org.apache.ignite.services.Service.

> On 30 Aug 2022, at 12:40, Surinder Mehra  wrote:
> 
> Hi,
> can you help me find out the reason for this exception in thick client while 
> getting instance of ignite service: 
> 
> getIgnite()
> .services()
> .serviceProxy("sampleService", SampleService.class, false) 
> 
> java.lang.ClassCastException: class com.sun.proxy.$Proxy148 cannot be cast to 
> class com.test.ignite.stuff.services.SampleServiceImpl 
> (com.sun.proxy.$Proxy148 and com.test.ignite.stuff.services.SampleServiceImpl 
> are in unnamed module of loader 'app')
> 
> interface SampleService{
> 
> }
> 
> class SampleServiceImpl implements SampleService{
> 
> }
> 
> ignite config:
> 
> 
>   
> 
>   
>   
>   
>   
> 
>   
>   
> 
>   
> 
>   
> 
> 
> 
> 



ClassCastException while using ignite service proxy

2022-08-30 Thread Surinder Mehra
Hi,
can you help me find out the reason for this exception in thick client
while getting instance of ignite service:

getIgnite()
.services()
.serviceProxy("sampleService", SampleService.class, false)

java.lang.ClassCastException: class com.sun.proxy.$Proxy148 cannot be cast
to class com.test.ignite.stuff.services.SampleServiceImpl
(com.sun.proxy.$Proxy148 and
com.test.ignite.stuff.services.SampleServiceImpl are in unnamed module of
loader 'app')

interface SampleService{

}

class SampleServiceImpl implements SampleService{

}

ignite config:

<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <property name="serviceConfiguration">
    <list>
      <bean class="org.apache.ignite.services.ServiceConfiguration">
        <property name="name" value="sampleService"/>
        <property name="maxPerNodeCount" value="1"/>
        <property name="service">
          <bean class="com.test.ignite.stuff.services.SampleServiceImpl"/>
        </property>
        <property name="nodeFilter">
          <bean class="com.test.ignite.stuff.node.filter.ServerNodeFilter"/>
        </property>
      </bean>
    </list>
  </property>
</bean>

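For readers hitting the same trap: `serviceProxy` returns a JDK dynamic proxy, which implements the service interface but is unrelated to the implementation class. The interface and class names below are the ones from this thread; the rest is a stdlib-only sketch of why the cast fails:

```java
import java.lang.reflect.Proxy;

interface SampleService { String ping(); }

class SampleServiceImpl implements SampleService {
    public String ping() { return "pong"; }
}

public class ProxyCastDemo {
    // Builds a JDK dynamic proxy, the same mechanism serviceProxy() uses:
    // the proxy implements the interface but is NOT a SampleServiceImpl.
    static Object makeProxy() {
        return Proxy.newProxyInstance(
                SampleService.class.getClassLoader(),
                new Class<?>[] { SampleService.class },
                (proxy, method, args) -> "pong");
    }

    public static void main(String[] args) {
        Object proxy = makeProxy();
        SampleService svc = (SampleService) proxy; // fine: cast to the interface
        System.out.println(svc.ping());
        // ((SampleServiceImpl) proxy) would throw the ClassCastException seen
        // in this thread, because the proxy's class is com.sun.proxy.$ProxyN.
        System.out.println(proxy instanceof SampleServiceImpl);
    }
}
```

So the variable receiving the proxy must be typed as the interface (which for Ignite services must also extend `org.apache.ignite.services.Service`), never as the implementation class.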


Re: Issue with using Ignite with Spring Data

2020-08-21 Thread Denis Magda
>
> However as I read them again, I realised that it was anyway necessary to
> load the cache before executing the SELECT sql queries on top of a cache,
> now, would this hold true in the case of Spring Data as well ? (Very likely
> yes, but want to get the confirmation) If so, then are we expected to
> preload the cache on start up and only after that will the read-through
> property kick in to add the entries into the cache for the ones which are
> missing ?


You got it right. If your records are stored in an external database, then
you have to pre-load them in Ignite first before using Ignite SQL. The
latter reads records from disk only if those are stored in Ignite
persistence. Ignite SQL can't query external databases.

Ignite can pre-load a missing record from an external database only if the
record is requested via key-value APIs. With those APIs, it's all trivial -
if Ignite doesn't find the record in its memory storage then it will create
an external-database-specific query to load the record from the external
db. SQL is much more complicated.
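A stdlib-only illustration of that distinction (a hypothetical class, not an Ignite API): read-through fills the in-memory map one requested key at a time, so a scan over the in-memory data — which is all Ignite SQL can see — misses rows that were never requested:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Toy read-through cache; the Function stands in for an external-DB loader
// such as CacheJdbcPojoStore.
class ReadThroughCache<K, V> {
    private final Map<K, V> memory = new HashMap<>();
    private final Function<K, V> store;

    ReadThroughCache(Function<K, V> store) { this.store = store; }

    V get(K key) {
        // Key-value miss falls through to the backing store.
        return memory.computeIfAbsent(key, store);
    }

    // A "SQL query" here could only scan what is already in memory.
    int inMemoryCount() { return memory.size(); }
}
```

With a backing store of, say, three rows, after a single `get(1)` only one entry is in memory — a query over the cache would see one row, not three.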

-
Denis


On Fri, Aug 21, 2020 at 1:51 PM Srikanta Patanjali 
wrote:

> Hi Denis,
>
> Thanks for taking time to reply and sharing those links. I can confirm to
> you that I've read through them before and have been following them as
> well.
>
> However as I read them again, I realised that it was anyway necessary to
> load the cache before executing the SELECT sql queries on top of a cache,
> now, would this hold true in the case of Spring Data as well ? (Very likely
> yes, but want to get the confirmation) If so, then are we expected to
> preload the cache on start up and only after that will the read-through
> property kick in to add the entries into the cache for the ones which are
> missing ?
>
> If my above understanding is correct then that explains why I was getting
> null results from the queries executed once the Spring boot is instantiated
> as the cache load on startup was not complete yet.
>
>
> Regards,
> Srikanta
>
> On Fri, Aug 21, 2020 at 6:47 PM Denis Magda  wrote:
>
>> Hi Srikanta,
>>
>> You forgot to share the configuration. Anyway, I think it's clear what
>> you are looking for.
>>
>> Check this example showing how to configure CacheJdbcPojoStoreFactory
>> programmatically (click on the "Java" tab, by default the example shows the
>> XML version):
>>
>> https://www.gridgain.com/docs/latest/developers-guide/persistence/external-storage#cachejdbcpojostore
>>
>> Also, if you need to import an existing schema of a relational database
>> and turn it into the CacheJdbcPojoStore config, then this feature of Web
>> Console can be helpful:
>>
>> https://www.gridgain.com/docs/web-console/latest/automatic-rdbms-integration
>>
>> Finally, keep an eye on this Spring Data + Ignite tutorial that covers
>> other areas of the integration. You might have other questions and issue
>> going forward and the tutorial can help to address them quickly:
>> https://www.gridgain.com/docs/tutorials/spring/spring-ignite-tutorial
>>
>> -
>> Denis
>>
>>
>> On Fri, Aug 21, 2020 at 9:09 AM Srikanta Patanjali 
>> wrote:
>>
>>> I'm trying to integrate a Spring data project (without JPA) with Ignite
>>> and struggling to understand some basic traits. Would be very helpful if
>>> you can share some insights on the issue I'm facing.
>>>
>>> Currently the cache has been defined as below with the client node, this
>>> config is not present in the server node, gets created when the client node
>>> joins the cluster. The repositories are detected during the instantiation
>>> of the Spring Boot application.
>>>
>>> All the documentation including the official example repo of the Apache
>>> Ignite does not pass in the data source but instead cacheConfig is set with
>>> the IndexedTypes.
>>>
>>> Question: Where should I pass on the DataSource object ? Should I create
>>> a CacheJdbcPojoStoreFactory and pass on the dataSource ?
>>>
>>>
>>> Thanks,
>>> Srikanta
>>>
>>


Re: Issue with using Ignite with Spring Data

2020-08-21 Thread Srikanta Patanjali
Hi Denis,

Thanks for taking the time to reply and for sharing those links. I can
confirm that I've read through them before and have been following them as
well.

However as I read them again, I realised that it was anyway necessary to
load the cache before executing the SELECT sql queries on top of a cache,
now, would this hold true in the case of Spring Data as well ? (Very likely
yes, but want to get the confirmation) If so, then are we expected to
preload the cache on start up and only after that will the read-through
property kick in to add the entries into the cache for the ones which are
missing ?

If my above understanding is correct then that explains why I was getting
null results from the queries executed once the Spring boot is instantiated
as the cache load on startup was not complete yet.


Regards,
Srikanta

On Fri, Aug 21, 2020 at 6:47 PM Denis Magda  wrote:

> Hi Srikanta,
>
> You forgot to share the configuration. Anyway, I think it's clear what you
> are looking for.
>
> Check this example showing how to configure CacheJdbcPojoStoreFactory
> programmatically (click on the "Java" tab, by default the example shows the
> XML version):
>
> https://www.gridgain.com/docs/latest/developers-guide/persistence/external-storage#cachejdbcpojostore
>
> Also, if you need to import an existing schema of a relational database
> and turn it into the CacheJdbcPojoStore config, then this feature of Web
> Console can be helpful:
>
> https://www.gridgain.com/docs/web-console/latest/automatic-rdbms-integration
>
> Finally, keep an eye on this Spring Data + Ignite tutorial that covers
> other areas of the integration. You might have other questions and issue
> going forward and the tutorial can help to address them quickly:
> https://www.gridgain.com/docs/tutorials/spring/spring-ignite-tutorial
>
> -
> Denis
>
>
> On Fri, Aug 21, 2020 at 9:09 AM Srikanta Patanjali 
> wrote:
>
>> I'm trying to integrate a Spring data project (without JPA) with Ignite
>> and struggling to understand some basic traits. Would be very helpful if
>> you can share some insights on the issue I'm facing.
>>
>> Currently the cache has been defined as below with the client node, this
>> config is not present in the server node, gets created when the client node
>> joins the cluster. The repositories are detected during the instantiation
>> of the Spring Boot application.
>>
>> All the documentation including the official example repo of the Apache
>> Ignite does not pass in the data source but instead cacheConfig is set with
>> the IndexedTypes.
>>
>> Question: Where should I pass on the DataSource object ? Should I create
>> a CacheJdbcPojoStoreFactory and pass on the dataSource ?
>>
>>
>> Thanks,
>> Srikanta
>>
>


Re: Issue with using Ignite with Spring Data

2020-08-21 Thread Denis Magda
Hi Srikanta,

You forgot to share the configuration. Anyway, I think it's clear what you
are looking for.

Check this example showing how to configure CacheJdbcPojoStoreFactory
programmatically (click on the "Java" tab, by default the example shows the
XML version):
https://www.gridgain.com/docs/latest/developers-guide/persistence/external-storage#cachejdbcpojostore

Also, if you need to import an existing schema of a relational database and
turn it into the CacheJdbcPojoStore config, then this feature of Web
Console can be helpful:
https://www.gridgain.com/docs/web-console/latest/automatic-rdbms-integration

Finally, keep an eye on this Spring Data + Ignite tutorial that covers
other areas of the integration. You might have other questions and issues
going forward, and the tutorial can help to address them quickly:
https://www.gridgain.com/docs/tutorials/spring/spring-ignite-tutorial

-
Denis


On Fri, Aug 21, 2020 at 9:09 AM Srikanta Patanjali 
wrote:

> I'm trying to integrate a Spring data project (without JPA) with Ignite
> and struggling to understand some basic traits. Would be very helpful if
> you can share some insights on the issue I'm facing.
>
> Currently the cache has been defined as below with the client node, this
> config is not present in the server node, gets created when the client node
> joins the cluster. The repositories are detected during the instantiation
> of the Spring Boot application.
>
> All the documentation including the official example repo of the Apache
> Ignite does not pass in the data source but instead cacheConfig is set with
> the IndexedTypes.
>
> Question: Where should I pass on the DataSource object ? Should I create a
> CacheJdbcPojoStoreFactory and pass on the dataSource ?
>
>
> Thanks,
> Srikanta
>


Issue with using Ignite with Spring Data

2020-08-21 Thread Srikanta Patanjali
I'm trying to integrate a Spring data project (without JPA) with Ignite and
struggling to understand some basic traits. Would be very helpful if you
can share some insights on the issue I'm facing.

Currently the cache is defined as below on the client node. This config is
not present on the server node; it gets created when the client node joins
the cluster. The repositories are detected during the instantiation of the
Spring Boot application.

All the documentation including the official example repo of the Apache
Ignite does not pass in the data source but instead cacheConfig is set with
the IndexedTypes.

Question: Where should I pass on the DataSource object ? Should I create a
CacheJdbcPojoStoreFactory and pass on the dataSource ?


Thanks,
Srikanta


Re: Can you create a cache using Ignite Visor CLI?

2020-05-20 Thread akorensh
you can use the sqlline: https://apacheignite-sql.readme.io/docs/sqlline

run some sql: 
  CREATE TABLE Person(ID INTEGER PRIMARY KEY, NAME VARCHAR(100));
  INSERT INTO Person(ID, NAME) VALUES (1, 'Ed'), (2, 'Ann'), (3, 'Emma');

 You will see a cache created: SQL_PUBLIC_PERSON




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Can you create a cache using Ignite Visor CLI?

2020-05-20 Thread Andrew Munn
It looks like control.sh won't create caches either.  How can they be
created by a utility in the shell?

On Tue, May 19, 2020 at 3:58 PM akorensh  wrote:

> yes. That is correct:
> https://apacheignite-tools.readme.io/docs/command-line-interface
>
> use the "help cache" command inside the visor to see all of the
> capabilities.
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Can you create a cache using Ignite Visor CLI?

2020-05-19 Thread akorensh
yes. That is correct:
https://apacheignite-tools.readme.io/docs/command-line-interface

use the "help cache" command inside the visor to see all of the
capabilities.






Can you create a cache using Ignite Visor CLI?

2020-05-19 Thread Andrew Munn
It looks like the Visor CLI can only operate on existing caches, not create
new ones.  Is that correct?


Re: Scheduling Cache Refresh using Ignite

2020-02-14 Thread Andrei Aleksandrov

Hi Nithin,

You see this message because your client lost its connection to the
server. It tried to get an acknowledgement for some operation (I guess a
cache operation).


You can see that IgniteClientDisconnectedException was thrown. In this
case, you can get the reconnect future and wait for the client to
reconnect:


https://apacheignite.readme.io/docs/clients-vs-servers#reconnecting-a-client

Please add a try/catch block around your cache operation with the
following logic:

catch (IgniteClientDisconnectedException e) {
    e.reconnectFuture().get(); // Wait for reconnect.
    // Can proceed and use the same IgniteCompute instance.
}



I can't say why your client was disconnected; most likely it's due to some
network issue. Try taking a look at the server logs for *NODE_LEFT* or
*NODE_FAILED* messages.


BR,
Andrei

2/14/2020 8:08 AM, nithin91 wrote:

Hi

I am unable to attach any file, which is why I pasted the code and bean
file in my previous messages.

Following is the error I get.

Feb 13, 2020 11:34:40 AM org.apache.ignite.logger.java.JavaLogger error
SEVERE: Failed to send message: null
java.io.IOException: Failed to get acknowledge for message:
TcpDiscoveryClientMetricsUpdateMessage [super=TcpDiscoveryAbstractMessage

verifierNodeId=null, topVer=0, pendingIdx=0, failedNodes=null,
isClient=true]]
 at
org.apache.ignite.spi.discovery.tcp.ClientImpl$SocketWriter.body(ClientImpl.java:1398)
 at
org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)

Feb 13, 2020 11:34:47 AM org.apache.ignite.logger.java.JavaLogger error
SEVERE: Failed to reconnect to cluster (consider increasing 'networkTimeout'
configuration property) [networkTimeout=5000]
[11:34:52] Ignite node stopped OK [uptime=00:00:24.772]
Exception in thread "main" javax.cache.CacheException: class
org.apache.ignite.IgniteClientDisconnectedException: Failed to execute
dynamic cache change request, client node disconnected.
 at
org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1337)
 at
org.apache.ignite.internal.IgniteKernal.getOrCreateCache0(IgniteKernal.java:3023)
 at
org.apache.ignite.internal.IgniteKernal.getOrCreateCache(IgniteKernal.java:2992)
 at Load.OrdersLoad.main(OrdersLoad.java:82)
Caused by: class org.apache.ignite.IgniteClientDisconnectedException: Failed
to execute dynamic cache change request, client node disconnected.
 at
org.apache.ignite.internal.util.IgniteUtils$15.apply(IgniteUtils.java:952)
 at
org.apache.ignite.internal.util.IgniteUtils$15.apply(IgniteUtils.java:948)
 ... 4 more
Caused by: class
org.apache.ignite.internal.IgniteClientDisconnectedCheckedException: Failed
to execute dynamic cache change request, client node disconnected.
 at
org.apache.ignite.internal.processors.cache.GridCacheProcessor.onDisconnected(GridCacheProcessor.java:1180)
 at
org.apache.ignite.internal.IgniteKernal.onDisconnected(IgniteKernal.java:3949)
 at
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$4.onDiscovery0(GridDiscoveryManager.java:821)
 at
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$4.lambda$onDiscovery$0(GridDiscoveryManager.java:604)
 at
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$DiscoveryMessageNotifierWorker.body0(GridDiscoveryManager.java:2667)
 at
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$DiscoveryMessageNotifierWorker.body(GridDiscoveryManager.java:2705)
 at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
 at java.lang.Thread.run(Thread.java:748)






Re: Scheduling Cache Refresh using Ignite

2020-02-13 Thread Andrei Aleksandrov

Hi,

Can you please attach the full logs with the mentioned exception? BTW, I
don't see any attachments in the previous message (probably the user list
doesn't allow them).


BR,
Andrei

2/13/2020 3:44 PM, nithin91 wrote:

Attached the bean file used





Re: Scheduling Cache Refresh using Ignite

2020-02-13 Thread nithin91
Following is the java code that loads the cache.

package Load;

import java.sql.Types;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStore;
import org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory;
import org.apache.ignite.cache.store.jdbc.JdbcType;
import org.apache.ignite.cache.store.jdbc.JdbcTypeField;
import org.apache.ignite.cache.store.jdbc.dialect.OracleDialect;
import org.apache.ignite.configuration.CacheConfiguration;
import ignite.example.IgniteUnixImplementation.OrderDetails;
import ignite.example.IgniteUnixImplementation.OrderKey;

public class OrdersLoad {

private static final class CacheJdbcPojoStoreExampleFactory extends
CacheJdbcPojoStoreFactory<OrderKey, OrderDetails> {
/**
 * 
 */
private static final long serialVersionUID = 1L;

/** {@inheritDoc} */
@Override public CacheJdbcPojoStore<OrderKey, OrderDetails> create()
{

setDataSourceBean("dataSource");
return super.create();
}
}


private static CacheConfiguration<OrderKey, OrderDetails>
cacheConfiguration() {
CacheConfiguration<OrderKey, OrderDetails> cfg = new
CacheConfiguration<>("OrdersCache");

CacheJdbcPojoStoreExampleFactory storefactory =new
CacheJdbcPojoStoreExampleFactory();

storefactory.setDialect(new OracleDialect());

storefactory.setDataSourceBean("dataSource");

JdbcType jdbcType = new JdbcType();

jdbcType.setCacheName("OrdersCache");
jdbcType.setDatabaseSchema("PDS_CACHE");
jdbcType.setDatabaseTable("ORDERS2");

jdbcType.setKeyType("ignite.example.IgniteUnixImplementation.OrderKey");
jdbcType.setKeyFields(new JdbcTypeField(Types.INTEGER, "ORDERID",
Long.class, "OrderID"),
new JdbcTypeField(Types.INTEGER, "CITYID", Long.class, "CityID")


);

   
jdbcType.setValueType("ignite.example.IgniteUnixImplementation.OrderDetails");
jdbcType.setValueFields(
new JdbcTypeField(Types.VARCHAR, "PRODUCTNAME", String.class,
"Productname"),
new JdbcTypeField(Types.VARCHAR, "CUSTOMERNAME", String.class,
"CustomerName"),
new JdbcTypeField(Types.VARCHAR, "STORENAME", String.class,
"StoreName")
);

storefactory.setTypes(jdbcType);

cfg.setCacheStoreFactory(storefactory);

cfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);

cfg.setReadThrough(true);
cfg.setWriteThrough(true);
cfg.setSqlSchema("PIE");

return cfg;
}

public static void main(String[] args) throws Exception {
try (Ignite ignite = Ignition.start("Ignite-Client.xml")) {

System.out.println(">>> Loading cache OrderDetails");

IgniteCache<OrderKey, OrderDetails> cache =
ignite.getOrCreateCache(cacheConfiguration());

cache.clear();

ignite.cache("OrdersCache").loadCache(null);

System.out.println(">>> Loaded cache: OrdersCache Size=" + cache.size());

}
}
}







Re: Scheduling Cache Refresh using Ignite

2020-02-13 Thread nithin91
Attached the bean file used





Re: Scheduling Cache Refresh using Ignite

2020-02-13 Thread nithin91
Thanks aealexsandrov. This information is very useful.

Also, I have one more query.

Currently, as part of a POC, we installed Ignite on Unix and are trying to
load data from an Oracle DB into an Ignite cache using the Cache JDBC POJO
store. The bean file used to start the Ignite node on Unix is custom
configured (attached) and contains both the cache configuration and the
Ignite configuration.

Once the node is running, we try the following:

1. Connect to the Ignite node running on Unix from Eclipse by creating a
replica of the attached bean file on the local system, adding an extra
property with clientMode = true, and then loading the caches defined in
the bean file deployed on Unix from the local system using Java:

ignite.cache("CacheName").loadCache(null);

*We are able to do this successfully.*

2. Connect to the Ignite node running on Unix by creating a replica of the
attached bean file on the local system, adding an extra property with
clientMode = true, and then trying to create, configure, and finally load
a new cache using the attached Java code.

*With this approach we get an error saying a dynamic cache change is not
allowed. We do not get this error when the Ignite server node and client
node both run on the local machine; we only get it when the server node
runs on Unix and we connect to it from the local system.*

It would be really helpful if you can help me resolve this issue.

If this is not the right approach, is configuring all the caches in the
bean file the only available option? If so, what should be the approach
for building additional caches in Ignite and loading them via the Cache
JDBC POJO store while the node is running?







Re: Scheduling Cache Refresh using Ignite

2020-02-13 Thread Andrei Aleksandrov

Hi,

Please read my comments:

1) Ignite generally doesn't support changing the cache configuration
without re-creating the cache. But for SQL caches that were created via
QueryEntity or CREATE TABLE, you can add and remove columns using ALTER
TABLE commands:


https://apacheignite-sql.readme.io/docs/alter-table
https://apacheignite.readme.io/docs/cache-queries#query-configuration-using-queryentity
https://apacheignite-sql.readme.io/docs/create-table
2) First of all, you can use the following options:

https://apacheignite.readme.io/docs/3rd-party-store#section-read-through-and-write-through

Read-through can load requested keys from the DB.
Write-through will propagate all updates to the DB.

If you require cache invalidation or a periodic refresh, you can create a
cron job for it.


3) I guess loadCache is the only way to do it. It can filter out values
that already exist in the cache:


https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteCache.html#loadCache-org.apache.ignite.lang.IgniteBiPredicate-java.lang.Object...-

4) You can use various integrations that support distributed streaming
into Ignite, such as Spark or Kafka:


https://apacheignite-mix.readme.io/docs/getting-started
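A minimal sketch of the cron-job idea from point 2, using the plain JDK scheduler. The task body is a placeholder — in a real setup it would call ignite.cache("OrdersCache").loadCache(null) from a client node:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class CacheRefreshScheduler {
    // Runs a placeholder "refresh" task at a fixed period and reports how
    // many times it fired within the given window.
    static int runFor(long windowMillis, long periodMillis) throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        AtomicInteger refreshes = new AtomicInteger();
        // Placeholder for ignite.cache("OrdersCache").loadCache(null).
        scheduler.scheduleAtFixedRate(refreshes::incrementAndGet,
                0, periodMillis, TimeUnit.MILLISECONDS);
        Thread.sleep(windowMillis);
        scheduler.shutdownNow();
        return refreshes.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("refresh fired " + runFor(250, 50) + " times");
    }
}
```

In production a real scheduler (system cron, Quartz, or Spring `@Scheduled`) would normally replace this hand-rolled loop.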

BR,
Andrei
2/12/2020 9:11 PM, nithin91 wrote:

Hi

We are doing a POC exploring Ignite's in-memory capabilities and building
a REST API on top of it using Node Express.


Currently, as part of the POC, we installed Ignite on Unix and are trying
to load data from an Oracle DB into an Ignite cache using the Cache JDBC
POJO store.

Can someone tell me whether the following scenarios can be handled by
Ignite? I couldn't find this in the official documentation.

1. If we want to add/drop/modify a column of the cache, can we update the
bean file directly while the node is running, or do we need to stop the
node and restart it? It would be really helpful if you can share sample
code or a documentation link.

2. How do we refresh the Ignite cache automatically, or schedule the cache
refresh? It would be really helpful if you can share sample code or a
documentation link.

3. Is incremental refresh allowed? It would be really helpful if you can
share sample code or a documentation link.

4. Is there any other way to load the caches faster than the Cache JDBC
POJO store? It would be really helpful if you can share sample code or a
documentation link.





Scheduling Cache Refresh using Ignite

2020-02-12 Thread nithin91
Hi 

We are doing a POC exploring Ignite's in-memory capabilities and building
a REST API on top of it using Node Express.


Currently, as part of the POC, we installed Ignite on Unix and are trying
to load data from an Oracle DB into an Ignite cache using the Cache JDBC
POJO store.

Can someone tell me whether the following scenarios can be handled by
Ignite? I couldn't find this in the official documentation.

1. If we want to add/drop/modify a column of the cache, can we update the
bean file directly while the node is running, or do we need to stop the
node and restart it? It would be really helpful if you can share sample
code or a documentation link.

2. How do we refresh the Ignite cache automatically, or schedule the cache
refresh? It would be really helpful if you can share sample code or a
documentation link.

3. Is incremental refresh allowed? It would be really helpful if you can
share sample code or a documentation link.

4. Is there any other way to load the caches faster than the Cache JDBC
POJO store? It would be really helpful if you can share sample code or a
documentation link.





Re: Getting Spring XML exception when using Ignite .NET with Kubernetes

2019-12-16 Thread Pavel Tupitsyn
> UnknownHostException: kubernetes.default.svc.cluster.local

Looks like you are not running within Kubernetes. This address is available
from within Pods.
https://github.com/kubernetes/dns/blob/master/docs/specification.md
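For reference, the IP finder that throws this exception is typically wired up like the fragment below (the namespace and service name here are placeholders — adjust them to your deployment); it can only resolve the Kubernetes API DNS name when the node runs inside a Pod:

```xml
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
  <property name="ipFinder">
    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
      <!-- Placeholders: use your namespace and the headless service name. -->
      <property name="namespace" value="default"/>
      <property name="serviceName" value="ignite"/>
    </bean>
  </property>
</bean>
```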


On Mon, Dec 16, 2019 at 2:30 PM camer314 
wrote:

> I fixed this by following
>
> http://apache-ignite-users.70518.x6.nabble.com/ignite-kubernetes-seems-to-be-missing-the-jackson-annotations-dependency-td25670.html
>
> (Copying a missing JAR)
>
> This gets past that error and now I am left with the following but thats
> probably more environmental...
>
> class org.apache.ignite.spi.IgniteSpiException: Failed to retrieve Ignite
> pods IP addresses.
> at
>
> org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder.getRegisteredAddresses(TcpDiscoveryKubernetesIpFinder.java:172)
> at
>
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.registeredAddresses(TcpDiscoverySpi.java:1900)
> at
>
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.resolvedAddresses(TcpDiscoverySpi.java:1848)
> at
>
> org.apache.ignite.spi.discovery.tcp.ServerImpl.sendJoinRequestMessage(ServerImpl.java:1049)
> at
>
> org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:910)
> at
>
> org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:391)
> at
>
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:2020)
> at
>
> org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:297)
> at
>
> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:939)
> at
>
> org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1682)
> at
> org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1066)
> at
>
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2038)
> at
>
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1730)
> at
> org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1158)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:656)
> at
>
> org.apache.ignite.internal.processors.platform.PlatformAbstractBootstrap.start(PlatformAbstractBootstrap.java:43)
> at
>
> org.apache.ignite.internal.processors.platform.PlatformIgnition.start(PlatformIgnition.java:75)
> Caused by: java.net.UnknownHostException:
> kubernetes.default.svc.cluster.local
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Getting Spring XML exception when using Ignite .NET with Kubernetes

2019-12-16 Thread camer314
I fixed this by following
http://apache-ignite-users.70518.x6.nabble.com/ignite-kubernetes-seems-to-be-missing-the-jackson-annotations-dependency-td25670.html

(Copying a missing JAR)

This gets past that error; now I am left with the following, but that's
probably more environmental...

class org.apache.ignite.spi.IgniteSpiException: Failed to retrieve Ignite
pods IP addresses.
at
org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder.getRegisteredAddresses(TcpDiscoveryKubernetesIpFinder.java:172)
at
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.registeredAddresses(TcpDiscoverySpi.java:1900)
at
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.resolvedAddresses(TcpDiscoverySpi.java:1848)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl.sendJoinRequestMessage(ServerImpl.java:1049)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:910)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:391)
at
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:2020)
at
org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:297)
at
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:939)
at
org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1682)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1066)
at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2038)
at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1730)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1158)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:656)
at
org.apache.ignite.internal.processors.platform.PlatformAbstractBootstrap.start(PlatformAbstractBootstrap.java:43)
at
org.apache.ignite.internal.processors.platform.PlatformIgnition.start(PlatformIgnition.java:75)
Caused by: java.net.UnknownHostException:
kubernetes.default.svc.cluster.local



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Getting Spring XML exception when using Ignite .NET with Kubernetes

2019-12-16 Thread Pavel Tupitsyn
> Seems like the nuget package has different versions bundled together.

Oh, you are right. Looks like an issue with the release build on CI, I'll
double check.
Thanks for pointing this out.

On Mon, Dec 16, 2019 at 2:05 PM camer314 
wrote:

> Ah ok, let me try that.
>
> I was using your Docker project
> https://github.com/ptupitsyn/ignite-net-docker
>   . Seems like the nuget
> package has different versions bundled together.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Getting Spring XML exception when using Ignite .NET with Kubernetes

2019-12-16 Thread camer314
Pruning the list down to the bin distribution gets past that error but now I
am getting:

Unhandled Exception: Apache.Ignite.Core.Common.IgniteException: Java class
is not found (did you set IGNITE_HOME environment variable?):
com/fasterxml/jackson/annotation/JsonView --->
Apache.Ignite.Core.Common.JavaException: java.lang.NoClassDefFoundError:
com/fasterxml/jackson/annotation/JsonView
at
com.fasterxml.jackson.databind.introspect.JacksonAnnotationIntrospector.(JacksonAnnotationIntrospector.java:37)
at
com.fasterxml.jackson.databind.ObjectMapper.(ObjectMapper.java:291)
at
org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder.getRegisteredAddresses(TcpDiscoveryKubernetesIpFinder.java:151)
at
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.registeredAddresses(TcpDiscoverySpi.java:1900)
at
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.resolvedAddresses(TcpDiscoverySpi.java:1848)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl.sendJoinRequestMessage(ServerImpl.java:1049)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:910)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:391)
at
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:2020)
at
org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:297)
at
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:939)
at
org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1682)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1066)
at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2038)
at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1730)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1158)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:656)
at
org.apache.ignite.internal.processors.platform.PlatformAbstractBootstrap.start(PlatformAbstractBootstrap.java:43)
at
org.apache.ignite.internal.processors.platform.PlatformIgnition.start(PlatformIgnition.java:75)
Caused by: java.lang.ClassNotFoundException:
com.fasterxml.jackson.annotation.JsonView
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 19 more

   at Apache.Ignite.Core.Impl.Unmanaged.Jni.Env.ExceptionCheck()
   at Apache.Ignite.Core.Impl.Unmanaged.UnmanagedUtils.IgnitionStart(Env
env, String cfgPath, String gridName, Boolean clientMode, Boolean
userLogger, Int64 igniteId, Boolean redirectConsole)
   at Apache.Ignite.Core.Ignition.Start(IgniteConfiguration cfg)
   --- End of inner exception stack trace ---
   at Apache.Ignite.Core.Ignition.Start(IgniteConfiguration cfg)
   at Apache.Ignite.Docker.Program.Main() in /app/Program.cs:line 41




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Getting Spring XML exception when using Ignite .NET with Kubernetes

2019-12-16 Thread camer314
Ah ok, let me try that.

I was using your Docker project 
https://github.com/ptupitsyn/ignite-net-docker
  . Seems like the nuget
package has different versions bundled together.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Getting Spring XML exception when using Ignite .NET with Kubernetes

2019-12-16 Thread Pavel Tupitsyn
You seem to have a mix-up of different Spring versions:
- /app/libs/spring-core-5.0.8.RELEASE.jar
- /app/libs/spring-core-4.3.18.RELEASE.jar

Please make sure to download Ignite 2.7.6 binary distro and only use jar
files from there. Here is the list I have in `libs` folder:
annotations-13.0.jar
cache-api-1.0.0.jar
commons-logging-1.1.1.jar
ignite-core-2.7.6.jar
ignite-kubernetes-2.7.6.jar
ignite-shmem-1.0.0.jar
ignite-spring-2.7.6.jar
jackson-core-2.9.6.jar
jackson-databind-2.9.6.jar
spring-aop-4.3.18.RELEASE.jar
spring-beans-4.3.18.RELEASE.jar
spring-context-4.3.18.RELEASE.jar
spring-core-4.3.18.RELEASE.jar
spring-expression-4.3.18.RELEASE.jar
spring-jdbc-4.3.18.RELEASE.jar
spring-tx-4.3.18.RELEASE.jar
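The clash Pavel describes (spring-core 4.x and 5.x side by side) can be spotted mechanically. A small shell helper, not part of the original thread — the function name and the version-stripping regex are illustrative assumptions — that reports any artifact present under more than one version in a libs folder:

```shell
# Report jar base names that appear with more than one version in a libs
# directory -- a quick way to spot clashes such as spring-core 4.x vs 5.x.
find_dup_jars() {
  for f in "$1"/*.jar; do
    [ -e "$f" ] || continue          # skip if the glob matched nothing
    basename "$f"
  done \
    | sed -E 's/-[0-9][0-9A-Za-z.]*(\.RELEASE)?\.jar$//' \
    | sort | uniq -d                  # print only base names seen twice+
}

# Example: find_dup_jars /app/libs
```

An empty result means each artifact is present in exactly one version.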


On Mon, Dec 16, 2019 at 1:43 PM camer314 
wrote:

>  No problem,
>
> This is everything that is logged:
>
> at
>
> org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:761)
> at
>
> org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:869)
> at
>
> org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:550)
> at
>
> org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.applicationContext(IgniteSpringHelperImpl.java:381)
> ... 6 more
> Caused by: org.springframework.beans.factory.BeanCreationException: Error
> creating bean with name
> 'org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi#eb21112' defined in
> URL
> [https://wtwdeeplearning.blob.core.windows.net/ignite/spring_config.xml]:
> Cannot create inner bean
>
> 'org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder#1786f9d5'
> of type
>
> [org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder]
> while setting bean property 'ipFinder'; nested exception is
> org.springframework.beans.factory.BeanCreationException: Error creating
> bean
> with name
>
> 'org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder#1786f9d5'
> defined in URL
> [https://wtwdeeplearning.blob.core.windows.net/ignite/spring_config.xml]:
> Initialization of bean failed; nested exception is
> java.lang.NullPointerException
> at
>
> org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:313)
> at
>
> org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:122)
> at
>
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1537)
> at
>
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1284)
> at
>
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:553)
> at
>
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:483)
> at
>
> org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:299)
> ... 19 more
> Caused by: org.springframework.beans.factory.BeanCreationException: Error
> creating bean with name
>
> 'org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder#1786f9d5'
> defined in URL
> [https://wtwdeeplearning.blob.core.windows.net/ignite/spring_config.xml]:
> Initialization of bean failed; nested exception is
> java.lang.NullPointerException
> at
>
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:564)
> at
>
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:483)
> at
>
> org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:299)
> ... 25 more
> Caused by: java.lang.NullPointerException
> at
>
> org.springframework.core.BridgeMethodResolver.findBridgedMethod(BridgeMethodResolver.java:60)
> at
>
> org.springframework.beans.GenericTypeAwarePropertyDescriptor.(GenericTypeAwarePropertyDescriptor.java:70)
> at
>
> org.springframework.beans.CachedIntrospectionResults.buildGenericTypeAwarePropertyDescriptor(CachedIntrospectionResults.java:366)
> at
>
> org.springframework.beans.CachedIntrospectionResults.(CachedIntrospectionResults.java:302)
> at
>
> org.springframework.beans.CachedIntrospectionResults.forClass(CachedIntrospectionResults.java:189)
> at
>
> 

Re: Getting Spring XML exception when using Ignite .NET with Kubernetes

2019-12-16 Thread camer314
 No problem,

This is everything that is logged:

at
org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:761)
at
org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:869)
at
org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:550)
at
org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.applicationContext(IgniteSpringHelperImpl.java:381)
... 6 more
Caused by: org.springframework.beans.factory.BeanCreationException: Error
creating bean with name
'org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi#eb21112' defined in URL
[https://wtwdeeplearning.blob.core.windows.net/ignite/spring_config.xml]:
Cannot create inner bean
'org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder#1786f9d5'
of type
[org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder]
while setting bean property 'ipFinder'; nested exception is
org.springframework.beans.factory.BeanCreationException: Error creating bean
with name
'org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder#1786f9d5'
defined in URL
[https://wtwdeeplearning.blob.core.windows.net/ignite/spring_config.xml]:
Initialization of bean failed; nested exception is
java.lang.NullPointerException
at
org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:313)
at
org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:122)
at
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1537)
at
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1284)
at
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:553)
at
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:483)
at
org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:299)
... 19 more
Caused by: org.springframework.beans.factory.BeanCreationException: Error
creating bean with name
'org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder#1786f9d5'
defined in URL
[https://wtwdeeplearning.blob.core.windows.net/ignite/spring_config.xml]:
Initialization of bean failed; nested exception is
java.lang.NullPointerException
at
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:564)
at
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:483)
at
org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:299)
... 25 more
Caused by: java.lang.NullPointerException
at
org.springframework.core.BridgeMethodResolver.findBridgedMethod(BridgeMethodResolver.java:60)
at
org.springframework.beans.GenericTypeAwarePropertyDescriptor.(GenericTypeAwarePropertyDescriptor.java:70)
at
org.springframework.beans.CachedIntrospectionResults.buildGenericTypeAwarePropertyDescriptor(CachedIntrospectionResults.java:366)
at
org.springframework.beans.CachedIntrospectionResults.(CachedIntrospectionResults.java:302)
at
org.springframework.beans.CachedIntrospectionResults.forClass(CachedIntrospectionResults.java:189)
at
org.springframework.beans.BeanWrapperImpl.getCachedIntrospectionResults(BeanWrapperImpl.java:173)
at
org.springframework.beans.BeanWrapperImpl.getLocalPropertyHandler(BeanWrapperImpl.java:226)
at
org.springframework.beans.BeanWrapperImpl.getLocalPropertyHandler(BeanWrapperImpl.java:63)
at
org.springframework.beans.AbstractNestablePropertyAccessor.getPropertyHandler(AbstractNestablePropertyAccessor.java:737)
at
org.springframework.beans.AbstractNestablePropertyAccessor.isWritableProperty(AbstractNestablePropertyAccessor.java:569)
at
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1539)
at
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1284)
at
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:553)
... 27 more
   at 

Re: Getting Spring XML exception when using Ignite .NET with Kubernetes

2019-12-16 Thread Pavel Tupitsyn
I tried your scenario, and after adding jars from
optional/ignite-kubernetes folder of binary distribution, it worked for me.
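"Adding jars from optional/ignite-kubernetes" amounts to copying the module's jars next to the other Ignite libs. A minimal sketch — the helper name and both paths are illustrative, not from the thread:

```shell
# Copy the ignite-kubernetes optional module from an unpacked binary
# distribution into the runtime libs folder. Adjust the paths to match
# your image layout.
copy_k8s_module() {
  src="$1/libs/optional/ignite-kubernetes"
  dst="$2"
  mkdir -p "$dst"
  cp "$src"/*.jar "$dst"/
}

# Example: copy_k8s_module /opt/apache-ignite-2.7.6-bin /app/libs
```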

Can you please send full exception details (entire stack trace)?

On Mon, Dec 16, 2019 at 9:59 AM camer314 
wrote:

> I figured that I needed the optional Kubernetes JARs which are part of the
> binary distribution but not the NuGet package, so I added the
> ignite-kubernetes libs to my docker image as well. Now I get a different
> error, a null pointer:
>
> Caused by: org.springframework.beans.factory.BeanCreationException: Error
> creating bean with name
> 'org.apache.ignite.configuration.IgniteConfiguration#0' defined in URL
> [https://wtwdeeplearning.blob.core.windows.net/ignite/spring_config.xml]:
> Cannot create inner bean
> 'org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi#6eceb130' of type
> [org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi] while setting bean
> property 'discoverySpi'; nested exception is
> org.springframework.beans.factory.BeanCreationException: Error creating
> bean
> with name 'org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi#6eceb130'
> defined in URL
> [https://wtwdeeplearning.blob.core.windows.net/ignite/spring_config.xml]:
> Cannot create inner bean
>
> 'org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder#2eda0940'
> of type
>
> [org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder]
> while setting bean property 'ipFinder'; nested exception is
> org.springframework.beans.factory.BeanCreationException: Error creating
> bean
> with name
>
> 'org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder#2eda0940'
> defined in URL
> [https://wtwdeeplearning.blob.core.windows.net/ignite/spring_config.xml]:
> Initialization of bean failed; nested exception is
> java.lang.NullPointerException
> at
>
> org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:313)
> at
>
> org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:122)
>
> My libs folder looks like this:
>
> /app/libs/spring-data-commons-2.0.9.RELEASE.jar
> /app/libs/commons-io-2.6.jar
> /app/libs/spring-core-5.0.8.RELEASE.jar
> /app/libs/spring-core-4.3.18.RELEASE.jar
> /app/libs/ignite-spring-data_2.0-2.7.6.jar
> /app/libs/commons-rng-core-1.0.jar
> /app/libs/ignite-shmem-1.0.0.jar
> /app/libs/cache-api-1.0.0.jar
> /app/libs/commons-logging-1.1.1.jar
> /app/libs/commons-collections-3.2.2.jar
> /app/libs/spring-expression-4.3.18.RELEASE.jar
> /app/libs/commons-math3-3.6.1.jar
> /app/libs/spring-aop-4.3.18.RELEASE.jar
> /app/libs/spring-tx-4.3.18.RELEASE.jar
> /app/libs/lucene-core-7.4.0.jar
> /app/libs/spring-tx-5.0.8.RELEASE.jar
> /app/libs/ignite-core-2.7.6.jar
> /app/libs/ignite-indexing-2.7.6.jar
> /app/libs/spring-beans-4.3.18.RELEASE.jar
> /app/libs/commons-beanutils-1.9.3.jar
> /app/libs/ignite-spring-data-2.7.6.jar
> /app/libs/commons-logging-1.2.jar
> /app/libs/h2-1.4.197.jar
> /app/libs/commons-codec-1.11.jar
> /app/libs/spring-beans-5.0.8.RELEASE.jar
> /app/libs/spring-data-commons-1.13.14.RELEASE.jar
> /app/libs/spring-context-4.3.18.RELEASE.jar
> /app/libs/spring-jdbc-4.3.18.RELEASE.jar
> /app/libs/lucene-queryparser-7.4.0.jar
> /app/libs/spring-context-5.0.8.RELEASE.jar
> /app/libs/commons-lang-2.6.jar
> /app/libs/commons-beanutils-1.9.2.jar
> /app/libs/commons-rng-simple-1.0.jar
> /app/libs/ignite-spring-2.7.6.jar
> /app/libs/lucene-analyzers-common-7.4.0.jar
> /app/libs/ignite-kubernetes-2.7.6.jar
> /app/libs/README.txt
> /app/libs/jackson-databind-2.9.6.jar
> /app/libs/jackson-core-2.9.6.jar
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Getting Spring XML exception when using Ignite .NET with Kubernetes

2019-12-15 Thread camer314
I figured that I needed the optional Kubernetes JARs which are part of the
binary distribution but not the NuGet package, so I added the
ignite-kubernetes libs to my docker image as well. Now I get a different
error, a null pointer:

Caused by: org.springframework.beans.factory.BeanCreationException: Error
creating bean with name
'org.apache.ignite.configuration.IgniteConfiguration#0' defined in URL
[https://wtwdeeplearning.blob.core.windows.net/ignite/spring_config.xml]:
Cannot create inner bean
'org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi#6eceb130' of type
[org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi] while setting bean
property 'discoverySpi'; nested exception is
org.springframework.beans.factory.BeanCreationException: Error creating bean
with name 'org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi#6eceb130'
defined in URL
[https://wtwdeeplearning.blob.core.windows.net/ignite/spring_config.xml]:
Cannot create inner bean
'org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder#2eda0940'
of type
[org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder]
while setting bean property 'ipFinder'; nested exception is
org.springframework.beans.factory.BeanCreationException: Error creating bean
with name
'org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder#2eda0940'
defined in URL
[https://wtwdeeplearning.blob.core.windows.net/ignite/spring_config.xml]:
Initialization of bean failed; nested exception is
java.lang.NullPointerException
at
org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:313)
at
org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:122)

My libs folder looks like this:

/app/libs/spring-data-commons-2.0.9.RELEASE.jar
/app/libs/commons-io-2.6.jar
/app/libs/spring-core-5.0.8.RELEASE.jar
/app/libs/spring-core-4.3.18.RELEASE.jar
/app/libs/ignite-spring-data_2.0-2.7.6.jar
/app/libs/commons-rng-core-1.0.jar
/app/libs/ignite-shmem-1.0.0.jar
/app/libs/cache-api-1.0.0.jar
/app/libs/commons-logging-1.1.1.jar
/app/libs/commons-collections-3.2.2.jar
/app/libs/spring-expression-4.3.18.RELEASE.jar
/app/libs/commons-math3-3.6.1.jar
/app/libs/spring-aop-4.3.18.RELEASE.jar
/app/libs/spring-tx-4.3.18.RELEASE.jar
/app/libs/lucene-core-7.4.0.jar
/app/libs/spring-tx-5.0.8.RELEASE.jar
/app/libs/ignite-core-2.7.6.jar
/app/libs/ignite-indexing-2.7.6.jar
/app/libs/spring-beans-4.3.18.RELEASE.jar
/app/libs/commons-beanutils-1.9.3.jar
/app/libs/ignite-spring-data-2.7.6.jar
/app/libs/commons-logging-1.2.jar
/app/libs/h2-1.4.197.jar
/app/libs/commons-codec-1.11.jar
/app/libs/spring-beans-5.0.8.RELEASE.jar
/app/libs/spring-data-commons-1.13.14.RELEASE.jar
/app/libs/spring-context-4.3.18.RELEASE.jar
/app/libs/spring-jdbc-4.3.18.RELEASE.jar
/app/libs/lucene-queryparser-7.4.0.jar
/app/libs/spring-context-5.0.8.RELEASE.jar
/app/libs/commons-lang-2.6.jar
/app/libs/commons-beanutils-1.9.2.jar
/app/libs/commons-rng-simple-1.0.jar
/app/libs/ignite-spring-2.7.6.jar
/app/libs/lucene-analyzers-common-7.4.0.jar
/app/libs/ignite-kubernetes-2.7.6.jar
/app/libs/README.txt
/app/libs/jackson-databind-2.9.6.jar
/app/libs/jackson-core-2.9.6.jar



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Getting Spring XML exception when using Ignite .NET with Kubernetes

2019-12-15 Thread camer314
Hello,

I have built a docker image of my Ignite .NET application and am
instantiating this as a pod in my Kubernetes deployment. This all works as
expected and Ignite initializes fine.

However, I want my nodes to be discoverable, so I am attempting to use the
example Spring XML file referenced below.

I have also constructed my .NET code to include this in the configuration:

Console.WriteLine($"Starting Ignite.NET...");
var config_url = Environment.GetEnvironmentVariable("CONFIG_URI");

if (String.IsNullOrEmpty(config_url))
{
    // Fall back to the example persistence config from the Ignite repo.
    config_url = "https://raw.githubusercontent.com/apache/ignite/master/modules/kubernetes/config/example-kube-persistence.xml";
}

Console.WriteLine($"Using config from [{config_url}]");

var config = new IgniteConfiguration()
{
    SpringConfigUrl = config_url,
    JvmOptions = new List<string>()
    {
        "-DIGNITE_QUIET=false"
    }
};

Ignition.Start(config);

Also, I have the binary distribution of Ignite 2.7.6 in the /libs
subdirectory of my app folder.
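The example file wires discovery through TcpDiscoveryKubernetesIpFinder; the relevant section looks roughly like the sketch below (the namespace and serviceName values are illustrative placeholders, not taken from the thread):

```xml
<property name="discoverySpi">
  <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
    <property name="ipFinder">
      <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
        <!-- Namespace and headless service that expose the Ignite pods;
             the values below are placeholders. -->
        <property name="namespace" value="ignite"/>
        <property name="serviceName" value="ignite"/>
      </bean>
    </property>
  </bean>
</property>
```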

When this runs I get the following error, but there appears to be no
descriptive message as to what's causing it:

Caused by: org.springframework.beans.factory.BeanCreationException: Error
creating bean with name
'org.apache.ignite.configuration.DataStorageConfiguration#4524411f' defined
in URL
[https://raw.githubusercontent.com/apache/ignite/master/modules/kubernetes/config/example-kube-persistence.xml]:
Cannot create inner bean
'org.apache.ignite.configuration.DataRegionConfiguration#544a2ea6' of type
[org.apache.ignite.configuration.DataRegionConfiguration] while setting bean
property 'defaultDataRegionConfiguration'; nested exception is
org.springframework.beans.factory.BeanCreationException: Error creating bean
with name 'org.apache.ignite.configuration.DataRegionConfiguration#544a2ea6'
defined in URL
[https://raw.githubusercontent.com/apache/ignite/master/modules/kubernetes/config/example-kube-persistence.xml]:
Initialization of bean failed; nested exception is
java.lang.NullPointerException
at
org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:313)
at
org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:122)
at
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1537)
at
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1284)
at
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:553)
at
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:483)
at
org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:299)



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Complex Event Processing Using Ignite Streaming

2019-09-17 Thread Stephen Darlington
Unfortunately CEP isn’t really a focus of the project. Having said that, it is 
possible. For example, the old documentation for sliding windows still worked 
the last time I checked:

https://apacheignite.readme.io/v1.4/docs/sliding-windows 
<https://apacheignite.readme.io/v1.4/docs/sliding-windows>

And, of course, you can use things like Continuous Queries to listen to
changes in caches.

The difference between Ignite and Spark, Flink, etc. is mostly down to 
programming model and storage. Neither Spark nor Flink has persistence, but 
Ignite does (both in memory and on disk). All three have different programming 
models; which is best depends on your use case. Many people choose Ignite as it 
does “everything.” Others pick a combination, taking advantage of the 
specialisation in some tools at the expense of more integration work.

Regards,
Stephen

> On 17 Sep 2019, at 09:24, Ignite Enthusiast  wrote:
> 
> Anyone  ??
> 
> On Wednesday, September 11, 2019, 3:29:32 PM GMT+5:30, Ignite Enthusiast 
>  wrote:
> 
> 
> I am trying to build a CEP where a Complex event needs to be generated on a 
> set of input events (3 Chassis hot events) over a specified time window (10 
> seconds, for eg) and I am trying to evaluate Apache Ignite for this.
> 
> Are there any examples of how to do Complex Event Processing using Ignite? 
> The following wiki page hardly seems to help.
> 
> https://apacheignite.readme.io/docs/streaming--cep 
> <https://apacheignite.readme.io/docs/streaming--cep>
> 
> 
> Also, how does Ignite CEP implementation compare with others ? (like Apache 
> Spark, Apache Flink, etc)




Re: Complex Event Processing Using Ignite Streaming

2019-09-17 Thread Ignite Enthusiast
 Anyone  ??

On Wednesday, September 11, 2019, 3:29:32 PM GMT+5:30, Ignite Enthusiast 
 wrote:  
 
 I am trying to build a CEP where a Complex event needs to be generated on a 
set of input events (3 Chassis hot events) over a specified time window (10 
seconds, for eg) and I am trying to evaluate Apache Ignite for this.
Are there any examples of how to do Complex Event Processing using Ignite?  The 
following wiki page hardly seems to help.
https://apacheignite.readme.io/docs/streaming--cep

Also, how does Ignite CEP implementation compare with others ? (like Apache 
Spark, Apache Flink, etc)
  

Complex Event Processing Using Ignite Streaming

2019-09-11 Thread Ignite Enthusiast
I am trying to build a CEP where a Complex event needs to be generated on a set 
of input events (3 Chassis hot events) over a specified time window (10 
seconds, for eg) and I am trying to evaluate Apache Ignite for this.
Are there any examples of how to do Complex Event Processing using Ignite?  The 
following wiki page hardly seems to help.
https://apacheignite.readme.io/docs/streaming--cep

Also, how does Ignite CEP implementation compare with others ? (like Apache 
Spark, Apache Flink, etc)


Re: Using Ignite Native Persistence as a "temporary durable" cache

2019-09-06 Thread Alexander Korenshteyn
original question:
ferhadcebi...@gmail.com


 Thanks for the answer. But how will Ignite store that data? Will it
append to the end of the WAL? If so, then it is surely faster than storing
cache operations in some 3rd-party database.


Ignite does store data to WAL first, but per my benchmarks, storing 100,000
records took a few seconds, and deleting took a few seconds more.

 You can experiment using:

https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/persistentstore/PersistentStoreExample.java


Here is sample code: (modified section in above example)
[image: image.png]

On Thu, Sep 5, 2019 at 12:17 PM Alexander Korenshteyn <
alexanderko...@gmail.com> wrote:

> Hello,
>Ignite native persistence has a good track record, is fast and
> reliable, you can use it in your application.
>
>Take a look at the following example of how to use a streamer to
> quickly insert data:
> https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/persistentstore/PersistentStoreExample.java
>
>Use cache.removeAll() -- to quickly remove all entries once you are
> done.
> Thanks, Alex
>
> On Thu, Sep 5, 2019 at 10:23 AM Farhad Jabiyev 
> wrote:
>
>> Hi all,
>>
>> We have an MS SQL database server which contains all data. Our application
>> will fetch some data from the database server and put it in the cache.
>> Then, over 5-10 seconds, we will make some updates to those objects and
>> push the changes to the Ignite in-memory cache. After 5-10 seconds we will
>> take those changes, sync them with the database as a bulk operation, and
>> then clear that permanent storage. So, we need some permanent storage to
>> hold those updates for 5-10 seconds.
>> We will work with at most 100,000 entities.
>>
>> The idea behind this flow is that we can't scale the DB right now and
>> users are already putting load on the database.
>>
>> We can't decide whether we have to use Native Persistence or another
>> 3rd-party database like MariaDB or PostgreSQL for storing those cache
>> operations for 5-10 seconds.
>>
>> Will Ignite work fast if we use native persistence and clear the
>> cache periodically?
>


Re: Using Ignite Native Persistence as a "temporary durable" cache

2019-09-05 Thread Alexander Korenshteyn
Hello,
   Ignite native persistence has a good track record, is fast and reliable,
you can use it in your application.

   Take a look at the following example of how to use a streamer to quickly
insert data:
https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/persistentstore/PersistentStoreExample.java

   Use cache.removeAll() -- to quickly remove all entries once you are done.
Thanks, Alex

On Thu, Sep 5, 2019 at 10:23 AM Farhad Jabiyev 
wrote:

> Hi all,
>
> We have an MS SQL database server which contains all data. Our application
> will fetch some data from the database server and put it in the cache.
> Then, over 5-10 seconds, we will make some updates to those objects and
> push the changes to the Ignite in-memory cache. After 5-10 seconds we will
> take those changes, sync them with the database as a bulk operation, and
> then clear that permanent storage. So, we need some permanent storage to
> hold those updates for 5-10 seconds.
> We will work with at most 100,000 entities.
>
> The idea behind this flow is that we can't scale the DB right now and
> users are already putting load on the database.
>
> We can't decide whether we have to use Native Persistence or another
> 3rd-party database like MariaDB or PostgreSQL for storing those cache
> operations for 5-10 seconds.
>
> Will Ignite work fast if we use native persistence and clear the
> cache periodically?
>
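
The native persistence Alex recommends is enabled per data region in the node
configuration. A minimal Spring XML sketch, assuming Ignite 2.3 or later (the
4 GB region size is illustrative; note that a persistent cluster must also be
activated once after startup, e.g. via control.sh --activate):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="dataStorageConfiguration">
        <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            <property name="defaultDataRegionConfiguration">
                <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                    <!-- Persist this region's pages to disk. -->
                    <property name="persistenceEnabled" value="true"/>
                    <!-- Illustrative 4 GB cap for the in-memory portion. -->
                    <property name="maxSize" value="#{4L * 1024 * 1024 * 1024}"/>
                </bean>
            </property>
        </bean>
    </property>
</bean>
```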


Using Ignite Native Persistence as a "temporary durable" cache

2019-09-05 Thread Farhad Jabiyev
Hi all,

We have an MS SQL database server that contains all of our data. Our application
will fetch some data from the database server and put it into the cache. Then,
over 5-10 seconds, we will apply some updates to those objects and push
the changes to the Ignite in-memory cache. After 5-10 seconds we will
take those changes, sync them with the database as a bulk operation, and
then clear the permanent storage. So we need some permanent storage
to hold those updates for 5-10 seconds.
We will work with at most 100,000 entities.

The idea behind this flow is that we can't scale the DB right now, and users
are already putting load on the database.

We can't decide whether to use Native Persistence or another 3rd-party
database like MariaDB or PostgreSQL for storing those cache
operations for 5-10 seconds.

Will Ignite work fast if we use native persistence and clear the cache
periodically?


Re: Using Ignite as blob store?

2019-08-23 Thread colinc
From anecdotal experience of storing larger objects (up to, say, 10 MB) in
Ignite, I find that the overall access performance is significantly better
than storing lots of small objects. The main thing to watch out for is that
very large objects can cause unbalanced data distribution, similar to
over-use of affinity keys.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Using Ignite as blob store?

2019-08-23 Thread Pavel Kovalenko
Denis,

You can't set a page size greater than 16 KB due to our page memory
limitations.

On Thu, Aug 22, 2019 at 22:34, Denis Magda wrote:

> How about setting page size to more KBs or MBs based on the average value?
> That should work perfectly fine.
>
> -
> Denis
>
>
> On Thu, Aug 22, 2019 at 8:11 AM Shane Duan  wrote:
>
>> Thanks, Ilya. The blob size varies from a few KBs to a few MBs.
>>
>> Cheers,
>> Shane
>>
>>
>> On Thu, Aug 22, 2019 at 5:02 AM Ilya Kasnacheev <
>> ilya.kasnach...@gmail.com> wrote:
>>
>>> Hello!
>>>
>>> How large are these blobs? Ignite is going to divide blobs into <4k
>>> chunks. We have no special optimizations for storing large key-value pairs.
>>>
>>> Regards,
>>> --
>>> Ilya Kasnacheev
>>>
>>>
>>> On Thu, Aug 22, 2019 at 02:53, Shane Duan wrote:
>>>
 Hi Igniters, is it a good idea to use Ignite (with persistence) as a
 blob store? I ran some testing with a small dataset, and it seems to
 perform okay, even with small off-heap memory for the data region.

 Thanks!

 Shane

>>>
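
For anyone who wants to raise the page size within that limit: it is a
storage-level setting, specified in bytes. A hedged sketch for Ignite 2.3+
(`DataStorageConfiguration`; earlier 2.x versions expose a similar property on
`MemoryConfiguration`):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="dataStorageConfiguration">
        <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            <!-- Page size in bytes; 16 KB is the maximum accepted value. -->
            <property name="pageSize" value="#{16 * 1024}"/>
        </bean>
    </property>
</bean>
```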


Re: Using Ignite as blob store?

2019-08-23 Thread colinc
I understand from this post:
https://stackoverflow.com/questions/50116444/unable-to-increase-pagesize/50121410#50121410

that the maximum page size is 16K. Is that still true?





Re: Using Ignite as blob store?

2019-08-22 Thread Denis Magda
How about setting the page size to more KBs or MBs, based on the average value
size? That should work perfectly fine.

-
Denis


On Thu, Aug 22, 2019 at 8:11 AM Shane Duan  wrote:

> Thanks, Ilya. The blob size varies from a few KBs to a few MBs.
>
> Cheers,
> Shane
>
>
> On Thu, Aug 22, 2019 at 5:02 AM Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> How large are these blobs? Ignite is going to divide blobs into <4k
>> chunks. We have no special optimizations for storing large key-value pairs.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> On Thu, Aug 22, 2019 at 02:53, Shane Duan wrote:
>>
>>> Hi Igniters, is it a good idea to use Ignite (with persistence) as a blob
>>> store? I ran some testing with a small dataset, and it seems to perform
>>> okay, even with small off-heap memory for the data region.
>>>
>>> Thanks!
>>>
>>> Shane
>>>
>>


Re: Using Ignite as blob store?

2019-08-22 Thread Shane Duan
Thanks, Ilya. The blob size varies from a few KBs to a few MBs.

Cheers,
Shane


On Thu, Aug 22, 2019 at 5:02 AM Ilya Kasnacheev 
wrote:

> Hello!
>
> How large are these blobs? Ignite is going to divide blobs into <4k
> chunks. We have no special optimizations for storing large key-value pairs.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> On Thu, Aug 22, 2019 at 02:53, Shane Duan wrote:
>
>> Hi Igniters, is it a good idea to use Ignite (with persistence) as a blob
>> store? I ran some testing with a small dataset, and it seems to perform
>> okay, even with small off-heap memory for the data region.
>>
>> Thanks!
>>
>> Shane
>>
>


Re: Using Ignite as blob store?

2019-08-22 Thread Ilya Kasnacheev
Hello!

How large are these blobs? Ignite is going to divide blobs into <4k chunks.
We have no special optimizations for storing large key-value pairs.

Regards,
-- 
Ilya Kasnacheev


On Thu, Aug 22, 2019 at 02:53, Shane Duan wrote:

> Hi Igniters, is it a good idea to use Ignite (with persistence) as a blob
> store? I ran some testing with a small dataset, and it seems to perform
> okay, even with small off-heap memory for the data region.
>
> Thanks!
>
> Shane
>


Using Ignite as blob store?

2019-08-21 Thread Shane Duan
Hi Igniters, is it a good idea to use Ignite (with persistence) as a blob
store? I ran some testing with a small dataset, and it seems to perform
okay, even with small off-heap memory for the data region.

Thanks!

Shane


Re: Using Ignite as a IMDG Write Through Cache in Azure POC

2019-04-12 Thread Denis Magda
Hello,

Feel free to use Ignite .NET for that. Moreover, you have 2 options here:

   1. Use the .NET standard client (supports most of the APIs but connects to
   the cluster via a JVM process started internally). Here is how you can
   define its configuration for entry eviction:
   https://apacheignite-net.readme.io/docs/eviction-policies
   2. Use the .NET thin client (lightweight and doesn't start a JVM; it
   connects via a proxy server, which is not as fast yet, but this will be
   addressed in upcoming releases): https://apacheignite-net.readme.io/docs/thin-client

Btw, keep in mind that the SQL engine requires all the data to be in the
Ignite cluster - it won't go to SQL Server if anything is missing in RAM. Our
SQL engine can go to disk only if native persistence is enabled.
-
Denis


On Tue, Apr 9, 2019 at 6:57 AM Asadikhan  wrote:

> I want to set up a basic Ignite cluster to start playing around with. What I
> am trying to achieve is to set up 3 VMs in Azure that will make up the Ignite
> cluster. On the back end I want to use SQL Server (for my POC), which I will
> later replace with Cassandra (again for a POC - I think SQL would be easier
> to start with given my knowledge).
>
> The key thing I want to achieve is to use Ignite as a write-through cache. So
> if I ingest 1 TB of data, I want to retain, say, the most recent 100 GB of
> that in Ignite while the rest of it passes through to the backing SQL Server.
>
> I am more familiar with .NET, but I can use Java too if needed. Should I use
> Apache Ignite.NET for this or Apache Ignite (Java)? Also, can someone either
> point me to resources/documentation or give me a high-level breakdown of what
> I need to do to achieve this?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
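
The "retain the most recent 100 GB" requirement above maps to a bounded data
region with page eviction: entries beyond the cap are dropped from RAM while
the write-through store keeps the full data set. A sketch under the assumption
of Ignite 2.x (the region name and the 100 GB size are illustrative):

```xml
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
    <property name="name" value="recentData"/>
    <!-- Keep at most ~100 GB of pages in RAM. -->
    <property name="maxSize" value="#{100L * 1024 * 1024 * 1024}"/>
    <!-- Evict cold pages once the region is nearly full. -->
    <property name="pageEvictionMode" value="RANDOM_2_LRU"/>
</bean>
```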


Re: How to use enums in sql ‘where’ clause when using Ignite Web Console

2019-03-15 Thread aealexsandrov
Hi,

Could you please share your CacheConfiguration where you set the EnumField
as a field in QueryEntity (just to have the same configuration)? I will try
to reproduce your issue and check how it works.

BR,
Andrei





How to use enums in sql ‘where’ clause when using Ignite Web Console

2019-03-14 Thread Stanislav Bausov
I see there was a closed issue:
https://issues.apache.org/jira/browse/IGNITE-3595 but my Ignite Web Console
still shows enums like ‘com.example.MyEnum’, and when I try to query by one I
get:

Error: Hexadecimal string contains non-hex character: “ENUM_VALUE"; SQL
statement: select _key, * from MyModel where enum_col = 'ENUM_VALUE';
[90004-197]


Re: Some problems when using Ignite

2018-08-14 Thread Ilya Kasnacheev
Hello!

BinaryObject should solve your issues. Can you please show how the
problem manifests if you don't restart the cluster?

Regards,

-- 
Ilya Kasnacheev

2018-08-14 13:54 GMT+03:00 zym周煜敏 :

> Hi,
>
>
>
> Our company is now running a trial survey of Ignite. We found that if we want
> to alter the table schema, add a new table, or even add an index when we
> use Ignite SQL, we have to change our data-injection code,
> compile again, and then restart our Ignite cluster for all the
> changes to take effect. We have tried BinaryObject, but it still needs
> pre-setting of the table schema and a cluster restart. Are there any
> workarounds, since it's expensive to restart an industrial cluster and a
> restart may cause data loss? If not, will a later version of Ignite support
> dynamic changes to the table schema?
>
>
>
> Best Regards,
>
> Brian Zhou
>
>
>
>


Re: Some problems when using Ignite

2018-08-14 Thread dkarachentsev
Hi,

Dynamic schema changes are available only via SQL/JDBC [1].
BTW, caches created via SQL can be accessed from the Java API if you add the
SQL_PUBLIC_ prefix to the table name. For example: ignite.cache("SQL_PUBLIC_TABLENAME").

[1] https://apacheignite-sql.readme.io/docs/ddl

Thanks!
-Dmitry



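
The DDL route Dmitry links to lets the schema change at runtime, with no node
restart; for example, over JDBC or a SQL console (table, column, and index
names are illustrative; ALTER TABLE ... ADD COLUMN requires a recent Ignite
version):

```sql
CREATE TABLE Person (id LONG PRIMARY KEY, name VARCHAR)
  WITH "template=partitioned";

-- Both statements take effect cluster-wide without a restart.
ALTER TABLE Person ADD COLUMN age INT;
CREATE INDEX idx_person_name ON Person (name);
```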


Some problems when using Ignite

2018-08-14 Thread zym周煜敏
Hi,

Our company is now running a trial survey of Ignite. We found that if we want to 
alter the table schema, add a new table, or even add an index when we use 
Ignite SQL, we have to change our data-injection code, compile again, and 
then restart our Ignite cluster for all the changes to take effect. We have tried 
BinaryObject, but it still needs pre-setting of the table schema and a cluster 
restart. Are there any workarounds, since it's expensive to restart an industrial 
cluster and a restart may cause data loss? If not, will a later version of Ignite 
support dynamic changes to the table schema?

Best Regards,
Brian Zhou



Using Ignite grid during rebalancing operations

2018-04-02 Thread Raymond Wilson
I’ve been reading how Ignite manages rebalancing as a result of topology
changes here: https://apacheignite.readme.io/docs/rebalancing



It does not say so explicitly, but reading between the lines suggests that
the Ignite grid will respond to both read and write activity while
rebalancing is in progress when using ASYNC mode. Is this correct?



If another node is added to the grid midway through the grid rebalancing
from a previous addition of a node does grid rebalancing reorient to
handling the two new nodes, or does it rebalance for the first node
addition then rebalance again for the second new node addition?



Thanks,

Raymond.
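
For reference, the rebalancing mode discussed above is configured per cache,
and ASYNC is the default; a minimal sketch (the cache name is illustrative):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/>
    <!-- ASYNC rebalancing keeps the cache serving reads and writes
         while data moves between nodes. -->
    <property name="rebalanceMode" value="ASYNC"/>
</bean>
```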


Re: Error in executing hadoop job using ignite

2018-02-26 Thread bbkstha
Hi,

Were you able to resolve this issue? If yes, can you share your solution?





RE: Error serialising arrays using Ignite 2.2 C# client

2017-11-13 Thread Raymond Wilson
Thanks Alexey.

-Original Message-
From: Alexey Popov [mailto:tank2.a...@gmail.com]
Sent: Tuesday, November 14, 2017 5:09 AM
To: user@ignite.apache.org
Subject: Re: Error serialising arrays using Ignite 2.2 C# client

Hi Raymond,

You are right. True multidimensional arrays are not currently supported by
the binary serializer (C#).
Jagged arrays work fine, so you can use them, or just a one-dimensional
array with 2D-index calculation.

Anyway, I opened a ticket:
https://issues.apache.org/jira/browse/IGNITE-6896
You can track progress on this issue.

Thank you,
Alexey





Re: Error serialising arrays using Ignite 2.2 C# client

2017-11-13 Thread Alexey Popov
Hi Raymond,

You are right. True multidimensional arrays are not currently supported by the
binary serializer (C#).
Jagged arrays work fine, so you can use them, or just a one-dimensional array
with 2D-index calculation.

Anyway, I opened a ticket: https://issues.apache.org/jira/browse/IGNITE-6896
You can track progress on this issue.

Thank you,
Alexey



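
The one-dimensional-array workaround mentioned above amounts to computing a
flat index as row * width + col. A small self-contained sketch (shown in Java;
the same arithmetic applies in C#, and the class name is illustrative):

```java
// A logical 2-D grid of floats stored in a flat 1-D array,
// so it serializes as an ordinary single-dimensional array.
class FlatGrid {
    private final float[] cells;
    private final int width;

    FlatGrid(int rows, int cols) {
        this.cells = new float[rows * cols];
        this.width = cols;
    }

    // Map (row, col) onto the flat index row * width + col.
    float get(int row, int col) {
        return cells[row * width + col];
    }

    void set(int row, int col, float value) {
        cells[row * width + col] = value;
    }

    public static void main(String[] args) {
        FlatGrid grid = new FlatGrid(32, 32);
        grid.set(3, 7, 1.5f);
        System.out.println(grid.get(3, 7)); // prints 1.5
    }
}
```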


Error serialising arrays using Ignite 2.2 C# client

2017-11-10 Thread Raymond Wilson
I’m using the Ignite 2.2 C# client to make ComputeTask calls.



I have a type in an argument to one of those calls that is a two
dimensional array of floats, like this:



[Serializable]

public float[,] Data = new float[32, 32];



When this field is present in a structure being serialized (either in the
argument, or in the response), I get the following error (when the field is
in the result):



Call completed successfully, but result serialization failed […,
serializationErrMsg=Expression of type 'System.Single[,]' cannot be used
for parameter of type 'System.Single[]']

This is odd as the serializer seems to be getting one dimensional and two
dimensional arrays confused.



If I change the element being serialized to a single dimensional array (as
below), then the serialization is fine.



[Serializable]

public float[] Data = new float[1024];



Are multi-dimensional arrays expected to be supported? Note: This is a
rectangular array, not a jagged array (which would be possible if the
definition was float[][]), which I expect would give the serialiser more
trouble.



Thanks,

Raymond.


Re: Using ignite with spark

2017-09-22 Thread Patrick Brunmayr
Hello Val

First of all thx for this answer. Let me explain our use case.

*What we are doing*

Our company provides a monitoring solution for machines in the manufacturing
industry. We have a hardware logger attached to each machine which collects
up to 6 different metrics (like power, piece count). These metrics are
sampled on a per-second basis and sent to our cloud every minute. Data is
currently stored in a Cassandra cluster.

*For the math of that*

One metric will generate about 33 million data points per year, meaning all
six metrics will cause a total of 100 million data points per machine per
year. Let's say we have about 2000 machines out there; it's very obvious that
we are talking about terabytes of metric data.

*The goal*

We need to do some analytics on this data to provide reports for our
customers. Therefore we need to do all kind of transformations, filtering
and joining on that data. We also need support for secondary indexes and
grouping! This was the reason we chose spark for this kind of job. We want
to speed up the spark calculations with Ignite to provide a better
experience for our customers.

My idea was to use Ignite as a read-through cache over our Cassandra cluster,
combining this with Spark SQL. The data for the calculations should only
stay in the cache during the calculations and can easily be discarded
afterwards.


Now I need some information on how to set up my cluster correctly for that use
case. I don't know how many nodes I need, how much RAM, or whether I
should put my Ignite nodes on the Spark workers or create a separate
cluster. I need this information for cost estimates.

Hope that helps a bit

Thx










2017-09-22 5:12 GMT+02:00 Valentin Kulichenko :

> Hello Patrick,
>
> See my comments below.
>
> Most of your questions don't have a generic answer and would heavily
> depend on your use case. Would you mind giving some more details about it
> so that I can give more specific suggestions?
>
> -Val
>
> On Thu, Sep 21, 2017 at 8:24 AM, Patrick Brunmayr <
> patrick.brunm...@kpibench.com> wrote:
>
>> Hello
>>
>>
>>- What is currently the best practice of deploying Ignite with Spark ?
>>
>>
>>- Should the Ignite node sit on the same machine as the Spark
>>executor ?
>>
>>
> Ignite can run either on same boxes where Spark runs, or as a separate
> cluster, and both approaches have their pros and cons.
>
>
>> According to this documentation
>>  Spark
>> should be given 75% of machine memory but what is left for Ignite then ?
>>
>> In general, Spark can run well with anywhere from *8 GB to hundreds of
>>> gigabytes* of memory per machine. In all cases, we recommend allocating
>>> only at most 75% of the memory for Spark; leave the rest for the operating
>>> system and buffer cache.
>>
>>
> Documentation states that you should give *at most* 75% to make sure the OS
> has a safe cushion for its own purposes. If Ignite runs along with Spark, the
> amount of memory allocated to Spark should be less than that maximum, of
> course.
>
>
>>
>>- Don't they battle for memory ?
>>
>>
> You should configure both Spark and Ignite so that they never try to
> consume more memory than physically available, also leaving some for OS.
> This way there will be no conflict.
>
>>
>>-
>>- Should i give the memory to Ignite or Spark ?
>>
>>
> Again, this heavily depends on use case and on how heavily you use both
> Spark and Ignite.
>
>
>>-
>>- Would Spark even benefit from Ignite if the Ignite nodes were
>>hosted on other machines?
>>
>>
> There are definitely use cases when this can be useful. Although in others
> it is better to run Ignite separately.
>
>
>>-
>>
>>
>> We currently have hundreds of GB for analytics and we want to use
>> Ignite to speed things up.
>>
>> Thank you
>>
>>
>>
>>
>>
>>
>>
>>
>


Using ignite with spark

2017-09-21 Thread Patrick Brunmayr
Hello


   - What is currently the best practice of deploying Ignite with Spark ?

   - Should the Ignite node sit on the same machine as the Spark executor ?


According to this documentation
 Spark
should be given 75% of machine memory but what is left for Ignite then ?

In general, Spark can run well with anywhere from *8 GB to hundreds of
> gigabytes* of memory per machine. In all cases, we recommend allocating
> only at most 75% of the memory for Spark; leave the rest for the operating
> system and buffer cache.



   - Don't they battle for memory ?

   - Should i give the memory to Ignite or Spark ?

   - Would Spark even benefit from Ignite if the Ignite nodes were
   hosted on other machines ?


We currently have hundreds of GB for analytics and we want to use
Ignite to speed things up.

Thank you


Re: updating key object field using ignite sql query

2017-08-30 Thread Denis Magda
Hi,

Ignite prevents you from updating the key or its fields. Look for the "Inability to 
modify a key or its fields with an UPDATE query" callout at the bottom of this 
section for the reasoning behind this: 
https://apacheignite.readme.io/docs/dml#section-update

As a side note, even if this were allowed, instead of "_key.id" you should have 
used just "id".


—
Denis

> On Aug 30, 2017, at 12:31 AM, kotamrajuyashasvi <kotamrajuyasha...@gmail.com> 
> wrote:
> 
> Hi
> 
> In my Ignite application I am using a cache with an object/POJO as the cache
> key. How do I update the fields of the cache key using Ignite SQL queries?
> When I try to update, I get a 'Failed to parse query' error.
> 
> 
> For example, my cache stores values in person [fields: id, name, phno] and
> keys in personpk [fields: id, phno] POJO classes respectively. If I execute
> the query "update person set _key.id = 1 where id = 2" I get an error. I have
> added @QuerySqlField annotations to all fields in the person and personpk
> POJO classes.
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



updating key object field using ignite sql query

2017-08-30 Thread kotamrajuyashasvi
Hi

In my Ignite application I am using a cache with an object/POJO as the cache
key. How do I update the fields of the cache key using Ignite SQL queries?
When I try to update, I get a 'Failed to parse query' error.


For example, my cache stores values in person [fields: id, name, phno] and keys
in personpk [fields: id, phno] POJO classes respectively. If I execute the
query "update person set _key.id = 1 where id = 2" I get an error. I have
added @QuerySqlField annotations to all fields in the person and personpk POJO
classes.





Re: History, multiple source systems, Data Vault using Ignite...

2017-08-29 Thread Evgenii Zhuravlev
Hi,

Yes, it's possible to implement storage for previous values in Ignite. For
example, you can use a ContinuousQuery listener with @IgniteAsyncCallback
for reacting to updates in your cache. When a value is updated, you can put
the previous value (which is available in the event listener) into another
cache of previous values, together with a version of the object (for example,
the update time). Documentation about ContinuousQuery:
https://apacheignite.readme.io/docs/continuous-queries
Or, if you need more guarantees, you can use transactions, inserting the
previous value into the history cache and updating the value in one
transaction: https://apacheignite.readme.io/docs/transactions

Also, it looks like you will need to use Ignite Persistence:
https://apacheignite.readme.io/docs/distributed-persistent-store

If you have any specific questions about the implementation, feel free to
send them to the user list.

All the best,
Evgenii




2017-08-28 14:55 GMT+03:00 Mikhail <wmas...@mail.ru>:

> Hello,
>
>   We have a typical task: we need to implement an application
> which will receive data (and updates) from multiple source systems. There
> will also be a default (our own) data source, which can be updated by our
> application. Only the last version of the data should be the "actual" one,
> which could be retrieved from our application. But full audit trails of
> updates from every system should always be kept in order to investigate
> issues. The team is now considering Data Vault [1] as one of the possible
> solutions (but it looks superfluous to me). Is it a good option to implement
> a Data Vault architecture by means of Ignite? Has anybody implemented
> applications with such requirements? We want to use Ignite because in the
> future we will have data analytics processes (machine learning). What
> possible solutions for the task using Ignite do you see?
>
> [1] https://en.wikipedia.org/wiki/Data_vault_modeling
> --
> Best Regards,
> Mikhail
>


History, multiple source systems, Data Vault using Ignite...

2017-08-28 Thread Mikhail
Hello,

              We have a typical task: we need to implement an application which 
will receive data (and updates) from multiple source systems. There will also be 
a default (our own) data source, which can be updated by our application. Only the 
last version of the data should be the "actual" one, which could be retrieved from 
our application. But full audit trails of updates from every system should always 
be kept in order to investigate issues. The team is now considering Data Vault [1] 
as one of the possible solutions (but it looks superfluous to me). Is it a good 
option to implement a Data Vault architecture by means of Ignite? Has anybody 
implemented applications with such requirements? We want to use Ignite because in 
the future we will have data analytics processes (machine learning). What possible 
solutions for the task using Ignite do you see?

[1]  https://en.wikipedia.org/wiki/Data_vault_modeling  
--
Best Regards,
Mikhail

Re: Using Ignite as the SQL Engine for Cassandra

2017-07-11 Thread Igor Rudyak
Hi Roger,

You can use the Ignite-Cassandra integration module. In case you need to load
a specific portion of your Cassandra dataset into Ignite, you can use the
*loadCache(...)* method of the Ignite cache API, providing it an appropriate
CQL query.

Igor

On Tue, Jul 11, 2017 at 8:29 PM, Roger Fischer (CW) 
wrote:

> Hello,
>
>
>
> I have seen the page on using Cassandra as the persistent store for
> Ignite. Are the same concepts / classes applicable when using Cassandra as
> the backing database?
>
>
>
> I have a large data set in Cassandra. At system start I want to load the
> most recent (ie. most used) data into Ignite. The application then performs
> SQL queries to Ignite. My persistence API will check if the data set is
> already in Ignite before submitting the query. When part of the data set is
> not in Ignite, the persistence API loads the needed data from Cassandra
> into Ignite first.
>
>
>
> Is there a plugin for Ignite that gets me started on this? Or are there
> interfaces or classes that I need to implement / extend?
>
>
>
> I am looking for some initial pointers (ie. where to get started).
>
>
>
> Thanks…
>
>
>
> Roger
>
>
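
The ignite-cassandra module Igor refers to plugs Cassandra in as the cache's
persistent store; a hedged configuration sketch (the `cassandraDataSource` and
`persistenceSettings` bean references are assumptions here — they must be
defined separately, per the module's docs):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="personCache"/>
    <!-- Route cache misses and updates through Cassandra. -->
    <property name="readThrough" value="true"/>
    <property name="writeThrough" value="true"/>
    <property name="cacheStoreFactory">
        <bean class="org.apache.ignite.cache.store.cassandra.CassandraCacheStoreFactory">
            <property name="dataSourceBean" value="cassandraDataSource"/>
            <property name="persistenceSettingsBean" value="persistenceSettings"/>
        </bean>
    </property>
</bean>
```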
>


Using Ignite as the SQL Engine for Cassandra

2017-07-11 Thread Roger Fischer (CW)
Hello,

I have seen the page on using Cassandra as the persistent store for Ignite. Are 
the same concepts / classes applicable when using Cassandra as the backing 
database?

I have a large data set in Cassandra. At system start I want to load the most 
recent (ie. most used) data into Ignite. The application then performs SQL 
queries to Ignite. My persistence API will check if the data set is already in 
Ignite before submitting the query. When part of the data set is not in Ignite, 
the persistence API loads the needed data from Cassandra into Ignite first.

Is there a plugin for Ignite that gets me started on this? Or are there 
interfaces or classes that I need to implement / extend?

I am looking for some initial pointers (ie. where to get started).

Thanks...

Roger



Error using ignite cache with Zookeeper for Node Discovery

2017-07-04 Thread Venkat Raman
Hi All,

I am using the Ignite cache on a two-node cluster with Zookeeper for node
discovery. I see the following error while trying to update a cache entry
using a key. I am using Ignite as an embedded cache inside a Tomcat-based
web server. The error below does not happen immediately after the Tomcat/Java
process has started, but starts happening after a while.

I would really appreciate any pointers on how to debug this issue.

class org.apache.ignite.IgniteException: Runtime failure on search row:
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$SearchRow@1c2ad2e8
at
org.apache.ignite.internal.processors.cache.database.tree.BPlusTree.invoke(BPlusTree.java:1615)
at
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:925)
at
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:326)
at
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerUpdate(GridCacheMapEntry.java:1693)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateSingle(GridDhtAtomicCache.java:2386)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1792)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1630)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.sendSingleRequest(GridNearAtomicAbstractUpdateFuture.java:299)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.map(GridNearAtomicSingleUpdateFuture.java:480)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.mapOnTopology(GridNearAtomicSingleUpdateFuture.java:440)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.map(GridNearAtomicAbstractUpdateFuture.java:248)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update0(GridDhtAtomicCache.java:1162)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.put0(GridDhtAtomicCache.java:651)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2345)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2322)
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxy.put(IgniteCacheProxy.java:1519)


*Caused by: java.lang.IllegalMonitorStateException: Attempted to release
write lock while not holding it [lock=7f399844d470,
state=000103380883*
at
org.apache.ignite.internal.util.OffheapReadWriteLock.writeUnlock(OffheapReadWriteLock.java:259)
at
org.apache.ignite.internal.pagemem.impl.PageMemoryNoStoreImpl.writeUnlock(PageMemoryNoStoreImpl.java:495)
at
org.apache.ignite.internal.processors.cache.database.tree.util.PageHandler.writeUnlock(PageHandler.java:379)
at
org.apache.ignite.internal.processors.cache.database.tree.util.PageHandler.writePage(PageHandler.java:288)
at
org.apache.ignite.internal.processors.cache.database.DataStructure.write(DataStructure.java:241)
at
org.apache.ignite.internal.processors.cache.database.freelist.FreeListImpl.updateDataRow(FreeListImpl.java:506)
at
org.apache.ignite.internal.processors.cache.database.RowStore.updateRow(RowStore.java:82)
at
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.createRow(IgniteCacheOffheapManagerImpl.java:970)
at
org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.update(GridCacheMapEntry.java:4428)
at
org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.call(GridCacheMapEntry.java:4226)
at
org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.call(GridCacheMapEntry.java:3966)
at
org.apache.ignite.internal.processors.cache.database.tree.BPlusTree$Invoke.invokeClosure(BPlusTree.java:2966)
at
org.apache.ignite.internal.processors.cache.database.tree.BPlusTree$Invoke.access$6200(BPlusTree.java:2860)
at
org.apache.ignite.internal.processors.cache.database.tree.BPlusTree.invokeDown(BPlusTree.java:1702)
at
org.apache.ignite.internal.processors.cache.database.tree.BPlusTree.invoke(BPlusTree.java:1585)
... 64 more

Regards,
Venkat


Re: Unable to connect and load data from Oracle, using Ignite V2.0

2017-05-27 Thread afedotov
Hi.

Just in case, please try calling Class.forName("oracle.jdbc.OracleDriver")
or
DriverManager.registerDriver(new oracle.jdbc.OracleDriver())


Kind regards,
Alex.

On Sat, May 27, 2017 at 5:18 PM, Pratham Joshi [via Apache Ignite Users] <
ml+s70518n13182...@n6.nabble.com> wrote:

> Thanks for your reply @afedotov.
> I already have required jdbc jar file in my project's classpath. But still
> the error occurs.
>




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Unable-to-connect-and-load-data-from-Oracle-using-Ignite-V2-0-tp13171p13184.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.

Re: Unable to connect and load data from Oracle, using Ignite V2.0

2017-05-27 Thread Pratham Joshi
Thanks for your reply @afedotov.
I already have required jdbc jar file in my project's classpath. But still
the error occurs.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Unable-to-connect-and-load-data-from-Oracle-using-Ignite-V2-0-tp13171p13182.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Unable to connect and load data from Oracle, using Ignite V2.0

2017-05-27 Thread afedotov
Hello.

Make sure you have a proper jar containing the driver on the classpath.
The jar is probably something like ojdbc8.jar, depending on the Oracle
version you are using.

Kind regards,
Alex

On May 26, 2017, 5:15 PM, "Pratham Joshi [via Apache Ignite
Users]" <ml+s70518n13171...@n6.nabble.com> wrote:

 I am trying to configure and read data from existing Oracle tables.
However, I get an error while calling *ignite.loadCache();*. Message:
*Failed to start store
session: javax.cache.integration.CacheWriterException: Failed to start store
session [tx=null] Caused by: java.sql.SQLException: No suitable driver
found for jdbc:oracle:thin:@192.168.2.218:1521:xe at
org.h2.jdbcx.JdbcDataSource.getJdbcConnection(JdbcDataSource.java:190) at
org.h2.jdbcx.JdbcDataSource.getXAConnection(JdbcDataSource.java:351) at
org.h2.jdbcx.JdbcDataSource.getPooledConnection(JdbcDataSource.java:383) at
org.h2.jdbcx.JdbcConnectionPool.getConnectionNow(JdbcConnectionPool.java:226)
at
org.h2.jdbcx.JdbcConnectionPool.getConnection(JdbcConnectionPool.java:198)
at
org.apache.ignite.cache.store.jdbc.CacheJdbcStoreSessionListener.onSessionStart(CacheJdbcStoreSessionListener.java:112)*
I have also configured a CacheStore for TempClass, as shown in
https://apacheignite.readme.io/docs/persistent-store#cachestore
Any help will be highly appreciated. Following is my configuration:
CacheConfiguration<String, TempClass> cacheCfg = new CacheConfiguration<String, TempClass>();
cacheCfg.setName("RevSenseTest_CacheConfig");
IgniteConfiguration igniteConfig = new IgniteConfiguration();
Factory factory = FactoryBuilder.factoryOf(TempClassCacheStore.class);
cacheCfg.setReadThrough(true);
cacheCfg.setWriteThrough(true);
cacheCfg.setIndexedTypes(String.class, TempClass.class);
cacheCfg.setCacheStoreFactory(factory);
cacheCfg.setCacheStoreSessionListenerFactories(new Factory() {
    @Override public CacheStoreSessionListener create() {
        CacheJdbcStoreSessionListener lsnr = new CacheJdbcStoreSessionListener();
        lsnr.setDataSource(JdbcConnectionPool.create("jdbc:oracle:thin:@192.168.2.218:1521:xe",
            "test", "test"));
        return lsnr;
    }
});
Ignite ignite = Ignition.start(igniteConfig);
IgniteCache<String, TempClass> cache = ignite.getOrCreateCache(cacheCfg);
cache.loadCache(null);
SqlFieldsQuery sql = new SqlFieldsQuery("SELECT ID_, NAME_ FROM ACT_HI_TASKINST");
QueryCursor<List> cursor = cache.query(sql);

--
If you reply to this email, your message will be added to the discussion
below:
http://apache-ignite-users.70518.x6.nabble.com/Unable-to-
connect-and-load-data-from-Oracle-using-Ignite-V2-0-tp13171.html




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Unable-to-connect-and-load-data-from-Oracle-using-Ignite-V2-0-tp13171p13181.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.

Re: using ignite web console

2017-05-17 Thread Andrey Novikov
Hi,

Note you are trying to use development mode in production!
To bind the development web server to all interfaces on your Linux machine you
need to fix the following file:
modules/web-console/frontend/gulpfile.babel.js/webpack/environments/development.js
like this:
https://github.com/apache/ignite/blob/master/modules/web-console/frontend/gulpfile.babel.js/webpack/environments/development.js
or use the sources from the master branch.
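
In current master that file binds the dev server to all interfaces. The relevant fragment looks roughly like the following (a sketch following webpack-dev-server conventions; the exact keys may vary between Ignite versions):

```javascript
// modules/web-console/frontend/.../environments/development.js (illustrative fragment)
export default {
  devServer: {
    host: '0.0.0.0', // bind to all interfaces instead of the default localhost
    port: 9000
  }
};
```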


Also web console may be started from prepared docker image:
https://hub.docker.com/r/apacheignite/web-console-standalone/



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/using-ignite-web-console-tp12847p12964.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: using ignite web console

2017-05-15 Thread Neeraj Bhatt
Hi Denis

Thanks for the information. The Ignite web agent default.properties details are
documented in the link you shared.
Can you please give the names of the backend/frontend property files that need
to be changed, as this is not documented anywhere?

Thanks



On Tue, May 16, 2017 at 5:31 AM, Denis Magda <dma...@apache.org> wrote:

> Yes, you need to give the machine where the console is to be deployed a
> unique IP address so that it’s accessible from remote machines. Also you
> might need to tweak some of web agent’s configuration parameters if the
> console should be linked with a remote Ignite cluster:
> https://apacheignite-tools.readme.io/v2.0/docs/getting-
> started#section-ignite-web-agent
>
> —
> Denis
>
> > On May 15, 2017, at 6:11 AM, neerajbhatt <neerajbhatt2...@gmail.com>
> wrote:
> >
> > I am trying to install ignite web console in one of our servers (linux)as
> > given in https://apacheignite-tools.readme.io/v1.9/docs/build-and-deploy
> >
> > We will be accessing the web console from different windows machine
> browser
> >
> > Do we need server ip address while building front end or back end as we
> > won't be hitting  http://localhost:9000
> > but  http://<server-ip>:9000
> >
> >
> >
> >
> > --
> > View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/using-ignite-web-console-tp12847.html
> > Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>
>


Re: using ignite web console

2017-05-15 Thread Denis Magda
Yes, you need to give the machine where the console is to be deployed a unique 
IP address so that it’s accessible from remote machines. Also you might need to 
tweak some of web agent’s configuration parameters if the console should be 
linked with a remote Ignite cluster:
https://apacheignite-tools.readme.io/v2.0/docs/getting-started#section-ignite-web-agent

—
Denis

> On May 15, 2017, at 6:11 AM, neerajbhatt <neerajbhatt2...@gmail.com> wrote:
> 
> I am trying to install ignite web console in one of our servers (linux)as
> given in https://apacheignite-tools.readme.io/v1.9/docs/build-and-deploy
> 
> We will be accessing the web console from different windows machine browser
> 
> Do we need server ip address while building front end or back end as we
> won't be hitting  http://localhost:9000 
> but  http://<server-ip>:9000
> 
> 
> 
> 
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/using-ignite-web-console-tp12847.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.



using ignite web console

2017-05-15 Thread neerajbhatt
I am trying to install the Ignite web console on one of our servers (Linux), as
given in https://apacheignite-tools.readme.io/v1.9/docs/build-and-deploy

We will be accessing the web console from a different Windows machine's browser.

Do we need the server IP address while building the front end or back end, as we
won't be hitting http://localhost:9000
but http://<server-ip>:9000?




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/using-ignite-web-console-tp12847.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


RE: OOM when using Ignite as HDFS Cache

2017-04-27 Thread Ivan Veselovsky
Hi, zhangshuai.ustc , 
is this problem solved? Can we help more on the subject?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/OOM-when-using-Ignite-as-HDFS-Cache-tp11900p12297.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


RE: OOM when using Ignite as HDFS Cache

2017-04-18 Thread zhangshuai.ustc
Yes, I'm getting a "GC overhead limit exceeded" OOME, and I think this is
unexpected behavior. I'll try the off-heap options in a few days. Thanks for
your advice.

I'm providing an HDFS service to our customers. As you know, HDFS is not
friendly to many small files; we sometimes need to merge files into a large
block to reduce metadata size. I think you are correct that HDFS faces the same
issue as Ignite, so I need to configure the same max heap size for both of them.

-Original Message-
From: Kamil Misuth [mailto:ki...@ethome.sk] 
Sent: Tuesday, April 18, 2017 3:35 PM
To: user@ignite.apache.org
Subject: Re: OOM when using Ignite as HDFS Cache

Are you getting "GC Overhead limit exceeded" OOME?
I think you could always move IGFS data block cache off heap if it is not the 
case already.

I am wondering why you've set block size to 4 GB for Ignite when HDFS stock 
configured block size is either 64 MB or 128 MB. Have you tried to set HDFS 
block size to 4GB? I am guessing you would get OOME on HDFS data nodes too.

Kamil

On 2017-04-14 08:50, 张帅 wrote:
> I'm using the latest version of JDK, AKA. 1.8.0_121
> 
> The cache is aim to provide a faster read/write performance. But the 
> availability is more important. 1GB cache is for testing purpose. But 
> it's the same issue if I write a 1TB file to 64GB cache.
> 
> What I mean availability is that Ignite should not exit with OOME.
> Slow down write performance is kind of downgrade. If I write directly 
> to HDFS, I got a write performance of x MB/s. If I write through 
> Ignite, I got a higher performance y MB/s. It is great if y far more 
> larger than x, and also acceptable equal to x sometimes, but not 
> acceptable if HDFS still working but Ignite not working.
> 
> Breaking into small blocks is possible because data coming in a kind 
> of stream. We are always able to pack it whenever we collected 512MB 
> data.
> 
> This issue is not about Cache Eviction Strategy, but about how to 
> avoid OOME & service not available. Cache eviction would not solve it 
> because there do have more data than cache capacity.
> 
> 
> -Original Message-
> From: Jörn Franke [mailto:jornfra...@gmail.com]
> Sent: Friday, April 14, 2017 2:36 PM
> To: user@ignite.apache.org
> Subject: Re: OOM when using Ignite as HDFS Cache
> 
> I would not expect any of the things that you mention. A cache is not 
> supposed to slow down writing. This does not make sense from my point 
> of view. Splitting a block into several smaller ones is also not 
> feasible. The data has to go somewhere before splitting.
> 
> I think what you refer to is certain cache eviction strategies.
> 1 GB of cache sounds small for a HDFS cache.
> I suggest to enable the default configuration of ignite on HDFS and 
> then change it step by step to your envisioned configuration.
> 
> That being said, a Hadoop platform with a lot of ecosystem components 
> can be complex, in particular you need to calculate that each of the 
> components (hive, spark etc) has certain memory assigned or has it 
> used when jobs are running. So even if you have configured 1 gb 
> somebody else might have taken it. Less probable but possible is that 
> your JDK has a bug leading to OOME. You may also try to upgrade it.
> 
>> On 14. Apr 2017, at 08:12, <zhangshuai.u...@gmail.com> 
>> <zhangshuai.u...@gmail.com> wrote:
>> 
>> I think it's a kind of misconfiguration. The Ignite document just 
>> mentioned about how to configuration HDFS as a secondary filesystem 
>> but nothing about how to restrict the memory usage to avoid OOME.
>> https://apacheignite.readme.io/v1.0/docs/igfs-secondary-file-system
>> 
>> Assume I configured the max JVM heap size to 1GB.
>> 1. What would happen if I write very fast before Ignite write data to 
>> HDFS asynchronized?
>> 2. What would happen if I want to write a 2GB file block to Ignite?
>> 
>> I expected:
>> 1. Ignite would slow down the write performance to avoid OOME.
>> 2. Ignite would break the 2GB file block into 512MB blocks & write 
>> them to HDFS to avoid OOME.
>> 
>> Do we have configurations against above behaviors? I dig some items 
>> from source code & Ignite Web Console, but seems they are not working 
>> fine.
>> 
>> <property name="dualModeMaxPendingPutsSize" value="10"/>
>> <property name="blockSize" value="536870912"/>
>> <property name="..." value="131072"/>
>> <property name="prefetchBlocks" value="2"/>
>> <property name="sequentialReadsBeforePrefetch" value="5"/>
>> <property name="defaultMode" value="DUAL_ASYNC"/>
>> 
>> I also notice that I

Re: OOM when using Ignite as HDFS Cache

2017-04-18 Thread Kamil Misuth

Are you getting "GC Overhead limit exceeded" OOME?
I think you could always move IGFS data block cache off heap if it is 
not the case already.


I am wondering why you've set block size to 4 GB for Ignite when HDFS 
stock configured block size is either 64 MB or 128 MB. Have you tried to 
set HDFS block size to 4GB? I am guessing you would get OOME on HDFS 
data nodes too.


Kamil

On 2017-04-14 08:50, 张帅 wrote:

I'm using the latest version of JDK, AKA. 1.8.0_121

The cache is aim to provide a faster read/write performance. But the
availability is more important. 1GB cache is for testing purpose. But
it's the same issue if I write a 1TB file to 64GB cache.

What I mean availability is that Ignite should not exit with OOME.
Slow down write performance is kind of downgrade. If I write directly
to HDFS, I got a write performance of x MB/s. If I write through
Ignite, I got a higher performance y MB/s. It is great if y far more
larger than x, and also acceptable equal to x sometimes, but not
acceptable if HDFS still working but Ignite not working.

Breaking into small blocks is possible because data coming in a kind
of stream. We are always able to pack it whenever we collected 512MB
data.

This issue is not about Cache Eviction Strategy, but about how to
avoid OOME & service not available. Cache eviction would not solve it
because there do have more data than cache capacity.


-Original Message-
From: Jörn Franke [mailto:jornfra...@gmail.com]
Sent: Friday, April 14, 2017 2:36 PM
To: user@ignite.apache.org
Subject: Re: OOM when using Ignite as HDFS Cache

I would not expect any of the things that you mention. A cache is not
supposed to slow down writing. This does not make sense from my point
of view. Splitting a block into several smaller ones is also not
feasible. The data has to go somewhere before splitting.

I think what you refer to is certain cache eviction strategies.
1 GB of cache sounds small for a HDFS cache.
I suggest to enable the default configuration of ignite on HDFS and
then change it step by step to your envisioned configuration.

That being said, a Hadoop platform with a lot of ecosystem components
can be complex, in particular you need to calculate that each of the
components (hive, spark etc) has certain memory assigned or has it
used when jobs are running. So even if you have configured 1 gb
somebody else might have taken it. Less probable but possible is that
your JDK has a bug leading to OOME. You may also try to upgrade it.

On 14. Apr 2017, at 08:12, <zhangshuai.u...@gmail.com> 
<zhangshuai.u...@gmail.com> wrote:


I think it's a kind of misconfiguration. The Ignite document just
mentioned about how to configuration HDFS as a secondary filesystem
but nothing about how to restrict the memory usage to avoid OOME.
https://apacheignite.readme.io/v1.0/docs/igfs-secondary-file-system

Assume I configured the max JVM heap size to 1GB.
1. What would happen if I write very fast before Ignite write data to 
HDFS asynchronized?

2. What would happen if I want to write a 2GB file block to Ignite?

I expected:
1. Ignite would slow down the write performance to avoid OOME.
2. Ignite would break the 2GB file block into 512MB blocks & write 
them to HDFS to avoid OOME.


Do we have configurations against above behaviors? I dig some items 
from source code & Ignite Web Console, but seems they are not working 
fine.



   

I also notice that Ignite write through file block size is set to 
64MB. I mean I write a file to Ignite with block size to 4GB, but I 
finally found it on HDFS with block size 64MB. Is there any 
configuration for it?


-Original Message-
From: dkarachentsev [mailto:dkarachent...@gridgain.com]
Sent: Thursday, April 13, 2017 11:21 PM
To: user@ignite.apache.org
Subject: Re: OOM when using Ignite as HDFS Cache

Hi Shuai,

Could you please take heap dump on OOME and find what objects consume 
memory? There would be a lot of byte[] objects, please find the 
nearest GC root for them.


Thanks!

-Dmitry.



--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/OOM-when-using-Ignite-a
s-HDFS-Cache-tp11900p11956.html Sent from the Apache Ignite Users
mailing list archive at Nabble.com.






RE: OOM when using Ignite as HDFS Cache

2017-04-14 Thread dkarachentsev
Hi,

The correct way here would be to understand where the problem actually occurs
and then decide how to solve it.

> I also notice that Ignite write through file block size is set to 64MB. I
> mean I write a file to Ignite with block size to 4GB, but I finally found
> it on HDFS with block size 64MB. Is there any configuration for it? 

I'm not sure I understand your question, but the block size for HDFS is
configured in the Hadoop config file.
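
For comparison, the HDFS-side block size is a standard Hadoop setting, not an Ignite one; it is usually set like this in hdfs-site.xml (the 128 MB value shown is just the common default):

```xml
<!-- hdfs-site.xml -->
<property>
  <name>dfs.blocksize</name>
  <value>134217728</value> <!-- 128 MB -->
</property>
```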

-Dmitry.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/OOM-when-using-Ignite-as-HDFS-Cache-tp11900p11974.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


RE: OOM when using Ignite as HDFS Cache

2017-04-14 Thread 张帅
I'm using the latest version of the JDK, i.e. 1.8.0_121.

The cache is meant to provide faster read/write performance, but availability is
more important. The 1GB cache is for testing purposes, but it's the same issue
if I write a 1TB file to a 64GB cache.

What I mean by availability is that Ignite should not exit with an OOME. Slowing
down write performance is an acceptable degradation. If I write directly to
HDFS, I get a write performance of x MB/s. If I write through Ignite, I get a
higher performance of y MB/s. It is great if y is far larger than x, and also
acceptable if it sometimes equals x, but it is not acceptable if HDFS keeps
working while Ignite does not.

Breaking the data into small blocks is possible because it arrives as a stream.
We are always able to pack it whenever we have collected 512MB of data.

This issue is not about the cache eviction strategy, but about how to avoid an
OOME and service unavailability. Cache eviction would not solve it, because
there really is more data than the cache capacity.


-Original Message-
From: Jörn Franke [mailto:jornfra...@gmail.com] 
Sent: Friday, April 14, 2017 2:36 PM
To: user@ignite.apache.org
Subject: Re: OOM when using Ignite as HDFS Cache

I would not expect any of the things that you mention. A cache is not supposed 
to slow down writing. This does not make sense from my point of view. Splitting 
a block into several smaller ones is also not feasible. The data has to go 
somewhere before splitting. 

I think what you refer to is certain cache eviction strategies.
1 GB of cache sounds small for a HDFS cache.
I suggest to enable the default configuration of ignite on HDFS and then change 
it step by step to your envisioned configuration.

That being said, a Hadoop platform with a lot of ecosystem components can be 
complex, in particular you need to calculate that each of the components (hive, 
spark etc) has certain memory assigned or has it used when jobs are running. So 
even if you have configured 1 gb somebody else might have taken it. Less 
probable but possible is that your JDK has a bug leading to OOME. You may also 
try to upgrade it.

> On 14. Apr 2017, at 08:12, <zhangshuai.u...@gmail.com> 
> <zhangshuai.u...@gmail.com> wrote:
> 
> I think it's a kind of misconfiguration. The Ignite document just 
> mentioned about how to configuration HDFS as a secondary filesystem 
> but nothing about how to restrict the memory usage to avoid OOME. 
> https://apacheignite.readme.io/v1.0/docs/igfs-secondary-file-system
> 
> Assume I configured the max JVM heap size to 1GB.
> 1. What would happen if I write very fast before Ignite write data to HDFS 
> asynchronized?
> 2. What would happen if I want to write a 2GB file block to Ignite?
> 
> I expected:
> 1. Ignite would slow down the write performance to avoid OOME.
> 2. Ignite would break the 2GB file block into 512MB blocks & write them to 
> HDFS to avoid OOME.
> 
> Do we have configurations against above behaviors? I dig some items from 
> source code & Ignite Web Console, but seems they are not working fine. 
> 
> <property name="dualModeMaxPendingPutsSize" value="10"/>
> <property name="blockSize" value="536870912"/>
> <property name="..." value="131072"/>
> <property name="prefetchBlocks" value="2"/>
> <property name="sequentialReadsBeforePrefetch" value="5"/>
> <property name="defaultMode" value="DUAL_ASYNC"/>
> 
> I also notice that Ignite write through file block size is set to 64MB. I 
> mean I write a file to Ignite with block size to 4GB, but I finally found it 
> on HDFS with block size 64MB. Is there any configuration for it?
> 
> -Original Message-
> From: dkarachentsev [mailto:dkarachent...@gridgain.com]
> Sent: Thursday, April 13, 2017 11:21 PM
> To: user@ignite.apache.org
> Subject: Re: OOM when using Ignite as HDFS Cache
> 
> Hi Shuai,
> 
> Could you please take heap dump on OOME and find what objects consume memory? 
> There would be a lot of byte[] objects, please find the nearest GC root for 
> them.
> 
> Thanks!
> 
> -Dmitry.
> 
> 
> 
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/OOM-when-using-Ignite-a
> s-HDFS-Cache-tp11900p11956.html Sent from the Apache Ignite Users 
> mailing list archive at Nabble.com.
> 



Re: OOM when using Ignite as HDFS Cache

2017-04-14 Thread Jörn Franke
I would not expect any of the things that you mention. A cache is not supposed 
to slow down writing. This does not make sense from my point of view. Splitting 
a block into several smaller ones is also not feasible. The data has to go 
somewhere before splitting. 

I think what you refer to is certain cache eviction strategies.
1 GB of cache sounds small for a HDFS cache.
I suggest to enable the default configuration of ignite on HDFS and then change 
it step by step to your envisioned configuration.

That being said, a Hadoop platform with a lot of ecosystem components can be 
complex, in particular you need to calculate that each of the components (hive, 
spark etc) has certain memory assigned or has it used when jobs are running. So 
even if you have configured 1 gb somebody else might have taken it. Less 
probable but possible is that your JDK has a bug leading to OOME. You may also 
try to upgrade it.

> On 14. Apr 2017, at 08:12, <zhangshuai.u...@gmail.com> 
> <zhangshuai.u...@gmail.com> wrote:
> 
> I think it's a kind of misconfiguration. The Ignite document just mentioned 
> about how to configuration HDFS as a secondary filesystem but nothing about 
> how to restrict the memory usage to avoid OOME. 
> https://apacheignite.readme.io/v1.0/docs/igfs-secondary-file-system
> 
> Assume I configured the max JVM heap size to 1GB.
> 1. What would happen if I write very fast before Ignite write data to HDFS 
> asynchronized?
> 2. What would happen if I want to write a 2GB file block to Ignite?
> 
> I expected:
> 1. Ignite would slow down the write performance to avoid OOME.
> 2. Ignite would break the 2GB file block into 512MB blocks & write them to 
> HDFS to avoid OOME.
> 
> Do we have configurations against above behaviors? I dig some items from 
> source code & Ignite Web Console, but seems they are not working fine. 
> 
> <property name="dualModeMaxPendingPutsSize" value="10"/>
> <property name="blockSize" value="536870912"/>
> <property name="..." value="131072"/>
> <property name="prefetchBlocks" value="2"/>
> <property name="sequentialReadsBeforePrefetch" value="5"/>
> <property name="defaultMode" value="DUAL_ASYNC"/>
> I also notice that Ignite write through file block size is set to 64MB. I 
> mean I write a file to Ignite with block size to 4GB, but I finally found it 
> on HDFS with block size 64MB. Is there any configuration for it?
> 
> -Original Message-
> From: dkarachentsev [mailto:dkarachent...@gridgain.com] 
> Sent: Thursday, April 13, 2017 11:21 PM
> To: user@ignite.apache.org
> Subject: Re: OOM when using Ignite as HDFS Cache
> 
> Hi Shuai,
> 
> Could you please take heap dump on OOME and find what objects consume memory? 
> There would be a lot of byte[] objects, please find the nearest GC root for 
> them.
> 
> Thanks!
> 
> -Dmitry.
> 
> 
> 
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/OOM-when-using-Ignite-as-HDFS-Cache-tp11900p11956.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
> 


RE: OOM when using Ignite as HDFS Cache

2017-04-14 Thread zhangshuai.ustc
I think it's a kind of misconfiguration. The Ignite documentation mentions how
to configure HDFS as a secondary file system, but nothing about how to restrict
memory usage to avoid an OOME:
https://apacheignite.readme.io/v1.0/docs/igfs-secondary-file-system

Assume I configured the max JVM heap size to 1GB.
1. What would happen if I write very fast, before Ignite writes the data to HDFS 
asynchronously?
2. What would happen if I want to write a 2GB file block to Ignite?

I expected:
1. Ignite would slow down the write performance to avoid an OOME.
2. Ignite would break the 2GB file block into 512MB blocks and write them to HDFS 
to avoid an OOME.

Do we have configuration options for the above behaviors? I dug up some items
from the source code and the Ignite Web Console, but they do not seem to work.
<property name="dualModeMaxPendingPutsSize" value="10"/>
<property name="blockSize" value="536870912"/>
<property name="..." value="131072"/>
<property name="prefetchBlocks" value="2"/>
<property name="sequentialReadsBeforePrefetch" value="5"/>
<property name="defaultMode" value="DUAL_ASYNC"/>

I also notice that Ignite write through file block size is set to 64MB. I mean 
I write a file to Ignite with block size to 4GB, but I finally found it on HDFS 
with block size 64MB. Is there any configuration for it?
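
The throttling expected in point 1 amounts to backpressure: a bounded in-memory buffer that blocks the writer when the flusher to HDFS falls behind, instead of letting pending blocks exhaust the heap. This is not how IGFS exposes the behavior (it uses properties such as dualModeMaxPendingPutsSize); the sketch below only illustrates the mechanism with plain JDK classes, and all names are illustrative:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BackpressureSketch {
    // Runs a producer against a bounded queue drained by a flusher thread;
    // returns the number of blocks left unflushed (0 when all were drained).
    static int run(int totalBlocks, int capacity) {
        BlockingQueue<byte[]> pending = new ArrayBlockingQueue<>(capacity);
        Thread flusher = new Thread(() -> {
            try {
                for (int i = 0; i < totalBlocks; i++) {
                    pending.take(); // a real flusher would write the block to HDFS here
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        flusher.start();
        try {
            for (int i = 0; i < totalBlocks; i++) {
                pending.put(new byte[1024]); // put() blocks while the queue is full
            }
            flusher.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return pending.size();
    }

    public static void main(String[] args) {
        System.out.println(run(16, 4)); // prints 0: every block was flushed
    }
}
```

With a capacity of 4 and 1KB blocks, at most about 4KB is ever buffered, no matter how fast the producer writes; the producer simply slows down, which is the behavior described in expectation 1.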

-Original Message-
From: dkarachentsev [mailto:dkarachent...@gridgain.com] 
Sent: Thursday, April 13, 2017 11:21 PM
To: user@ignite.apache.org
Subject: Re: OOM when using Ignite as HDFS Cache

Hi Shuai,

Could you please take heap dump on OOME and find what objects consume memory? 
There would be a lot of byte[] objects, please find the nearest GC root for 
them.

Thanks!

-Dmitry.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/OOM-when-using-Ignite-as-HDFS-Cache-tp11900p11956.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.



Re: Error During data Loading using Ignite Data Streamer in Parallel

2017-04-13 Thread vdpyatkov
Hi,

Ignite DataStreamer with IgniteDataStreamer#allowOverwrite set to false does not
work correctly on an unstable topology until the latest version.

If you want the Failover SPI to be used, you can set the property
(IgniteDataStreamer#allowOverwrite) to true.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Error-During-data-Loading-using-Ignite-Data-Streamer-in-Parallel-tp11912p11966.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: OOM when using Ignite as HDFS Cache

2017-04-13 Thread dkarachentsev
Hi Shuai,

Could you please take heap dump on OOME and find what objects consume
memory? There would be a lot of byte[] objects, please find the nearest GC
root for them.
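
To capture that dump automatically at the moment of failure, the standard HotSpot flags can be added to the node's JVM options (the dump path and pid below are illustrative):

```shell
# Write a heap dump when an OutOfMemoryError is thrown
JVM_OPTS="$JVM_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/ignite"

# Or take a dump on demand from a running node
jmap -dump:live,format=b,file=/tmp/ignite.hprof <pid>
```

The resulting .hprof file can then be opened in a heap analyzer to find the dominant byte[] owners.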

Thanks!

-Dmitry.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/OOM-when-using-Ignite-as-HDFS-Cache-tp11900p11956.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


RE: OOM when using Ignite as HDFS Cache

2017-04-12 Thread 张帅
Ping…

 

From: 张帅 [mailto:satan.stud...@gmail.com] On Behalf Of zhangshuai.u...@gmail.com
Sent: Wednesday, April 12, 2017 5:29 PM
To: user@ignite.apache.org
Subject: OOM when using Ignite as HDFS Cache

 

Hi there,

 

I’d like to use Ignite as an HDFS cache in my cluster, but it failed with an OOM 
error. Could you help review my configuration to avoid it?

 

I’m using DUAL_ASYNC mode. The Ignite nodes can find each other to establish 
the cluster. There are very few changes in default-config.xml, but it is attached 
for your review. The JVM heap size is limited to 1GB. Ignite suffers an OOM 
exception when I run the Hadoop benchmark TestDFSIO writing 4x4GB files. I 
think writing a 4GB file to HDFS is streaming, so Ignite should work with it. 
It’s acceptable to slow down write performance to wait for Ignite to write cached 
data to HDFS, but not acceptable for that to lead to a crash or data loss.

 

The ignite log is attached as ignite_log.zip, pick some key messages here:

 

17/04/12 00:49:17 INFO [grid-timeout-worker-#19%null%] internal.IgniteKernal: 

Metrics for local node (to disable set 'metricsLogFrequency' to 0)

^-- Node [id=9b5dcc35, name=null, uptime=00:26:00:254]

^-- H/N/C [hosts=173, nodes=173, CPUs=2276]

^-- CPU [cur=0.13%, avg=0.82%, GC=0%]

^-- Heap [used=555MB, free=43.3%, comm=979MB]

^-- Non heap [used=61MB, free=95.95%, comm=62MB]

^-- Public thread pool [active=0, idle=0, qSize=0]

^-- System thread pool [active=0, idle=6, qSize=0]

^-- Outbound messages queue [size=0]

17/04/12 00:50:06 INFO [disco-event-worker-#35%null%] 
discovery.GridDiscoveryManager: Added new node to topology: TcpDiscoveryNode 
[id=553b5c1a-da0b-43cb-b691-b842352b3105, addrs=[0:0:0:0:0:0:0:1, 
10.152.133.46, 10.55.68.223, 127.0.0.1, 192.168.1.1], 
sockAddrs=[BN1APS0A98852E/10.152.133.46:47500, 
bn1sch010095221.phx.gbl/10.55.68.223:47500, /0:0:0:0:0:0:0:1:47500, 
/192.168.1.1:47500, /127.0.0.1:47500], discPort=47500, order=176, intOrder=175, 
lastExchangeTime=1491983403106, loc=false, ver=2.0.0#20170405-sha1:2c830b0d, 
isClient=false]

[00:50:06] Topology snapshot [ver=176, servers=174, clients=0, CPUs=2288, 
heap=180.0GB]

...

Exception in thread "igfs-client-worker-2-#585%null%" java.lang.OutOfMemoryError: GC overhead limit exceeded
  at java.util.Arrays.copyOf(Arrays.java:3332)
  at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)
  at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448)
  at java.lang.StringBuffer.append(StringBuffer.java:270)
  at java.io.StringWriter.write(StringWriter.java:112)
  at java.io.PrintWriter.write(PrintWriter.java:456)
  at java.io.PrintWriter.write(PrintWriter.java:473)
  at java.io.PrintWriter.print(PrintWriter.java:603)
  at java.io.PrintWriter.println(PrintWriter.java:756)
  at java.lang.Throwable$WrappedPrintWriter.println(Throwable.java:764)
  at java.lang.Throwable.printStackTrace(Throwable.java:658)
  at java.lang.Throwable.printStackTrace(Throwable.java:721)
  at org.apache.log4j.DefaultThrowableRenderer.render(DefaultThrowableRenderer.java:60)
  at org.apache.log4j.spi.ThrowableInformation.getThrowableStrRep(ThrowableInformation.java:87)
  at org.apache.log4j.spi.LoggingEvent.getThrowableStrRep(LoggingEvent.java:413)
  at org.apache.log4j.AsyncAppender.append(AsyncAppender.java:162)
  at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
  at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
  at org.apache.log4j.Category.callAppenders(Category.java:206)
  at org.apache.log4j.Category.forcedLog(Category.java:391)
  at org.apache.log4j.Category.error(Category.java:322)
  at org.apache.ignite.logger.log4j.Log4JLogger.error(Log4JLogger.java:495)
  at org.apache.ignite.internal.GridLoggerProxy.error(GridLoggerProxy.java:148)
  at org.apache.ignite.internal.util.IgniteUtils.error(IgniteUtils.java:4281)
  at org.apache.ignite.internal.util.IgniteUtils.error(IgniteUtils.java:4306)
  at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:126)
  at java.lang.Thread.run(Thread.java:745)

Exception in thread "LeaseRenewer:had...@namenode-vip.yarn3-dev-bn2.bn2.ap.gbl" java.lang.OutOfMemoryError: GC overhead limit exceeded

Exception in thread "igfs-delete-worker%igfs%9b5dcc35-3a4c-4a90-ac9e-89fdd65302a7%" java.lang.OutOfMemoryError: GC overhead limit exceeded

Exception in thread "exchange-worker-#39%null%" java.lang.OutOfMemoryError: GC overhead limit exceeded

…

17/04/12 01:40:10 WARN [disco-event-worker-#35%null%] 
discovery.GridDiscoveryManager: Stopping local node according to configured 
segmentation policy.
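The log above shows the JVM spending nearly all of its time in garbage collection ("GC overhead limit exceeded") before the node segments and stops. As a side note for readers of this thread, heap headroom can be watched from inside any JVM using the standard management beans; the sketch below is self-contained and illustrative (the class name and the 10% threshold are not part of Ignite), not a fix in itself:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

// Minimal sketch: read the same heap numbers the Ignite metrics lines print,
// from inside the JVM, so an application can warn before GC starts thrashing.
public class HeapMetrics {
    public static double freeHeapPercent() {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        // getMax() can be -1 if undefined; fall back to the runtime's view.
        long max = heap.getMax() > 0 ? heap.getMax() : Runtime.getRuntime().maxMemory();
        return 100.0 * (max - heap.getUsed()) / max;
    }

    public static void main(String[] args) {
        double free = freeHeapPercent();
        System.out.printf("Heap free=%.2f%%%n", free);
        if (free < 10.0) // illustrative threshold
            System.out.println("Warning: heap nearly full, GC overhead likely");
    }
}
```

When this error appears, the usual first steps are raising the node's JVM heap (-Xmx) or reducing the amount of data cached per node.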

 

Looking forward to your help.

 

 

Regards,

Shuai Zhang



Re: Error in executing hadoop job using ignite

2017-04-05 Thread Andrey Mashenkov
Hi,

Please properly subscribe to the mailing list so that the community can
receive email notifications for your messages. To subscribe, send empty
email to user-subscr...@ignite.apache.org and follow simple instructions in
the reply.


It looks like you have an outdated "objectweb-asm" jar in the classpath.
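One way to verify which jar a suspect class is actually loaded from is to inspect its CodeSource. A minimal, self-contained sketch (the helper name is illustrative; `org.objectweb.asm.ClassWriter` is simply the usual class to probe when checking for an asm version conflict):

```java
// Prints the location a class was loaded from, to spot stale jars on the
// classpath. Run it with the same classpath as the failing job.
public class JarLocator {
    public static String locate(String className) {
        try {
            Class<?> c = Class.forName(className);
            java.security.CodeSource src = c.getProtectionDomain().getCodeSource();
            // Bootstrap-loaded classes may have no code source or location.
            return (src == null || src.getLocation() == null)
                ? "(bootstrap classpath)"
                : src.getLocation().toString();
        } catch (ClassNotFoundException e) {
            return "(not on classpath)";
        }
    }

    public static void main(String[] args) {
        System.out.println(locate("org.objectweb.asm.ClassWriter"));
    }
}
```

If the printed path points at an old asm jar, removing or shading it should resolve the conflict.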


You wrote:

Hi,
I am using the Ignite Hadoop Accelerator with HDFS as the secondary file
system, but when I submit a job using the Ignite configuration it shows the
following error. Please tell me if you see anything wrong.
]$ hadoop --config ~/ignite_conf jar
/app/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar
wordcount /PrashantSingh/1184-0.txt /output4tyy
Apr 04, 2017 3:33:30 PM org.apache.ignite.internal.client.impl.connection.GridClientNioTcpConnection
INFO: Client TCP connection established: hmaster/10.202.17.60:11211
Apr 04, 2017 3:33:30 PM org.apache.ignite.internal.client.impl.GridClientImpl
INFO: Client started [id=1ce64156-1137-4a77-bed3-d32b962ce3c4, protocol=TCP]
2017-04-04 15:33:31,688 INFO [main] input.FileInputFormat
(FileInputFormat.java:listStatus(283)) - Total input paths to process : 1
2017-04-04 15:33:32,202 INFO [main] mapreduce.JobSubmitter
(JobSubmitter.java:submitJobInternal(198)) - number of splits:1
2017-04-04 15:33:33,222 INFO [main] mapreduce.JobSubmitter
(JobSubmitter.java:printTokens(287)) - Submitting tokens for job:
job_6f75490d-9038-43af-93ba-3d06081f65d2_0002
2017-04-04 15:33:33,445 INFO [main] mapreduce.Job (Job.java:submit(1294)) -
The url to track the job: N/A
2017-04-04 15:33:33,447 INFO [main] mapreduce.Job
(Job.java:monitorAndPrintJob(1339)) - Running job:
job_6f75490d-9038-43af-93ba-3d06081f65d2_0002
java.io.IOException: Job tracker doesn't have any information about the job: job_6f75490d-9038-43af-93ba-3d06081f65d2_0002
at org.apache.ignite.internal.processors.hadoop.impl.proto.HadoopClientProtocol.getJobStatus(HadoopClientProtocol.java:192)
at org.apache.hadoop.mapreduce.Job$1.run(Job.java:323)
at org.apache.hadoop.mapreduce.Job$1.run(Job.java:320)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:320)
at org.apache.hadoop.mapreduce.Job.isComplete(Job.java:604)
at org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(Job.java:1349)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1311)
at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)


and the Ignite node shows the following error:
[15:33:33,408][ERROR][pub-#117%null%][HadoopJobTracker] Failed to submit job: 6f75490d-9038-43af-93ba-3d06081f65d2_2
class org.apache.ignite.IgniteCheckedException: class org.apache.ignite.IgniteException: null
at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2JobResourceManager.prepareJobEnvironment(HadoopV2JobResourceManager.java:169)
at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2Job.initialize(HadoopV2Job.java:319)
at org.apache.ignite.internal.processors.hadoop.jobtracker.HadoopJobTracker.job(HadoopJobTracker.java:1123)
at org.apache.ignite.internal.processors.hadoop.jobtracker.HadoopJobTracker.submit(HadoopJobTracker.java:313)
at org.apache.ignite.internal.processors.hadoop.HadoopProcessor.submit(HadoopProcessor.java:173)
at org.apache.ignite.internal.processors.hadoop.HadoopImpl.submit(HadoopImpl.java:69)
at org.apache.ignite.internal.processors.hadoop.proto.HadoopProtocolSubmitJobTask.run(HadoopProtocolSubmitJobTask.java:50)
at org.apache.ignite.internal.processors.hadoop.proto.HadoopProtocolSubmitJobTask.run(HadoopProtocolSubmitJobTask.java:33)
at org.apache.ignite.internal.processors.hadoop.proto.HadoopProtocolTaskAdapter$Job.execute(HadoopProtocolTaskAdapter.java:101)
at org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:560)
at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6618)
at org.apache.ignite.internal.processors.job.GridJobWorke

Re: Insert data in hdfs using ignite

2017-03-28 Thread vkulichenko
Prashant,

Please properly subscribe to the mailing list so that the community can
receive email notifications for your messages. To subscribe, send empty
email to user-subscr...@ignite.apache.org and follow simple instructions in
the reply.


Prashant Singh wrote
> This problem is resolved now. Please give me some pointers to the Java API
> and some Java code examples to perform read/write operations. I want to
> insert data into HDFS using the Java API.
> 
> I am currently able to put data in HDFS from the command line.

If you're using the Hadoop Accelerator, you can continue using the Hadoop
API. That's the whole point of this product: you just plug it into Hadoop
and run your Hadoop application without changes. Having said that, you can
try any Hadoop example.
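For the Java-API question specifically, a write through the standard Hadoop FileSystem API looks like the sketch below. It assumes the hadoop-client jars (and, for an igfs:// URI, the ignite-hadoop jars) are on the classpath and a file system is actually running; the URI, host, and path are illustrative, so treat this as a sketch under those assumptions rather than a drop-in program:

```java
import java.io.OutputStream;
import java.net.URI;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Point at IGFS; a plain hdfs://namenode:port/ URI bypasses Ignite.
        FileSystem fs = FileSystem.get(URI.create("igfs://igfs@localhost/"), conf);
        try (OutputStream out = fs.create(new Path("/PrashantSingh/hello.txt"))) {
            out.write("hello from the Hadoop API\n".getBytes(StandardCharsets.UTF_8));
        }
        fs.close();
    }
}
```

Reads work the same way via fs.open(new Path(...)); because the accelerator sits behind the standard FileSystem interface, the same code runs against plain HDFS or IGFS depending only on the URI.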

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Insert-data-in-hdfs-using-ignite-tp11343p11521.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Insert data in hdfs using ignite

2017-03-23 Thread dkarachentsev
Hi Prashant,

Check if HADOOP_HOME is set and add a slash to the end of the secondary
file system URI (). Also, it's not recommended to use the hadoop-setup.sh
script; follow the instructions in the readme [1].

[1] https://apacheignite-fs.readme.io/docs/installing-on-apache-hadoop

-Dmitry.




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Insert-data-in-hdfs-using-ignite-tp11343p11390.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: MapReduce Job stuck when using ignite hadoop accelerator

2016-12-01 Thread Andrey Mashenkov
Hi Kaiming,

 ^-- Public thread pool [active=80, idle=0, qSize=944]
There is a long queue, and 80 busy threads that seem to make no progress.
It looks like all threads are blocked. Please attach a thread dump.


On Tue, Nov 29, 2016 at 6:40 AM, Kaiming Wan <344277...@qq.com> wrote:

> I can find the WARN in the logs. The job is likely stuck because of
> long-running cache operations. How can I locate what causes them? My
> map-reduce job runs successfully without the Ignite Hadoop accelerator.
>
> [GridCachePartitionExchangeManager] Found long running cache operations,
> dump IO statistics.
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.70518.x6.nabble.com/MapReduce-Job-stuck-when-using-ignite-hadoop-accelerator-tp9216p9251.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Best regards,
Andrey V. Mashenkov
Tel. +7-921-932-61-82


Re: MapReduce Job stuck when using ignite hadoop accelerator

2016-11-28 Thread Kaiming Wan
I can find the WARN in the logs. The job is likely stuck because of
long-running cache operations. How can I locate what causes them? My
map-reduce job runs successfully without the Ignite Hadoop accelerator.

[GridCachePartitionExchangeManager] Found long running cache operations,
dump IO statistics.




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/MapReduce-Job-stuck-when-using-ignite-hadoop-accelerator-tp9216p9251.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


MapReduce Job stuck when using ignite hadoop accelerator

2016-11-28 Thread Kaiming Wan
^-- CPU [cur=0.03%, avg=0.18%, GC=0%]
^-- Heap [used=19992MB, free=63.39%, comm=39744MB]
^-- Non heap [used=148MB, free=98.58%, comm=153MB]
^-- Public thread pool [active=80, idle=0, qSize=944]
^-- System thread pool [active=0, idle=80, qSize=0]
^-- Outbound messages queue [size=0]
[18:26:40,674][INFO ][grid-timeout-worker-#201%null%][IgniteKernal] 
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=1988aad6, name=null, uptime=01:28:00:439]
^-- H/N/C [hosts=3, nodes=3, CPUs=120]
^-- CPU [cur=0.03%, avg=0.18%, GC=0%]
^-- Heap [used=1MB, free=63.38%, comm=39744MB]
^-- Non heap [used=148MB, free=98.58%, comm=153MB]
^-- Public thread pool [active=80, idle=0, qSize=944]
^-- System thread pool [active=0, idle=80, qSize=0]
^-- Outbound messages queue [size=0]




How can I solve this problem, and what causes it?

From the log info, I see free heap space diminishing by 0.02% every few
minutes, and a "thread pool starvation detected" warning appearing now and
then.




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/MapReduce-Job-stuck-when-using-ignite-hadoop-accelerator-tp9216.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.



Re: [EXTERNAL] Re: Query on using Ignite as persistence data and processing layer

2016-11-09 Thread vkulichenko
No, this is not available yet. Here is the corresponding ticket:
https://issues.apache.org/jira/browse/IGNITE-961

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Query-on-using-Ignite-as-persistence-data-and-processing-layer-tp8775p8858.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Query on using Ignite as persistence data and processing layer

2016-11-08 Thread vkulichenko
Hi,

It's fine to use Ignite as the main and only data storage for your
application, but Ignite is not a persistent storage: data lives in memory,
so there is always a chance of data loss. If that is something you can't
live with, then don't rip and replace; use Ignite with a persistence store
underneath.
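In Ignite of this era, the standard way to put a persistence store underneath a cache is the CacheStore read/write-through mechanism. A minimal configuration fragment, assuming a user-written store implementation (`MyJdbcStore` and the cache name are hypothetical, and `Person` stands in for your value type):

```java
// Fragment: wiring a write-through store into a cache configuration.
// MyJdbcStore is a hypothetical CacheStore implementation backed by a DB.
CacheConfiguration<Long, Person> ccfg = new CacheConfiguration<>("persons");
ccfg.setCacheStoreFactory(FactoryBuilder.factoryOf(MyJdbcStore.class));
ccfg.setReadThrough(true);   // cache misses are loaded from the DB
ccfg.setWriteThrough(true);  // cache updates are propagated to the DB
```

With write-through enabled, every cache update reaches the database synchronously, which addresses the data-loss concern at the cost of some write latency.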

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Query-on-using-Ignite-as-persistence-data-and-processing-layer-tp8775p8795.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Query on using Ignite as persistence data and processing layer

2016-11-08 Thread chevy
Hi,

I am looking at the feasibility of using Ignite as a persistence layer
instead of a MySQL/Postgres DB, where we do a lot of processing before
sending data to our REST API.

1. Is it a good idea to use Ignite as storage?
2. Is it efficient to do so much data processing in Ignite?
3. What is the availability of Ignite, i.e. the probability of a node going
down causing data loss?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Query-on-using-Ignite-as-persistence-data-and-processing-layer-tp8775.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Trouble with Using Ignite 1.8 ODBC Driver

2016-09-13 Thread Igor Sapego
You can use installer from here: [1].

What kind of problem do you mean?

[1] -
https://github.com/isapego/ignite/tree/ignite-3868/modules/platforms/cpp/odbc/install

Best Regards,
Igor

On Tue, Sep 13, 2016 at 4:57 PM, amitpa <ami...@nrifintech.com> wrote:

> Also, is this a problem with Visual Studio 2015 and Ignite that doesn't
> happen when we use other VS versions, like 2010?
>
>
>
> --
> View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Trouble-with-Using-Ignite-1-8-ODBC-Driver-tp7656p7707.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>

