Re: Ignite Cluster Error

2017-12-29 Thread Usman Waheed
Correction at my end: we increased the timeouts to see whether that would help
resolve our problem, but no luck. So we can set them back to the default
settings.

I am also pasting some more settings:

While searching for a resolution, I stumbled upon
https://issues.apache.org/jira/browse/IGNITE-6555, which I don't think is
related to my problem.


[The additional settings that followed were stripped by the archive rendering.]
On Fri, Dec 29, 2017 at 12:42 PM, Evgenii Zhuravlev <
e.zhuravlev...@gmail.com> wrote:

> Hi,
>
> Why did you set so big timeouts? Why don't default timeouts work for you?
>
> Evgenii
>
> 2017-12-29 10:35 GMT+03:00 Usman Waheed :
>
>> Hi,
>>
>> We have deployed Apache Ignite fabric 2.3.
>>
>> We get the below error when trying to run on more than 1 node.
>>
>>  GridTimeoutProcessor: Timeout has occurred: CancelableTask
>> [id=970ee7b2061-c1565aa4-510c-4046-9ebb-46efd861b4df,
>> endTime=1512558510454, period=5000, cancel=false,
>> task=org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryManager$BackupCleaner@2c898c3e]
>>
>>
>>
>> The code runs fine on one node; whenever a new node joins, it gives the
>> above error. We are using the below properties to form the cluster. Any
>> pointers or help would be much appreciated.
>>
>>
>>
>> [The XML configuration that followed was stripped by the archive; the
>> surviving fragments show three property values ("60", "5", "2") and an IP
>> finder address list containing localIP:47500..47509 and remoteIP:47500..47509.]
>


Re: Ignite Cluster Error

2017-12-29 Thread Evgenii Zhuravlev
Please provide logs from all nodes with -DIGNITE_QUIET=false for
investigation
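
For example, assuming the standard ignite.sh startup script, the flag can be
passed through to the JVM with the -J prefix:

    bin/ignite.sh -J-DIGNITE_QUIET=false config/my-config.xml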

Evgenii

2017-12-29 11:03 GMT+03:00 Usman Waheed :

> Correction at my end: we increased the timeouts to see whether that would
> help resolve our problem, but no luck. So we can set them back to the
> default settings.
>
> I am also pasting some more settings:
>
> While searching for a resolution, I stumbled upon
> https://issues.apache.org/jira/browse/IGNITE-6555, which I don't think is
> related to my problem.
>
>
> [The additional settings that followed were stripped by the archive rendering.]
>
> On Fri, Dec 29, 2017 at 12:42 PM, Evgenii Zhuravlev <
> e.zhuravlev...@gmail.com> wrote:
>
>> Hi,
>>
>> Why did you set so big timeouts? Why don't default timeouts work for you?
>>
>> Evgenii
>>
>> 2017-12-29 10:35 GMT+03:00 Usman Waheed :
>>
>>> Hi,
>>>
>> We have deployed Apache Ignite fabric 2.3.
>>>
>>> We get the below error when trying to run on more than 1 node.
>>>
>>>  GridTimeoutProcessor: Timeout has occurred: CancelableTask
>>> [id=970ee7b2061-c1565aa4-510c-4046-9ebb-46efd861b4df,
>>> endTime=1512558510454, period=5000, cancel=false,
>> task=org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryManager$BackupCleaner@2c898c3e]
>>>
>>>
>>>
>>> The code runs fine on one node; whenever a new node joins, it gives the
>>> above error. We are using the below properties to form the cluster. Any
>>> pointers or help would be much appreciated.
>>>
>>>
>>>
>>> [The XML configuration that followed was stripped by the archive; the
>>> surviving fragments show three property values ("60", "5", "2") and an IP
>>> finder address list containing localIP:47500..47509 and remoteIP:47500..47509.]
>>
>


Re: Ignite Cluster Error

2017-12-29 Thread Usman Waheed
Thanks Evgenii, will get back to this thread.

On Fri, Dec 29, 2017 at 1:09 PM, Evgenii Zhuravlev  wrote:

> Please provide logs from all nodes with -DIGNITE_QUIET=false for
> investigation
>
> Evgenii
>
> 2017-12-29 11:03 GMT+03:00 Usman Waheed :
>
>> Correction at my end: we increased the timeouts to see whether that would
>> help resolve our problem, but no luck. So we can set them back to the
>> default settings.
>>
>> I am also pasting some more settings:
>>
>> While searching for a resolution, I stumbled upon
>> https://issues.apache.org/jira/browse/IGNITE-6555, which I don't think is
>> related to my problem.
>>
>>
>> [The additional settings that followed were stripped by the archive rendering.]
>>
>> On Fri, Dec 29, 2017 at 12:42 PM, Evgenii Zhuravlev <
>> e.zhuravlev...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> Why did you set so big timeouts? Why don't default timeouts work for you?
>>>
>>> Evgenii
>>>
>>> 2017-12-29 10:35 GMT+03:00 Usman Waheed :
>>>
 Hi,

 We have deployed Apache Ignite fabric 2.3.

 We get the below error when trying to run on more than 1 node.

  GridTimeoutProcessor: Timeout has occurred: CancelableTask
 [id=970ee7b2061-c1565aa4-510c-4046-9ebb-46efd861b4df,
 endTime=1512558510454, period=5000, cancel=false,
 task=org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryManager$BackupCleaner@2c898c3e]



 The code runs fine on one node; whenever a new node joins, it gives the
 above error. We are using the below properties to form the cluster. Any
 pointers or help would be much appreciated.



 [The XML configuration that followed was stripped by the archive; the
 surviving fragments show property values "60", "5", and
 statisticsPrintFrequency="2", plus an IP finder address list containing
 localIP:47500..47509 and remoteIP:47500..47509.]

>>>
>>
>


Re: using JDBC with Ignite cluster, configured with persistent storage

2017-12-29 Thread Вячеслав Коптилин
Hello,

> Does persistent storage mean that I have to make the cluster active before
> I can do any DDL (or even request the list of tables)?
If Ignite Native Persistence is used, then you have to manually activate the
cluster [1].
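
A minimal sketch of the activation call from Java (the configuration path is
illustrative):

    Ignite ignite = Ignition.start("config/my-cluster.xml");
    // With native persistence the cluster starts inactive; activate it once
    // all nodes have joined.
    ignite.active(true);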

> Error: Failed to handle JDBC request because node is stopping.
(state=5,code=0)
> java.sql.SQLException: Failed to handle JDBC request because node is
stopping.
Please make sure that the cluster is running and activated.

[1] https://apacheignite.readme.io/docs/cluster-activation

Thanks!


2017-12-05 1:45 GMT+03:00 soroka21 :

> Hi,
> I'm trying to use a JDBC connection to run SQL on a cluster configured with
> persistent storage.
> Does persistent storage mean that I have to make the cluster active before
> I can do any DDL (or even request the list of tables)?
> Below is the output of sqlline, which I'm trying to use:
>
>
> 0: jdbc:ignite:thin://10.238.42.86/> !tables
> Error: Failed to handle JDBC request because node is stopping.
> (state=5,code=0)
> java.sql.SQLException: Failed to handle JDBC request because node is
> stopping.
> at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:671)
> at org.apache.ignite.internal.jdbc.thin.JdbcThinDatabaseMetadata.getTables(JdbcThinDatabaseMetadata.java:740)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at sqlline.Reflector.invoke(Reflector.java:75)
> at sqlline.Commands.metadata(Commands.java:194)
> at sqlline.Commands.tables(Commands.java:332)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
> at sqlline.SqlLine.dispatch(SqlLine.java:791)
> at sqlline.SqlLine.begin(SqlLine.java:668)
> at sqlline.SqlLine.start(SqlLine.java:373)
> at sqlline.SqlLine.main(SqlLine.java:265)
>
> 0: jdbc:ignite:thin://10.238.42.86/> CREATE TABLE table4(_id varchar,F00
> varchar,F01 bigint,F02 double,F03 timestamp,F04 varchar,F05 bigint,F06
> double,F07 timestamp,F08 varchar,F09 bigint, PRIMARY KEY(_id)) WITH
> "cache_name=table4, value_type=table4";
> Error: Failed to handle JDBC request because node is stopping.
> (state=5,code=0)
> java.sql.SQLException: Failed to handle JDBC request because node is
> stopping.
> at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:671)
> at org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:130)
> at org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute(JdbcThinStatement.java:299)
> at sqlline.Commands.execute(Commands.java:823)
> at sqlline.Commands.sql(Commands.java:733)
> at sqlline.SqlLine.dispatch(SqlLine.java:795)
> at sqlline.SqlLine.begin(SqlLine.java:668)
> at sqlline.SqlLine.start(SqlLine.java:373)
> at sqlline.SqlLine.main(SqlLine.java:265)
>
> Cluster configuration looks like this (XML tags reconstructed; the archive
> stripped them, and the persistenceEnabled property name is inferred from the
> surviving value="true" fragment):
>
> <?xml version="1.0" encoding="UTF-8"?>
> <beans xmlns="http://www.springframework.org/schema/beans"
>        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>        xsi:schemaLocation="
>        http://www.springframework.org/schema/beans
>        http://www.springframework.org/schema/beans/spring-beans.xsd">
>     <bean class="org.apache.ignite.configuration.IgniteConfiguration">
>         <property name="discoverySpi">
>             <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
>                 <property name="ipFinder">
>                     <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>                         <property name="addresses">
>                             <list>
>                                 <value>xxx.xxx.xxx.xxx:47500..47509</value>
>                                 <value>yyy.yyy.yyy.yyy:47500..47509</value>
>                             </list>
>                         </property>
>                     </bean>
>                 </property>
>             </bean>
>         </property>
>         <property name="dataStorageConfiguration">
>             <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
>                 <property name="defaultDataRegionConfiguration">
>                     <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
>                         <!-- Property name inferred from the surviving value="true" -->
>                         <property name="persistenceEnabled" value="true"/>
>                     </bean>
>                 </property>
>             </bean>
>         </property>
>     </bean>
> </beans>
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/

Re: Can the cache key and affinity key be different

2017-12-29 Thread slava.koptilin
Hi Sumanta,

> Is there any other way we can achieve the same w/o redefining the cache key?
In that case, AffinityKeyMapper [1] can be used, I think.
This pluggable API provides the ability to map a cache key to an affinity
key, which is used to determine the server node on which the key will be
cached.

[1]
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/affinity/AffinityKeyMapper.html
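
For example, a minimal sketch of such a mapper (OrderKey and its customer-ID
field are hypothetical):

    public class CustomerAffinityMapper implements AffinityKeyMapper {
        // Route each entry by its customer ID rather than by the full key.
        @Override public Object affinityKey(Object key) {
            return ((OrderKey)key).getCustomerId();
        }

        // Called when the mapper should clear internal state; none here.
        @Override public void reset() {
            // No-op.
        }
    }

    // Plugged in via CacheConfiguration#setAffinityMapper(new CustomerAffinityMapper()).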

Thanks!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Simple GETs/PUTs which one is better - Rest API, Thin , Thick OR Java API

2017-12-29 Thread Naveen
Thanks Evgenii

I am just trying to outline the pros & cons of each with respect to my own
requirements. To reiterate:
 1. Most of our Ignite consumers can consume REST services only.
 2. We are OK with using Ignite as a simple data grid, primarily for reading
and writing data, with no bulk processing; all it should serve is quick reads
(read-heavy).
 3. Read throughput should be around 20K and above.
 4. High availability.

1. REST API
Pros: Built-in REST API features to query the data from the cache (no need
for any external development).

Cons:
   a. Need to build POJO classes for each table and keep refreshing them as
and when your table structure changes
   b. PUT does not work with the REST API in case the value is a custom object
   c. No FT and no load balancing features at the moment; we only have an
option to mention the discovery URL of one Ignite node as part of the URL
   d. No idea how it performs compared with the other protocols

2. Thin Driver

Pros: Everyone is comfortable with SQL, easy to code. Fully SQL compliant.

Cons: Failover is not possible, since we can only give one Ignite node's port
as part of the connection URL.

3. Thick Driver

Pros: Same as the thin driver: everyone is comfortable with SQL and it is
easy to code. Load balancing & FT are available, since we use the config XML.

Cons: Some of the SQL features are not available.

4. Java API

Pros: Performance may be better compared to the others.

Cons: Every time a new cache is created or an existing cache is altered, we
need to refresh the POJOs and update the classes on all Ignite nodes'
classpaths.

Can you please shed some light on this and suggest the right approach and
best practices for building a solution around Ignite?

And wishing a very happy and prosperous New Year to all the Igniters.

Thanks
Naveen



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Simple GETs/PUTs which one is better - Rest API, Thin , Thick OR Java API

2017-12-29 Thread Evgenii Zhuravlev
>  a. Need to build POJO classes for each table and keep refreshing them as
and when your table structure changes

Have you tried to work with BinaryObjects here?

> d. No idea how it performs compared with the other protocols

It can easily be measured for your specific case: just run the same operation
through the different APIs and compare.

>Cons: Every time a new cache is created or an existing cache is altered, we
need to refresh the POJOs and update the classes on all Ignite nodes'
classpaths.

No, you can work without POJOs at all:
https://apacheignite.readme.io/docs/binary-marshaller
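
For example, a minimal sketch of the POJO-free approach (the type name,
fields, and cache name are illustrative):

    // Build a value without having any class on the classpath.
    BinaryObject val = ignite.binary().builder("Person")
        .setField("name", "John")
        .setField("age", 30)
        .build();

    // Work with the cache in binary form for both reads and writes.
    IgniteCache<Long, BinaryObject> cache = ignite.cache("person").withKeepBinary();
    cache.put(1L, val);
    BinaryObject loaded = cache.get(1L);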

Evgenii



2017-12-29 16:30 GMT+03:00 Naveen :

> Thanks Evgenii
>
> I am just trying to outline the pros & cons of each with respect to my own
> requirements. To reiterate:
>  1. Most of our Ignite consumers can consume REST services only.
>  2. We are OK with using Ignite as a simple data grid, primarily for reading
> and writing data, with no bulk processing; all it should serve is quick
> reads (read-heavy).
>  3. Read throughput should be around 20K and above.
>  4. High availability.
>
> 1. REST API
> Pros: Built-in REST API features to query the data from the cache (no need
> for any external development).
>
> Cons:
>    a. Need to build POJO classes for each table and keep refreshing them as
> and when your table structure changes
>    b. PUT does not work with the REST API in case the value is a custom object
>    c. No FT and no load balancing features at the moment; we only have an
> option to mention the discovery URL of one Ignite node as part of the URL
>    d. No idea how it performs compared with the other protocols
>
> 2. Thin Driver
>
> Pros: Everyone is comfortable with SQL, easy to code. Fully SQL compliant.
>
> Cons: Failover is not possible, since we can only give one Ignite node's
> port as part of the connection URL.
>
> 3. Thick Driver
>
> Pros: Same as the thin driver: everyone is comfortable with SQL and it is
> easy to code. Load balancing & FT are available, since we use the config XML.
>
> Cons: Some of the SQL features are not available.
>
> 4. Java API
>
> Pros: Performance may be better compared to the others.
>
> Cons: Every time a new cache is created or an existing cache is altered, we
> need to refresh the POJOs and update the classes on all Ignite nodes'
> classpaths.
>
> Can you please shed some light on this and suggest the right approach and
> best practices for building a solution around Ignite?
>
> And wishing a very happy and prosperous New Year to all the Igniters.
>
> Thanks
> Naveen
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: using JDBC with Ignite cluster, configured with persistent storage

2017-12-29 Thread slava.koptilin
Hello!

I have just checked your configuration against the latest code base (Apache
Ignite master), and it works as expected.
In case of an inactive cluster, sqlline should report something like the
following:

class org.apache.ignite.IgniteException: Can not perform the operation
because the cluster is inactive. Note, that the cluster is considered
inactive by default if Ignite Persistent Store is used to let all the nodes
join the cluster. To activate the cluster call Ignite.active(true).
(state=5,code=0)
java.sql.SQLException: class org.apache.ignite.IgniteException: Can not
perform the operation because the cluster is inactive. Note, that the cluster
is considered inactive by default if Ignite Persistent Store is used to let
all the nodes join the cluster. To activate the cluster call
Ignite.active(true).
at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:648)
at org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:130)
at org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute(JdbcThinStatement.java:299)
at sqlline.Commands.execute(Commands.java:823)
at sqlline.Commands.sql(Commands.java:733)
at sqlline.SqlLine.dispatch(SqlLine.java:795)
at sqlline.SqlLine.begin(SqlLine.java:668)
at sqlline.SqlLine.start(SqlLine.java:373)
at sqlline.SqlLine.main(SqlLine.java:265)

So it seems to me that your issue is not related to activation/deactivation
of the cluster.
Could you please reproduce the problem and provide us with the log files from
all participating nodes, especially from the node which is used by the JDBC
thin driver?

Thanks!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: SQL and backing cache question

2017-12-29 Thread slava.koptilin
Hi Naveen,

Yes, you can get the whole key and value objects.
You need to do the following steps:

1. Define the key and value types in the following way:

package com.ril.edif;

public class CityKey {
    // The field used as the primary key must be annotated with
    // @AffinityKeyMapped and must be upper case ('city_id' would not be a
    // correct name).
    @AffinityKeyMapped
    private Long CITY_ID;

    public CityKey(Long id) {
        CITY_ID = id;
    }

    ...
}

public class ValueType {
    private String city_name;
    private String state_name;

    public ValueType(String city_name, String state_name) {
        this.city_name = city_name;
        this.state_name = state_name;
    }
    ...
}

2. Create the 'city_details' table and insert some value(s):

cache.query(new SqlFieldsQuery(
    "CREATE TABLE city_details (city_id LONG PRIMARY KEY, " +
    "city_name VARCHAR, state_name VARCHAR) WITH \"" +
    "cache_name=city_details, " +
    "key_type=com.ril.edif.CityKey, " +
    "value_type=com.ril.edif.ValueType\""));

SqlFieldsQuery query = new SqlFieldsQuery(
    "INSERT INTO city_details (city_id, city_name, state_name) VALUES (?, ?, ?)");
query.setArgs(1L, "Forest Hill", "California");
cache.query(query).getAll();

3. And the final step:

IgniteCache<CityKey, ValueType> cache = ignite.cache("city_details");
ValueType value = cache.get(new CityKey(1L));
Thanks!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Can the cache key and affinity key be different

2017-12-29 Thread vkulichenko
AffinityKeyMapper is deprecated and should not be used. I would recommend
decoupling the data models for Ignite and JPA, and then properly optimizing
the Ignite model. Using the same model for two completely different tasks can
be tricky.

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


SQL Schema Updates using Annotations

2017-12-29 Thread Cody Yancey
Suppose I have 5 service instances that all form an Ignite cluster (v2.1).
The POJO I have for a cache looks like:

public static class MyCachePojo {
    @QuerySqlField(name="fieldA")
    private java.lang.String fieldA;
    @QuerySqlField(name="fieldB")
    private java.lang.String fieldB;
    @QuerySqlField(name="fieldC")
    private java.lang.String fieldC;
}

And I am trying to do a rolling update (zero-downtime deploy) that adds a
field to this POJO, and I want the field to be available from SQL.

public static class MyCachePojo {
    @QuerySqlField(name="fieldA")
    private java.lang.String fieldA;
    @QuerySqlField(name="fieldB")
    private java.lang.String fieldB;
    @QuerySqlField(name="fieldC")
    private java.lang.String fieldC;
    @QuerySqlField(name="fieldD")
    private java.lang.String fieldD;
}


When an instance starts up and joins the Ignite cluster, it creates a
CacheConfiguration object, calls setIndexedTypes on it, and uses it to
getOrCreate the MyCachePojo cache (see the sketch below). The part I found
surprising was that the new SQL configuration is ignored when calling
getOrCreate in the case where the cache already exists (such as in a rolling
upgrade).
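
Roughly, the startup flow looks like this (the cache name and key type are
illustrative):

    CacheConfiguration<String, MyCachePojo> cfg = new CacheConfiguration<>("myCache");
    // Register the key/value classes whose @QuerySqlField fields become SQL columns.
    cfg.setIndexedTypes(String.class, MyCachePojo.class);
    IgniteCache<String, MyCachePojo> cache = ignite.getOrCreateCache(cfg);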
Subsequent SQL queries involving fieldD will always fail with Column Not
Found. Standard put/get operations can serialize and deserialize the new
field just fine.
So my question is, is there a way I can get Ignite to always pick up new
columns on rolling restart without handcrafting DDL statements or using
clever reflection?


Thanks!
Cody


Spark data frames integration merged

2017-12-29 Thread Valentin Kulichenko
Igniters,

Great news! We have completed and merged the first part of the integration
with Spark data frames [1]. It contains an implementation of a Spark data
source that allows using the DataFrame API to query Ignite data, as well as
joining it with data frames originating from other sources.
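
A minimal sketch of reading Ignite data through the data source (the format
name and option keys shown are indicative only; see [1] for the exact API):

    Dataset<Row> df = spark.read()
        .format("ignite")                       // assumed data source short name
        .option("config", "config/ignite.xml")  // assumed option: Ignite config path
        .option("table", "person")              // assumed option: Ignite SQL table
        .load();
    df.show();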

Next planned steps are the following:
- Implement a custom execution strategy to avoid transferring data from
Ignite to Spark when possible [2]. This should give a serious performance
improvement in cases where only Ignite tables participate in a query.
- Implement the ability to save a data frame into Ignite via the
DataFrameWriter API [3].

[1] https://issues.apache.org/jira/browse/IGNITE-3084
[2] https://issues.apache.org/jira/browse/IGNITE-7077
[3] https://issues.apache.org/jira/browse/IGNITE-7337

Nikolay Izhikov, thanks for the contribution and for all the hard work!

-Val


Re: SQL Schema Updates using Annotations

2017-12-29 Thread vkulichenko
Cody,

If you want to dynamically change object schema, then you should avoid
deploying classes on server nodes. Ignite binary format [1] will make sure
that the change happens transparently, so you don't even need to perform
rolling upgrade.

However, this will not affect SQL schema, i.e. will not make a new field
queryable or indexed. To achieve that you can use ALTER TABLE command [2].
Actually, if you have a SQL use case, I would recommend to create schema via
DDL in the first place, and not use POJOs at all. That can significantly
simplify the usage.

[1] https://apacheignite.readme.io/docs/binary-marshaller
[2] https://apacheignite-sql.readme.io/docs/ddl
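
For example, a minimal sketch of adding the new field through the SQL API
(assuming the table is named after the value class, as with setIndexedTypes):

    cache.query(new SqlFieldsQuery(
        "ALTER TABLE MyCachePojo ADD COLUMN fieldD VARCHAR")).getAll();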

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Spark data frames integration merged

2017-12-29 Thread Denis Magda
Great news,

Thanks Nikolay and Val!

Nikolay, could you document the feature before the release [1]? I’ve granted
you the required permissions.

More on the doc process can be found here [2].

[1] https://issues.apache.org/jira/browse/IGNITE-7345 

[2] https://cwiki.apache.org/confluence/display/IGNITE/How+to+Document 


—
Denis

> On Dec 29, 2017, at 1:22 PM, Valentin Kulichenko 
>  wrote:
> 
> Igniters,
> 
> Great news! We have completed and merged the first part of the integration
> with Spark data frames [1]. It contains an implementation of a Spark data
> source that allows using the DataFrame API to query Ignite data, as well as
> joining it with data frames originating from other sources.
> 
> Next planned steps are the following:
> - Implement a custom execution strategy to avoid transferring data from
> Ignite to Spark when possible [2]. This should give a serious performance
> improvement in cases where only Ignite tables participate in a query.
> - Implement the ability to save a data frame into Ignite via the
> DataFrameWriter API [3].
> 
> [1] https://issues.apache.org/jira/browse/IGNITE-3084 
> 
> [2] https://issues.apache.org/jira/browse/IGNITE-7077 
> 
> [3] https://issues.apache.org/jira/browse/IGNITE-7337 
> 
> 
> Nikolay Izhikov, thanks for the contribution and for all the hard work!
> 
> -Val



Re: SQL Schema Updates using Annotations

2017-12-29 Thread Cody Yancey
Thank you for your quick response!

I'm actually using Ignite as an embedded shared-memory fabric for a
multi-instance HTTP service. The vast majority of the business logic for
this app is done through puts, gets and per-partition scan queries, for
which we heavily leverage the POJOs. Still, the SQL layer is super useful
for our users to quickly create dashboards for tracking purposes. Our
deployment is already heavily automated, and I was hoping to figure out a
way of keeping the SQL schema in sync with the POJO schema in a similarly
automated fashion. Is ALTER TABLE ADD COLUMN even supported in v2.1?

Thanks,
Cody

On Fri, Dec 29, 2017 at 2:34 PM, vkulichenko 
wrote:

> Cody,
>
> If you want to dynamically change the object schema, then you should avoid
> deploying classes on server nodes. The Ignite binary format [1] will make
> sure that the change happens transparently, so you don't even need to
> perform a rolling upgrade.
>
> However, this will not affect the SQL schema, i.e. it will not make a new
> field queryable or indexed. To achieve that you can use the ALTER TABLE
> command [2]. Actually, if you have a SQL use case, I would recommend
> creating the schema via DDL in the first place and not using POJOs at all.
> That can significantly simplify the usage.
>
> [1] https://apacheignite.readme.io/docs/binary-marshaller
> [2] https://apacheignite-sql.readme.io/docs/ddl
>
> -Val
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: SQL Schema Updates using Annotations

2017-12-29 Thread vkulichenko
No, it was added in 2.3.

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Spark data frames integration merged

2017-12-29 Thread Nikolay Izhikov
Thank you, guys.

Val, thanks for all the reviews, advice, and patience.

Anton, thanks for the Ignite wisdom you share with me.

Looking forward to the next issues :)

P.S. Happy New Year to the whole Ignite community!

On Fri, 29 Dec 2017 at 13:22 -0800, Valentin Kulichenko wrote:
> Igniters,
> 
> Great news! We have completed and merged the first part of the integration
> with Spark data frames [1]. It contains an implementation of a Spark data
> source that allows using the DataFrame API to query Ignite data, as well as
> joining it with data frames originating from other sources.
> 
> Next planned steps are the following:
> - Implement a custom execution strategy to avoid transferring data from
> Ignite to Spark when possible [2]. This should give a serious performance
> improvement in cases where only Ignite tables participate in a query.
> - Implement the ability to save a data frame into Ignite via the
> DataFrameWriter API [3].
> 
> [1] https://issues.apache.org/jira/browse/IGNITE-3084
> [2] https://issues.apache.org/jira/browse/IGNITE-7077
> [3] https://issues.apache.org/jira/browse/IGNITE-7337
> 
> Nikolay Izhikov, thanks for the contribution and for all the hard
> work!
> 
> -Val