Apache Ignite Hackathon: Open Source contribution is simple

2018-08-24 Thread Dmitriy Pavlov
Hi Ignite Users, Developers, and Enthusiasts,

It's natural to assume that a newcomer has little to offer Apache
Ignite. However, you may be surprised at how much each newcomer can help,
even now.

I propose running a hackathon at the next Apache Ignite meetup. In
parallel with the talks, an attendee can pick up a simple ticket, run the full
patch submission process during the meetup, and make an open source contribution.
Ignite experts will be there and will be able to help everyone interested.

The first place to run it is the Ignite meetup in Saint Petersburg, Russia,
because I know that several committers live there.

- If you're a user or a contributor, would you like to join such an activity?
- If you're a committer, will you be able to come and help with review and
merge?
- Would you propose simple tickets that can be done in one hour or
even several minutes?

Any feedback is very welcome.

Sincerely,
Dmitriy Pavlov


Cross-platform put/get

2018-08-24 Thread wengyao04
Hi, we have a C++ Ignite client and a Java client that read/write to the same
cache.
Our key is a composite object, my cache is  IgniteCache

--- C++ ---
namespace ignite
{
namespace binary
{
struct Key
{
    std::string id;
    bool flag;

    Key()
        : id(), flag(false)
    {
    }

    Key(const std::string& id, bool flag = false)
        : id(id)
        , flag(flag)
    {
    }

    Key(const Key& rhs)
        : id(rhs.id)
        , flag(rhs.flag)
    {
    }

    bool operator<(const Key& rhs) const
    {
        return (id == rhs.id) ? flag < rhs.flag : id < rhs.id;
    }

    bool operator==(const Key& rhs) const
    {
        return id == rhs.id && flag == rhs.flag;
    }

    bool operator!=(const Key& rhs) const
    {
        return !(*this == rhs);
    }

    ~Key()
    {
    }

    // METHODS
    bool isEmpty() const { return id.empty(); }
};

template<>
struct BinaryType<Key>
{
    static int32_t GetTypeId()
    {
        return GetBinaryStringHashCode("Key");
    }

    static void GetTypeName(std::string& dst)
    {
        dst = "Key";
    }

    static int32_t GetFieldId(const char* name)
    {
        return GetBinaryStringHashCode(name);
    }

    static bool IsNull(const Key& key)
    {
        return key.isEmpty();
    }

    static void GetNull(Key& key)
    {
        key = Key();
    }

    static void Write(BinaryWriter& writer, const Key& key)
    {
        writer.WriteString("id", key.id);
        writer.WriteBool("flag", key.flag);
    }

    static void Read(BinaryReader& reader, Key& key)
    {
        key.id = reader.ReadString("id");
        key.flag = reader.ReadBool("flag");
    }

    int32_t GetHashCode(const Key& key) const
    {
        const int32_t prime = 31;
        int32_t boolHashCode = key.flag ? 1231 : 1237;
        int32_t strHashCode = 0;
        for (int i = 0; i < (int)key.id.size(); ++i)
        {
            strHashCode = prime * strHashCode + key.id[i];
        }
        return strHashCode + prime * boolHashCode;
    }
};
}
}

Our java key is
---
import org.apache.ignite.binary.BinaryObjectException;
import org.apache.ignite.binary.BinaryReader;
import org.apache.ignite.binary.BinaryWriter;
import org.apache.ignite.binary.Binarylizable;

public class Key implements Binarylizable {

    public String id;
    public boolean flag;
    private final static int prime = 31;

    public Key() {
        // no-op
    }

    public Key(String id) {
        this.id = id;
        this.flag = false;
    }

    public Key(String id, boolean flag) {
        this.id = id;
        this.flag = flag;
    }

    @Override
    public int hashCode() {
        int boolHashCode = flag ? 1231 : 1237;
        int strHashCode = 0;

        for (int i = 0; i < id.length(); i++) {
            strHashCode = prime * strHashCode + id.charAt(i);
        }

        return strHashCode + prime * boolHashCode;
    }

    @Override
    public boolean equals(Object o) {
        if (o == this) {
            return true;
        }

        if (!(o instanceof Key)) {
            return false;
        }

        Key cast = (Key) o;
        return this.id.equals(cast.id) && this.flag == cast.flag;
    }

    @Override
    public void readBinary(BinaryReader reader) throws BinaryObjectException {
        id = reader.readString("id");
        flag = reader.readBoolean("flag");
    }

    @Override
    public void writeBinary(BinaryWriter writer) throws BinaryObjectException {
        writer.writeString("id", id);
        writer.writeBoolean("flag", flag);
    }

    @Override
    public String toString() {
        return "Key = [" + id + " " + flag + "]";
    }
}
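
For reference, here is a standalone sketch of the arithmetic that the C++ GetHashCode and the Java hashCode() above are intended to share. The class and method names (KeyHashCheck, keyHash) are hypothetical, introduced only to check the arithmetic in isolation; whether Ignite actually uses this value for affinity is exactly the question below:

```java
public class KeyHashCheck {
    /** Mirrors Key.hashCode() above: 31-based String hash plus the 1231/1237 boolean constants. */
    static int keyHash(String id, boolean flag) {
        int prime = 31;
        int boolHashCode = flag ? 1231 : 1237;
        int strHashCode = 0;
        for (int i = 0; i < id.length(); i++)
            strHashCode = prime * strHashCode + id.charAt(i);
        return strHashCode + prime * boolHashCode;
    }

    public static void main(String[] args) {
        // The string part matches java.lang.String.hashCode ("abc".hashCode() == 96354).
        if (keyHash("abc", true) != "abc".hashCode() + 31 * 1231)
            throw new AssertionError();
        // The flag changes the result, so ("abc", true) and ("abc", false) are distinct keys.
        if (keyHash("abc", true) == keyHash("abc", false))
            throw new AssertionError();
        System.out.println("ok");
    }
}
```

Both sides must compute the same value for the same key, byte for byte, or the two clients will disagree on which partition a key belongs to.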

I see that people put GetHashCode in the C++ BinaryType; some make it static,
and others make it a const method. This GetHashCode implements the same hash
code as the Java Key class. Where is this GetHashCode used? If I write a
record from the C++ client, is this key the cache hash key? Do I also need to
implement a hash in the C++ Key class? Does GetHashCode need to be static?

How does Ignite ensure cross-platform read/write when the key is not a
primitive but a composite object?
Thanks




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Cache Configuration Templates

2018-08-24 Thread Dave Harvey
I found what I've read in this area confusing, and here is my current
understanding.

When creating an IgniteConfiguration in Java or XML, I can specify the
cacheConfiguration property, which is an array of CacheConfigurations.
This causes Ignite to preserve these configurations, but it will not
cause Ignite to create a cache. If I call Ignite.getOrCreateCache(String)
and there is an existing cache, I will get that; otherwise a new cache will
be created using that configuration.

It seems like creating a cache with a configuration will add to this list,
because Ignite.configuration.getCacheConfiguration() returns all caches.

I can later call Ignite.addCacheConfiguration(). This will add a template
to that list.
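
A minimal sketch of that understanding, using the standard Ignite Java API. The template name and cache name are hypothetical, and the wildcard-matching behavior is my reading of the "*" convention asked about in question 2 below, not something I have verified against a running cluster:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;

public class TemplateSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        // Register a template: by convention the name ends with '*'.
        CacheConfiguration<Object, Object> tpl = new CacheConfiguration<>("myTemplate*");
        tpl.setBackups(1);
        ignite.addCacheConfiguration(tpl);

        // A cache whose name matches the wildcard should pick up the template's settings.
        ignite.getOrCreateCache("myTemplate-orders");
    }
}
```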

Questions:
1) What happens if there are entries with duplicate names in
IgniteConfiguration.setCacheConfiguration() when this is used to create a
grid?
2) One e-mail mentioned a convention where templates have "*" in their name?
3) What happens if addCacheConfiguration() tries to add a duplicate name?
4) Is a template simply a cache that is not fully instantiated?
5) What about template persistence? Are templates persisted if they specify a
region that is persistent?
6) My use case is that I want to create caches based on some default for the
cluster, so in Java I would like to construct the new configuration from
a template of a known name. So far, I can only see that I can call
Ignite.configuration().getCacheConfiguration() and then search the array for
a matching name. Is there a better way?



Re: ScanQuery predicate serialization

2018-08-24 Thread vkulichenko
Serialization of the filter must happen, because it is invoked on server nodes
and therefore needs to be instantiated there. However, peer class loading
can be used in production. If you see this callout in the documentation,
you're probably looking at an old version of it.
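
For completeness, peer class loading is switched on in the node configuration; it must be set the same way on all nodes. A minimal config sketch:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PeerClassLoadingConfig {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setPeerClassLoadingEnabled(true); // same setting on servers and clients
        Ignite ignite = Ignition.start(cfg);
    }
}
```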

-Val





Ignite cluster hangs with GridCachePartitionExchangeManager

2018-08-24 Thread wangsan
Now my cluster topology is nodes a, b, c, d, all with persistence enabled and
peerClassLoading false. Nodes b, c, d have a different class (cache b) from a.
1. When any node crashes with an OOM (memory or stack), all nodes hang with
"Still waiting for initial partition map exchange".
2. When a starts first and b, c, d start concurrently from multiple threads,
b, c, d hang with "Still waiting for initial partition map exchange", and a
hangs with "Unable to await partitions release latch".







Re: Thin client vs client node performance in Spark

2018-08-24 Thread eugene miretsky
So I decreased the number of Spark executors to 2, and the problem went
away.
However, what's the general guideline about the number of nodes/clients that
can write to the cluster at the same time?
1) How does one increase write throughput without increasing the number of
clients (the server nodes are underutilized at the moment)?
2) We have use cases where we may have many clients writing from different
sources.

Cheers,
Eugene


Re: Thin client vs client node performance in Spark

2018-08-24 Thread eugene miretsky
Attached is the error I get from ignitevisorcmd.sh after calling the cache
command (the command just hangs).
To me it looks like all the Spark executors (10 in my test) start a new
client node, and some of those nodes get terminated and restarted as the
executors die. This seems to really confuse Ignite.

[15:45:10,741][INFO][grid-nio-worker-tcp-comm-0-#23%console%][TcpCommunicationSpi]
Established outgoing communication connection [locAddr=/127.0.0.1:40984,
rmtAddr=/127.0.0.1:47101]

[15:45:10,741][INFO][grid-nio-worker-tcp-comm-1-#24%console%][TcpCommunicationSpi]
Established outgoing communication connection [locAddr=/127.0.0.1:49872,
rmtAddr=/127.0.0.1:47100]

[15:45:10,742][INFO][grid-nio-worker-tcp-comm-3-#26%console%][TcpCommunicationSpi]
Established outgoing communication connection [locAddr=/127.0.0.1:40988,
rmtAddr=/127.0.0.1:47101]

[15:45:10,743][INFO][grid-nio-worker-tcp-comm-1-#24%console%][TcpCommunicationSpi]
Accepted incoming communication connection [locAddr=/127.0.0.1:47101,
rmtAddr=/127.0.0.1:40992]

[15:45:10,745][INFO][grid-nio-worker-tcp-comm-0-#23%console%][TcpCommunicationSpi]
Established outgoing communication connection [locAddr=/127.0.0.1:49876,
rmtAddr=/127.0.0.1:47100]

[15:45:11,725][SEVERE][grid-nio-worker-tcp-comm-2-#25%console%][TcpCommunicationSpi]
Failed to process selector key [ses=GridSelectorNioSessionImpl
[worker=DirectNioClientWorker [super=AbstractNioClientWorker [idx=2,
bytesRcvd=180, bytesSent=18, bytesRcvd0=18, bytesSent0=0, select=true,
super=GridWorker [name=grid-nio-worker-tcp-comm-2,
igniteInstanceName=console, finished=false, hashCode=1827979135,
interrupted=false, runner=grid-nio-worker-tcp-comm-2-#25%console%]]],
writeBuf=java.nio.DirectByteBuffer[pos=0 lim=166400 cap=166400],
readBuf=java.nio.DirectByteBuffer[pos=18 lim=18 cap=117948],
inRecovery=null, outRecovery=null, super=GridNioSessionImpl [locAddr=/
172.21.85.37:39942, rmtAddr=ip-172-21-85-213.ap-south-1.compute.internal/
172.21.85.213:47100, createTime=1535125510724, closeTime=0, bytesSent=0,
bytesRcvd=18, bytesSent0=0, bytesRcvd0=18, sndSchedTime=1535125510724,
lastSndTime=1535125510724, lastRcvTime=1535125510724, readsPaused=false,
filterChain=FilterChain[filters=[GridNioCodecFilter
[parser=o.a.i.i.util.nio.GridDirectParser@7ae6182a, directMode=true],
GridConnectionBytesVerifyFilter], accepted=false]]]

java.lang.NullPointerException

at
org.apache.ignite.internal.util.nio.GridNioServer.cancelConnect(GridNioServer.java:885)

at
org.apache.ignite.spi.communication.tcp.internal.TcpCommunicationConnectionCheckFuture$SingleAddressConnectFuture.cancel(TcpCommunicationConnectionCheckFuture.java:338)

at
org.apache.ignite.spi.communication.tcp.internal.TcpCommunicationConnectionCheckFuture$MultipleAddressesConnectFuture.cancelFutures(TcpCommunicationConnectionCheckFuture.java:475)

at
org.apache.ignite.spi.communication.tcp.internal.TcpCommunicationConnectionCheckFuture$MultipleAddressesConnectFuture.receivedAddressStatus(TcpCommunicationConnectionCheckFuture.java:494)

at
org.apache.ignite.spi.communication.tcp.internal.TcpCommunicationConnectionCheckFuture$MultipleAddressesConnectFuture$1.onStatusReceived(TcpCommunicationConnectionCheckFuture.java:433)

at
org.apache.ignite.spi.communication.tcp.internal.TcpCommunicationConnectionCheckFuture$SingleAddressConnectFuture.finish(TcpCommunicationConnectionCheckFuture.java:362)

at
org.apache.ignite.spi.communication.tcp.internal.TcpCommunicationConnectionCheckFuture$SingleAddressConnectFuture.onConnected(TcpCommunicationConnectionCheckFuture.java:348)

at
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2.onMessage(TcpCommunicationSpi.java:773)

at
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2.onMessage(TcpCommunicationSpi.java:383)

at
org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)

at
org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)

at
org.apache.ignite.internal.util.nio.GridNioCodecFilter.onMessageReceived(GridNioCodecFilter.java:117)

at
org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)

at
org.apache.ignite.internal.util.nio.GridConnectionBytesVerifyFilter.onMessageReceived(GridConnectionBytesVerifyFilter.java:88)

at
org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)

at
org.apache.ignite.internal.util.nio.GridNioServer$HeadFilter.onMessageReceived(GridNioServer.java:3490)



ScanQuery predicate serialization

2018-08-24 Thread route99
Hello,

Is there a way to execute a ScanQuery without the predicate being
serialized?
Otherwise it requires the class that implements the IgniteBiPredicate
interface, or the class where
the scan query is executed (with a lambda passed in as the predicate), to be
on the classpath
of the Ignite node.
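
As a concrete illustration of the situation being described (the cache name and value type are hypothetical; this is a sketch, not a tested program):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ScanQuery;
import org.apache.ignite.lang.IgniteBiPredicate;

public class ScanQueryExample {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        IgniteCache<Integer, String> cache = ignite.getOrCreateCache("example");

        // The filter is serialized and evaluated on the server nodes that own
        // the data, so its class (for a lambda, the enclosing class) must be
        // resolvable there: on the server classpath or via peer class loading.
        IgniteBiPredicate<Integer, String> filter = (k, v) -> v.startsWith("A");

        cache.query(new ScanQuery<>(filter)).getAll();
    }
}
```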

From the documentation/forum I have seen only two ways to avoid
serialization/deserialization problems:
1. enabling peer class loading (which the documentation says is not
recommended for production)
2. putting a jar with the required classes in the lib folder

Either way, it would mean some additional effort at deployment time.
Did I miss something, am I doing something the wrong way, or is there maybe
another approach?

Thank you.





Re: Thin client vs client node performance in Spark

2018-08-24 Thread eugene miretsky
 Thanks,

So the way I understand it, a thick client will use the affinity key to send
data to the right node, and hence will split the traffic between all the
nodes, while a thin client will just send the data to one node, and that node
will be responsible for forwarding it to the node that actually owns the
'shard'?

I keep getting the following error when using the Spark driver; the driver
keeps writing, but very slowly. Any idea what is causing the error, or how
to fix it?

Cheers,
Eugene

"

[15:04:58,030][SEVERE][data-streamer-stripe-10-#43%Server%][DataStreamProcessor]
Failed to respond to node [nodeId=78af5d88-cbfa-4529-aaee-ff4982985cdf,
res=DataStreamerResponse [reqId=192, forceLocDep=true]]

class org.apache.ignite.IgniteCheckedException: Failed to send message
(node may have left the grid or TCP connection cannot be established due to
firewall issues) [node=ZookeeperClusterNode
[id=78af5d88-cbfa-4529-aaee-ff4982985cdf, addrs=[127.0.0.1], order=377,
loc=false, client=true], topic=T1 [topic=TOPIC_DATASTREAM,
id=b8d675c6561-78af5d88-cbfa-4529-aaee-ff4982985cdf],
msg=DataStreamerResponse [reqId=192, forceLocDep=true], policy=9]

at
org.apache.ignite.internal.managers.communication.GridIoManager.send(GridIoManager.java:1651)

at
org.apache.ignite.internal.managers.communication.GridIoManager.sendToCustomTopic(GridIoManager.java:1703)

at
org.apache.ignite.internal.managers.communication.GridIoManager.sendToCustomTopic(GridIoManager.java:1673)

at
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.sendResponse(DataStreamProcessor.java:440)

at
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.localUpdate(DataStreamProcessor.java:402)

at
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.processRequest(DataStreamProcessor.java:305)

at
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.access$000(DataStreamProcessor.java:60)

at
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor$1.onMessage(DataStreamProcessor.java:90)

at
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1556)

at
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1184)

at
org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:125)

at
org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1091)

at
org.apache.ignite.internal.util.StripedExecutor$Stripe.run(StripedExecutor.java:511)

at java.lang.Thread.run(Thread.java:748)

Caused by: class org.apache.ignite.spi.IgniteSpiException: Failed to send
message to remote node: ZookeeperClusterNode
[id=78af5d88-cbfa-4529-aaee-ff4982985cdf, addrs=[127.0.0.1], order=377,
loc=false, client=true]

at
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.sendMessage0(TcpCommunicationSpi.java:2718)

at
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.sendMessage(TcpCommunicationSpi.java:2651)

at
org.apache.ignite.internal.managers.communication.GridIoManager.send(GridIoManager.java:1643)

... 13 more

Caused by: class org.apache.ignite.IgniteCheckedException: Failed to
connect to node (is node still alive?). Make sure that each ComputeTask and
cache Transaction has a timeout set in order to prevent parties from
waiting forever in case of network issues
[nodeId=78af5d88-cbfa-4529-aaee-ff4982985cdf, addrs=[/127.0.0.1:47101]]

at
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:3422)

at
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createNioClient(TcpCommunicationSpi.java:2958)

at
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.reserveClient(TcpCommunicationSpi.java:2841)

at
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.sendMessage0(TcpCommunicationSpi.java:2692)

... 15 more

Suppressed: class org.apache.ignite.IgniteCheckedException: Failed
to connect to address [addr=/127.0.0.1:47101, err=Connection refused]

at
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:3425)

... 18 more

Caused by: java.net.ConnectException: Connection refused

at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)

at
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)

at sun.nio.ch.SocketAdaptor.connect(SocketAdaptor.java:111)

at
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:3262)

... 18 more

"

On Tue, Aug 14, 2018 at 4:39 PM, akurbanov  wrote:

> Hi,
>
> Spark integration was implemented before java thin c

Re: Possible issue with Web Console

2018-08-24 Thread Stanislav Lukyanov
This is the commit https://github.com/apache/ignite/commit/bab61f1.
Fixed in 2.7.

Stan





Re: Recommended HW on AWS EC2 - vertical vs horizontal scaling

2018-08-24 Thread eugene miretsky
Thanks Andrei,

For our use case, please see my email ("Data modeling for segmenting a huge
data set: precomputing vs real time computations").

I think our main confusion right now is trying to understand how exactly
SQL queries work (when memory is moved to heap, when/how H2 is used, how
the reduce step is performed, etc.). Because of that we don't really
understand when data is moved between heap, off-heap, and disk, and hence
have a hard time sizing it properly (I have crashed Ignite many times during
testing). There is already a similar email thread ("How much heap to
allocate").

We also cannot get some of our OLAP queries to execute in parallel (see
"Slow SQL query uses only a single CPU"), which again makes it harder to
size the HW (no point using huge instances if only a single CPU is going to
be used per query).

Cheers,
Eugene





Re: How much heap to allocate

2018-08-24 Thread eugene miretsky
Thanks!

I am trying to understand when and how data is moved from off-heap to
on-heap, particularly when using SQL. I took a look at the wiki
but still have a few questions.

My understanding is that data is always stored off-heap.

1) In what format is data stored off-heap?
2) What happens when a SQL query is executed? In particular:

   - How is H2 used? How is data loaded into H2? What if some of the data is
   on disk?
   - When is data loaded into heap, and how much? Is only the output of H2
   loaded, or everything?
   - How is the reduce stage performed? Is it performed on only one node
   (hence that node needs to load all the data into memory)?

3) What happens when Ignite runs out of memory during execution? Is data
evicted to disk (if persistence is enabled)?
4) Based on the code, it looks like I need to set my data region size to at
most 50% of available memory (to avoid the warning), which seems a bit
wasteful.
5) Do you have any general advice on benchmarking the memory requirement?
So far I have not been able to find a way to check how much memory each
table takes on and off heap, and how much memory each query takes.

Cheers,
Eugene



Re: How much heap to allocate

2018-08-24 Thread NSAmelchev
Hi Eugene,

Yes, it's a misprint, as Dmitry wrote.

Ignite prints this warning if the nodes on the local machine require more
than 80% of physical RAM.

From the code, you can see that heap/off-heap memory is summed across the
nodes that share the same MAC address; this is how the total memory used by
the local machine is calculated.

-- 
Best wishes,
Amelchev Nikita





Re: CTE and Recursive CTE support in SQL

2018-08-24 Thread Ilya Kasnacheev
Hello!

Try it and see. With regard to recursive CTEs, I think the development
priority is MVCC for now.

As for H2 features, there might be further limitations on them in Apache
Ignite.

Regards,

-- 
Ilya Kasnacheev

2018-08-24 10:34 GMT+03:00 piyush :

> H2 has limited support for Recursive CTE.
>
> http://h2database.com/html/advanced.html#recursive_queries
>
> Does Ignite supports all SQL features which H2 has ?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: How much heap to allocate

2018-08-24 Thread Dmitriy Pavlov
Hi Eugene,

The misprint was corrected in https://issues.apache.org/jira/browse/IGNITE-7824 and
the fix will be released as part of 2.7.

Even if you don't have on-heap caches, entries are still unmarshalled from
off-heap to on-heap during reads. So the right choice of -Xmx depends heavily
on the particular application scenario.

Sincerely,
Dmitriy Pavlov

Wed, Aug 22, 2018 at 21:30, eugene miretsky :

> Hi,
>
> I am getting the following warning when starting Ignite - "
>
> Nodes started on local machine require more than 20% of physical RAM what
> can lead to significant slowdown due to swapping
> "
>
> The 20% is a typo in version 2.5, it should be 80%.
>
> We have increased the max size of the default region to 70% of the
> available memory on the instance (since that's the only region we use at
> the moment).
>
> From reading the code
> 
>  that
> generates the error, it seems like
> 1) Ignite adds all the memory across all nodes to check if it is above the
> safeToUse threshold. I would expect the check to be done per node
> 2) totalOffheap seems to be the sum of the maxSizes of all regions, and
> totalHeap retrieved from the JVM configs. ingnite.sh sets  -Xmx200g.
>
> Assuming we are not enabling on-heap caching, what should we set the heap
> size to?
>
> Cheers,
> Eugene
>


Re: Recommended HW on AWS EC2 - vertical vs horizontal scaling

2018-08-24 Thread aealexsandrov
Hi,

Ignite doesn't have such benchmarks because they are very specific
to each case and setup.

However, several common tips exist:

1) If you use EBS, try to avoid NVMe. It is fast, but it doesn't seem to
provide durability guarantees for your data; we have seen corruption of the
working directory on this type of device.
2) To get the best performance, you should have enough RAM to store your
data in Ignite off-heap.
3) Volume Type - EBS Provisioned IOPS SSD (io1)

I suggest using x1 or x1e instances from the
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/memory-optimized-instances.html
list.

Your choice will depend on your case and expectations. But for example:

x1e.32xlarge
EBS = io1
2 disks of 2 TB each

This provides the capability to store your data in memory on one
node, and the disk speed will be around 14000 MB/sec.

Is it possible to describe your case in more detail?

BR,
Andrei





Re: Affinity key in SQL execution

2018-08-24 Thread Prasad Bhalerao
Can someone please reply to this?

On Thu, Aug 23, 2018, 8:38 PM Prasad Bhalerao 
wrote:

> The problem is resolved now. The index type setting in the cache configuration
> was incorrect. After setting DefaultDataAffinityKey.class as the index type in
> the cache config, the query executed successfully.
>
> The new SQL is as follows. I have a few questions about it.
>
> 1) The execution plan is showing 2 SQL statements. What does the second one
> (highlighted) indicate? What is merge_scan?
>
> 2) The SQL is using the index on unitId but it is not using any index for
> the condition (AG__Z0.USERID = UAD__Z1.USERID).
> I tried to use index hints: USE INDEX(user_account_idx2,
> asset_group_idx2) but I got the error Index "USER_ACCOUNT_IDX2" not found.
> Index user_account_idx2 is present in the UserAccountData class.
>
> How can I make use of the index created on userId and unitId in this case?
>
> 3) Can I create an index on affinityId?
>
> 4) After making the affinityId change, how can I check that the SQL goes to a
> single node only?
>
> 5) Do I need to include an extra condition "ag.affinityId = uad.affinityId"
> in the JOIN ON clause?
>
> 6) If I am using subqueries or join queries, is it necessary to write the
> affinity key condition in each cache's WHERE clause?
>
>
>
> SELECT ag.assetGroupId,
>   ag.name
> FROM AssetGroupData ag
> JOIN USER_ACCOUNT_CACHE.UserAccountData uad
> ON (ag.userId   = uad.userId)
> WHERE ag.affinityId = ?
> AND uad.unitId  = ?
> AND uad.userRole= 1
>
> Execution Plan:
>
> SELECT
> AG__Z0.ASSETGROUPID AS __C0_0,
> AG__Z0.NAME  AS __C0_1
> FROM USER_ACCOUNT_CACHE.USERACCOUNTDATA UAD__Z1
> /* *USER_ACCOUNT_CACHE.USER_ACCOUNT_IDX2: UNITID = ?2* */
> /* WHERE (UAD__Z1.USERROLE = 83)
> AND (UAD__Z1.UNITID = ?2)
> */
> INNER JOIN ASSET_GROUP_CACHE.ASSETGROUPDATA AG__Z0
> /* ASSET_GROUP_CACHE.AFFINITY_KEY: AFFINITYID = ?1 */
> ON 1=1
> WHERE (AG__Z0.USERID = UAD__Z1.USERID)
> AND ((UAD__Z1.USERROLE = 83)
> AND ((AG__Z0.AFFINITYID = ?1)
> AND (UAD__Z1.UNITID = ?2)))
>
> SELECT
> __C0_0 AS ASSETGROUPID,
> __C0_1 AS NAME
> FROM PUBLIC.__T0
> /* ASSET_GROUP_CACHE."merge_scan" */
>
>
>
> Thanks,
> Prasad
>
>>
>> On Thu, Aug 23, 2018 at 2:31 PM Prasad Bhalerao <
>> prasadbhalerao1...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> I tried your suggestion, but I am getting a query parsing error now. I am
>>> attaching my data and data key classes in this mail. Could you please help
>>> me out?
>>>
>>> SQL :  select assetGroupId, name from AssetGroupData where affinityId =
>>> ?
>>>
>>> *Exception:*
>>> javax.cache.CacheException: Failed to parse query. Column "AFFINITYID"
>>> not found; SQL statement:
>>> explain select assetGroupId, name from AssetGroupData where affinityId =
>>> ?  [42122-196]
>>>  at
>>> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:676)
>>>  at
>>> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:615)
>>>  at
>>> org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.query(GatewayProtectedCacheProxy.java:356)
>>>
>>> Test code I am using to push data to cache:
>>>
>>> private void pushData(String cacheName, List<Data> datas) {
>>>
>>>   final IgniteCache<Object, Data> cache = ignite.cache(cacheName);
>>>   for (Data data : datas) {
>>>     cache.put(data.getKey(), data);
>>>   }
>>> }
>>>
>>>
>>>
>>>
>>> Thanks,
>>> Prasad
>>>
>>> On Wed, Aug 22, 2018 at 10:18 PM Prasad Bhalerao <
>>> prasadbhalerao1...@gmail.com> wrote:
>>>
 Ok, I tried to write a generic implementation to use the same key class with
 different caches. That's why I kept the name affinityId. The reason I am not
 getting an error is that I have the subscriptionId in the Data (value) class
 as well.

 So it means the affinity key field name matters. I was trying to map the
 affinity column name "subscriptionId" to the field "affinityId" without
 keeping the field names the same. I was looking in the wrong direction.

 Thanks,
 Prasad




 On Wed, Aug 22, 2018, 9:29 PM vkulichenko <
 valentin.kuliche...@gmail.com> wrote:

> Prasad,
>
> In this case, using subscriptionId in the query would be a syntax error,
> because the name of the field is affinityId. If you use affinityId, however,
> Ignite will route the query to a single node. It knows that it's the affinity
> key based on the @AffinityKeyMapped annotation.
>
> -Val
>
>
>
>


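To illustrate Val's point about @AffinityKeyMapped (and questions 3 and 4 in the thread), a key class can both declare the affinity field and index it. This is only a sketch, assuming Ignite 2.x with ignite-core on the classpath; the class and field names follow the ones used in the thread:

```java
import org.apache.ignite.cache.affinity.AffinityKeyMapped;
import org.apache.ignite.cache.query.annotations.QuerySqlField;

public class DefaultDataAffinityKey {
    /** Primary id of the entry. */
    @QuerySqlField(index = true)
    private long id;

    /**
     * The field name ("affinityId") is what SQL sees. @AffinityKeyMapped makes
     * Ignite collocate entries and route queries filtered on this column to a
     * single node; @QuerySqlField(index = true) additionally creates an SQL
     * index on it.
     */
    @AffinityKeyMapped
    @QuerySqlField(index = true)
    private long affinityId;
}
```

Whether a query filtered on the affinity column is using the affinity key can then be checked with EXPLAIN, as in the plan earlier in the thread (the `AFFINITY_KEY: AFFINITYID = ?1` line).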

Re: Data index loss problem

2018-08-24 Thread Stanislav Lukyanov
What's your version?
Do you use native persistence?

Stan





Re: Unix timestamp conversion problem

2018-08-24 Thread ilya.kasnacheev
Hello!

Can you share a reproducer for this problem?

Regards,





Re: CTE and Recursive CTE support in SQL

2018-08-24 Thread piyush
H2 has limited support for Recursive CTE.

http://h2database.com/html/advanced.html#recursive_queries

Does Ignite support all the SQL features that H2 has?
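For reference, the recursive form that H2 documents looks like the sketch below; the table and column names are made up for illustration, and (per this thread) Ignite SQL may not accept it even though H2 does:

```sql
WITH RECURSIVE t(n) AS (
    SELECT 1          -- anchor member
    UNION ALL
    SELECT n + 1      -- recursive member, re-evaluated against t
    FROM t
    WHERE n < 10
)
SELECT n FROM t;      -- 1 through 10
```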





Re: CTE and Recursive CTE support in SQL

2018-08-24 Thread piyush
When is Recursive CTE support planned?

Recursive CTE is super powerful.

https://www.postgresql.org/docs/current/static/queries-with.html

https://docs.microsoft.com/en-us/sql/t-sql/queries/with-common-table-expression-transact-sql?view=sql-server-2017

 


