I have another question. Could we create an RDD in Spark separately from
Ignite, or must we create the RDD from Ignite and share it with Spark? In
other words, is it possible to create Spark and Ignite RDDs independently?
On Wednesday, December 26, 2018, aealexsandrov wrote:
> Hi,
>
> The main difference
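Not from the thread, but for context: an IgniteRDD is just another RDD, so it can coexist with plain Spark RDDs created without Ignite. A hedged sketch using the Java API of the ignite-spark module (the cache name "myCache" and the local master are assumptions, and this needs a running cluster, so treat it as illustration only):

```java
import java.util.Arrays;

import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spark.JavaIgniteContext;
import org.apache.ignite.spark.JavaIgniteRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

class IndependentRdds {
    public static void main(String[] args) {
        // A plain Spark RDD, created with no Ignite involvement at all.
        JavaSparkContext sc = new JavaSparkContext("local[*]", "demo");
        JavaRDD<Integer> plainRdd = sc.parallelize(Arrays.asList(1, 2, 3));

        // An IgniteRDD view over an Ignite cache ("myCache" is an assumed
        // name); it does not depend on plainRdd in any way.
        JavaIgniteContext<Integer, Integer> ic =
            new JavaIgniteContext<>(sc, IgniteConfiguration::new);
        JavaIgniteRDD<Integer, Integer> igniteRdd = ic.fromCache("myCache");
    }
}
```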
Or is it necessary to use the OS to enforce this? For example, using ulimit -d.
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
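To illustrate the OS-level approach mentioned above: ulimit -d caps the per-process data segment, and applying it in a subshell keeps the parent shell unaffected. The 4 GB figure here is just an assumed example, not a recommendation:

```shell
# Show the current per-process data segment limit (in KB, or "unlimited").
ulimit -d

# Lower the limit inside a subshell so the parent shell is unaffected,
# then print the new value to confirm (4194304 KB = 4 GB, assumed figure).
( ulimit -d 4194304 && ulimit -d )
```

Note that ulimit sets a soft limit: it can be lowered freely but raised again only up to the hard limit.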
I see.
Anyway, try the new image; it is built more properly: it contains published
files instead of source code, and includes only the runtime (not the
SDK/JDK).
Thanks,
Pavel
On Fri, Jan 4, 2019 at 7:13 PM F.D. wrote:
> Yes, I agree with you. The proxy is needed because a firewall is present
> in my office.
Thank you - I will look at this example and write back what I find
-----Original Message-----
From: akurbanov
Sent: Friday, January 4, 2019 10:56 AM
To: user@ignite.apache.org
Subject: Re: sql table names and annotations
As far as I know, there is currently no way to set a custom table name using
the annotation-based approach.
Currently it is possible to do this using the QueryEntity-based approach;
take a look at this example, which shows how to create table metadata using
QueryEntities in CacheConfiguration:
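As an illustration of the QueryEntity-based approach (not the linked example; the cache name, value class, and field are assumptions), a minimal Spring XML sketch where tableName overrides the SQL table name that would otherwise be derived from the value type:

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
  <property name="name" value="personCache"/>
  <property name="queryEntities">
    <list>
      <bean class="org.apache.ignite.cache.QueryEntity">
        <property name="keyType" value="java.lang.Long"/>
        <property name="valueType" value="com.example.Person"/>
        <!-- Custom SQL table name, which annotations cannot express. -->
        <property name="tableName" value="PERSON"/>
        <property name="fields">
          <map>
            <entry key="name" value="java.lang.String"/>
          </map>
        </property>
      </bean>
    </list>
  </property>
</bean>
```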
Yes, I agree with you. The proxy is needed because a firewall is present in
my office. But from my host I can use NuGet without problems; I got errors
only in Docker.
Thanks,
F.D.
On Fri, Jan 4, 2019 at 4:30 PM Pavel Tupitsyn wrote:
> Looks like you have some network issues and NuGet
Denis, All
First of all best wishes for the New Year.
We understand the limitations concerning data that is stored in caches.
But I feel it's quite limiting to apply these restrictions to runnables as
well; it really makes dealing with changing code in runnables very
annoying. And it prevents 2
Looks like you have some network issues, and the NuGet repository cannot be
accessed.
Can you describe your environment? Why is the proxy needed?
Also I've built and pushed Ignite.NET docker image to my personal hub,
maybe this helps?
*docker run ptupitsyn/ignite:ignite-net*
Source code:
Can someone please help me with this?
On Thu 3 Jan, 2019, 7:15 PM Prasad Bhalerao wrote:
> Hi
>
> After upgrading to version 2.7 I am getting the following exception. I am
> executing a SELECT SQL statement inside an optimistic transaction with
> SERIALIZABLE isolation level.
>
> 1) Has anything changed from 2.6 to 2.7
Hi Prashant,
Exactly when are you getting the error you have attached? Are you getting
this error while starting the Kafka standalone connector?
I guess you may have to do the following:
Copy the following files from the $IGNITE_HOME/optional/ignite-kafka
directory to the $KAFKA_HOME/libs directory.
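The copy step above can be sketched as a one-liner. The install paths are assumptions (adjust to your environment), and note that in recent Ignite binary distributions the optional modules sit under libs/optional:

```shell
# Assumed install locations; adjust to your installation.
IGNITE_HOME="${IGNITE_HOME:-/opt/apache-ignite}"
KAFKA_HOME="${KAFKA_HOME:-/opt/kafka}"

# Put the Ignite Kafka connector jars on Kafka's classpath.
cp "$IGNITE_HOME"/libs/optional/ignite-kafka/*.jar "$KAFKA_HOME/libs/" \
  || echo "ignite-kafka jars not found under $IGNITE_HOME/libs/optional" >&2
```

Restart the Kafka Connect worker afterwards so it picks up the new jars.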
Thanks for everyone's feedback regarding the capacity planning question. My
main objective here is to size my servers accordingly and keep the related
data on the same server, as much as possible. It seems a custom affinity
function that takes into account the server classes (defined based on
You’re right, data needs to be loaded into Ignite before you can use its more
efficient SQL engine from Spark.
You can certainly load the data in using Spark as you describe. That's
probably the easiest, least-code way of doing it. If there's a lot of it, it
may be more efficient to load the
Regarding your question on capacity planning:
I'm not sure we have any workaround to get the data equally distributed,
fulfill your requirement technically, and do the sizing. But non-technically,
you can change your design to include state as part of your affinity key
along with the
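To make the affinity-key idea concrete: in Ignite you would mark the collocating field with @AffinityKeyMapped (from ignite-core), so partition assignment is derived from that field alone. The plain-Java sketch below (all names hypothetical, no Ignite dependency) only illustrates the effect — every key with the same state lands in the same partition:

```java
// Hypothetical composite key that collocates entries by 'state'. With
// Ignite you would annotate 'state' with @AffinityKeyMapped; here a plain
// partition function mimics the idea for illustration.
class CustomerKey {
    final long id;
    final String state; // affinity field: drives partition assignment

    CustomerKey(long id, String state) {
        this.id = id;
        this.state = state;
    }

    // Partition derived from the affinity field only, so all keys sharing
    // a state map to the same partition (and hence the same node).
    int partition(int partitions) {
        return Math.floorMod(state.hashCode(), partitions);
    }

    public static void main(String[] args) {
        CustomerKey a = new CustomerKey(1L, "CA");
        CustomerKey b = new CustomerKey(2L, "CA");
        // Same state => same partition, regardless of id.
        System.out.println(a.partition(1024) == b.partition(1024)); // true
    }
}
```

The trade-off, as noted above, is that keying by state can skew the distribution if some states hold much more data than others.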