On Mon, Sep 11, 2017 at 6:54 PM, aa...@tophold.com
wrote:
> Thanks Alexey! What we really want: we deploy a service on each cache node.
> Those services will use data from their local cache.
>
> Clients will call those remote services; a client should only call the
> service on
Hello...
I have created a cache and am querying it via the REST API with the URL below:
http://xx.xx.xx.xx:
/ignite?cmd=qryexe=10=%22xyz-1%22qry=serial_number+%3D+%3F
Response is as below
{
"successStatus": 1,
"error": "Ouch! Argument cannot be null: sql",
"sessionToken": null,
Thanks Alexey! What we really want: we deploy a service on each cache node.
Those services will use data from their local cache.
Clients will call those remote services; a client should only call the service on
the primary node, which makes those nodes work in master-slave mode automatically.
Not
I updated the version and so far I have not faced the issue, so it looks like
the issue has been fixed in 2.1. But we are still trying to figure out why
segmentation happens so frequently, and we would like to continue the
discussion to resolve the cause of the segmentation.
We are using Ignite for
Hi Edward,
I can't reproduce the problem, could you please share a working code snippet
that will show the problem?
Thanks,
Mikhail.
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Hi,
Please see the example in doc:
https://apacheignite.readme.io/docs/miscellaneous-features#section-custom-sql-functions
you can only apply custom functions to fields, that is it; you can't just
call arbitrary code with a CALL statement.
Thanks,
Mike.
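To illustrate the point above: a custom SQL function in Ignite is just a public static method. This is a minimal sketch with made-up names; the @QuerySqlFunction annotation and the registration call are shown only in comments so the sketch compiles without ignite-core on the classpath:

```java
public class SqlFunctions {
    // In a real Ignite setup this method would carry the @QuerySqlFunction
    // annotation, and the class would be registered via
    // CacheConfiguration.setSqlFunctionClasses(SqlFunctions.class).
    // Both are omitted here so the sketch compiles without ignite-core.
    public static double sqr(double x) {
        return x * x;
    }

    public static void main(String[] args) {
        // Once registered, the function can be applied to fields in SQL,
        // e.g.: SELECT sqr(price) FROM Trade
        System.out.println(sqr(3.0)); // prints 9.0
    }
}
```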
Hi Ender,
I would appreciate it if you could share a reproducer with us, so I can file
a ticket for this problem.
Thanks,
Mikhail.
Hi,
With the new dependencies, do you still get the same exception? If so, could
you please share a pom-based project that shows the problem?
Thanks,
Mikhail.
Hi.
I have a server node that creates an IgniteQueue:
ignite.queue("queueName", 0, new CollectionConfiguration());
A client node then tries to obtain the queue:
IgniteConfiguration.setClientMode(true);
IgniteQueue queueOnClientNode = ignite.queue("queueName", 0, null);
queueOnClientNode is always
That's exactly what I am talking about. You have col10 after col4; because of
this you are inserting nulls into col5.
Hi Kenan,
After col4 you have col10 instead of col5; I think this is the problem.
Thanks,
Mikhail.
On Mon, Sep 11, 2017 at 7:52 PM, Kenan Dalley wrote:
> I provided the correct classes above and the CQL statements are 100%
> correct.
> Please see below to know how the data
I provided the correct classes above, and the CQL statements are 100% correct.
Please see below to see how the data relates to each column. It looks like
you're skipping over one of the first five columns when reading through, which
is why you see a null starting at col5 rather than starting at col6.
Hm... Did you provide the right source code for the classes and CQL statements?
Here is the sample INSERT statement you previously provided:
INSERT INTO test_response
(col1,col2,col3,col4,col10,col5,col6,col7,col8,col9) VALUES
Hi,
I tried to reproduce the issue on 'master' and it seems it was resolved by
IGNITE-5843 & IGNITE-5075.
Thanks!
That would make sense if col5 were null, but it's only col6 & col7 that are
null, and they are both String columns which allow nulls. In addition, if
col5 were null, then the cache wouldn't load upon doing a manual load, which
it does. Also, just in case, I went ahead and changed the "long" types
Hi Mikhail,
The same problem still exists in Ignite ver. 2.1.0#20170720-sha1:a6ca5c8a.
As a workaround, I manually clear the cache by iterating over and deleting the
entries before destroying the cache:
IgniteCache
Hi,
Unless you customize the affinity function, there are always 1024 partitions,
and every node holds one or more primary partitions, evenly distributed among
the nodes. The affinity function tries to minimize re-partitioning as the
topology changes. Just want to confirm - are you really looking for an
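For intuition on the even distribution mentioned above, here is a simplified sketch of how a key maps to one of the 1024 default partitions. This mirrors only the key-to-partition step of Ignite's default affinity function; the partition-to-node assignment uses rendezvous hashing and is not shown, and the class and key values are illustrative:

```java
public class PartitionSketch {
    // Default partition count of Ignite's default affinity function.
    static final int PARTS = 1024;

    // Simplified key -> partition mapping: hash the key and take the
    // remainder, guarding against negative hash codes.
    static int partition(Object key) {
        int r = key.hashCode() % PARTS;
        return r < 0 ? -r : r;
    }

    public static void main(String[] args) {
        System.out.println(partition("serial-42"));
        System.out.println(partition(12345)); // 12345 % 1024 = 57
    }
}
```

Because keys hash roughly uniformly, the 1024 partitions (and hence the primary copies) spread evenly across the nodes as the message describes.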
I don't think an EntryProcessor will work, because the anonymous class will
not be present on the Ignite server node's classpath; our aim is to avoid
deploying classes to the server's classpath.
On Sep 11, 2017 4:30 PM, "Andrey Mashenkov"
wrote:
> I've created a ticket for this [1].
> As
Hi All,
When the cluster topology changes, it is possible that a backup node becomes
the primary. Is there any notification to tell this node that it is the
primary now, if my cache is in REPLICATED mode?
Maybe other parts of the logic need to do some preparation related to the
business.
Regards
Aaron
Hi
In the Ignite docs it is stated that Ignite SQL queries are internally
converted to cache operations. What cache operations are performed when an
update query is executed on a row? Do the cache operations to be performed
depend on the type of persistent storage used, for example if Cassandra is
Hi Chandrika,
I would use ComputeTaskSession and ComputeTaskSessionAttributeListener to
achieve that:
- Inject a ComputeTaskSession into your task and/or jobs like
this: @TaskSessionResource private ComputeTaskSession taskSes;
You also need to annotate your task with
hi @yakov
yakov wrote
> Yes, however, you can still return results from each job and use it.
> Please
> see javadoc for org.apache.ignite.compute.ComputeJobResult#getData
yes, it's good to have such an opportunity, at least at the "result" step.
But I'm still very curious why the overhead is so big
I've created a ticket for this [1].
As a workaround, you can try to use cache.invoke() with your own comparison
implementation inside an EntryProcessor.
Unfortunately, there is no release date filled in on the Apache Ignite
releases page [2].
Usually, a new Ignite release becomes available twice a year.
[1]
Hello All,
we have a parent task which is comprised of three job siblings, and we need
to handle dependent tasks.
So I was looking for listeners for job siblings and parent tasks to handle
the dependencies. Please let me know if there are any event listeners like
JobWasExecuted or JobToBeExecuted for
Hi Rishikesh,
Is it possible to create another Kafka stream based on Curr_stream1 &
Curr_stream2?
In that case, you will be able to stream (Curr_stream1.f0 - Curr_stream2.f0)
into a new Ignite cache and use a continuous query.
In any case, it would be great if you could share your solution with the