Re: Request handler for HTTP call for getting the topology

2023-07-25 Thread Gianluca Bonetti
Hello

Please refer to the latest version; 2.9.0 is pretty old.

Have a look at the section about security for the REST APIs on the same
documentation page.
https://ignite.apache.org/docs/2.15.0/restapi#security
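
As a quick illustration, on recent versions the topology command is a plain
HTTP GET against the REST port (8080 by default when the ignite-rest-http
module is on the classpath); host, port and the flag values below are just
placeholders, a minimal sketch:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TopologyRestExample {
    public static void main(String[] args) throws Exception {
        // Host/port are placeholders; 8080 is the default REST (Jetty) port.
        String url = "http://127.0.0.1:8080/ignite?cmd=top&attr=false&mtr=false";

        HttpResponse<String> resp = HttpClient.newHttpClient()
            .send(HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString());

        // The response is a JSON envelope with "successStatus" and a "response"
        // array describing the nodes currently in the topology.
        System.out.println(resp.body());
    }
}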

You may have problems finding support from the community with a version this
old. If you need to stay on an older release, you'd better plan for commercial
support from GridGain (I am not affiliated with the company).

Cheers
Gianluca

On Tue, 25 Jul 2023 at 18:38, Amitkumar Maheshwari <
amitkumar.maheshw...@automationanywhere.com> wrote:

> I have been looking into the document
> https://ignite.apache.org/docs/2.9.0/restapi#topology
>
>
>
> From this, I understand that there is an endpoint to get topology-related
> information out of Ignite.
>
>
>
> While looking for the relevant code, I could only find one test case, in
> “JettyRestProcessorSignedSelfTest”.
>
> But that test only checks “Authorized” vs. “Unauthorized” requests.
>
>
>
> I need to understand what is expected from this endpoint, and also where
> the associated request handler is implemented.
>
>
>
> Thanks and Regards,
>
> *Amitkumar Maheshwari* | Sr. Software Engineer
>
> www.automationanywhere.com/in
>
> Automation Anywhere
>
> Ground Floor (North West Part) Alembic Business Park Premises Alembic Road
> Gorwa.
>
>
>
>
>


Request handler for HTTP call for getting the topology

2023-07-25 Thread Amitkumar Maheshwari
I have been looking into the document 
https://ignite.apache.org/docs/2.9.0/restapi#topology

From this, I understand that there is an endpoint to get topology-related 
information out of Ignite.

While looking for the relevant code, I could only find one test case, in 
"JettyRestProcessorSignedSelfTest".
But that test only checks "Authorized" vs. "Unauthorized" requests.

I need to understand what is expected from this endpoint, and also where 
the associated request handler is implemented.

Thanks and Regards,
Amitkumar Maheshwari  |  Sr. Software Engineer
www.automationanywhere.com/in
Automation Anywhere
Ground Floor (North West Part) Alembic Business Park Premises Alembic Road 
Gorwa.




Query produced big result set.

2023-07-25 Thread Humphrey Lopez
Hello all,

When we execute certain queries on Ignite we see the warning message:
"Query produced big result set."
fetched=10, duration=676ms, type=LOCAL, distributedJoin=false,
enforceJoinOrder=false, lazy=false

Diving into the code I found it here:
org.apache.ignite.internal.processors.query.running.HeavyQueriesTracker

It has a default threshold of 100_000, and when our queries return more than
100_000 results we get that log message.

The code snippet is:
/**
 * Print warning message to log when query result set fetch count is bigger
 * than specified threshold. Threshold may be recalculated with multiplier.
 */
public void checkOnFetchNext() {
    ++fetchedSize;

    if (threshold > 0 && fetchedSize >= threshold) {
        LT.warn(log, BIG_RESULT_SET_MSG + qryInfo.queryInfo("fetched=" + fetchedSize));

        if (thresholdMult > 1)
            threshold *= thresholdMult;
        else
            threshold = 0;

        bigResults = true;
    }
}

But I don't see a place where I can define this threshold in Ignite
(IgniteConfiguration, CacheConfiguration, SqlConfiguration, etc.). I would
like to set it on the query, on the cache, or in the Ignite configuration.

I do see a usage in
org.apache.ignite.internal.processors.query.running.SqlQueryMXBeanImpl
/** {@inheritDoc} */
@Override public void setResultSetSizeThreshold(long rsSizeThreshold) {
    heavyQrysTracker.setResultSetSizeThreshold(rsSizeThreshold);
}

But this is not a configuration item.
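
The only runtime knob I can find is that MXBean, so as a stopgap I could
probably raise it over JMX. A rough sketch, assuming the bean is exposed on
the platform MBeanServer and that setResultSetSizeThreshold surfaces as a
writable ResultSetSizeThreshold attribute (I have not verified the exact
registered name):

import java.lang.management.ManagementFactory;
import javax.management.Attribute;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class RaiseResultSetThreshold {
    public static void main(String[] args) throws Exception {
        MBeanServer srv = ManagementFactory.getPlatformMBeanServer();

        // Assumption: the SQL query bean is registered with a name containing
        // "SqlQueryMXBean"; the exact group/domain may differ per Ignite version.
        for (ObjectName name : srv.queryNames(null, null)) {
            if (name.getCanonicalName().contains("SqlQueryMXBean")) {
                // setResultSetSizeThreshold(long) should show up as the
                // "ResultSetSizeThreshold" attribute.
                srv.setAttribute(name, new Attribute("ResultSetSizeThreshold", 1_000_000L));
                System.out.println("Updated " + name);
            }
        }
    }
}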

- Where can I set/configure this threshold in Ignite to a higher value so we
don't get those warnings any longer?
- If we increase the number of nodes, will that help (less data on each node)?

Thanks.

Humphrey


Re: Ignite SQL

2023-07-25 Thread Arunima Barik
Why does spark.sql() take more time than client.sql() when the query is the
same and both run against the same Ignite-backed data?
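
For context, the two paths I am comparing look roughly like this (a minimal
Java sketch; the table name PERSON, the config path, and the addresses are
placeholders, and I am assuming the ignite-spark data source short name
"ignite" with its "config"/"table" option keys):

import java.util.List;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class IgniteSqlPaths {
    public static void main(String[] args) throws Exception {
        // Path 1: read the Ignite-backed table as a DataFrame and query it via Spark SQL.
        SparkSession spark = SparkSession.builder()
            .appName("ignite-sql")
            .master("local[*]") // local master just for the sketch
            .getOrCreate();

        Dataset<Row> df = spark.read()
            .format("ignite")                                // ignite-spark data source
            .option("config", "/path/to/ignite-config.xml")  // placeholder config path
            .option("table", "PERSON")                       // placeholder table
            .load();

        df.createOrReplaceTempView("person");
        spark.sql("SELECT city, COUNT(*) FROM person GROUP BY city").show();

        // Path 2: run the same SQL directly on the cluster through a thin client.
        try (IgniteClient client = Ignition.startClient(
                new ClientConfiguration().setAddresses("127.0.0.1:10800"))) {
            List<List<?>> rows = client
                .query(new SqlFieldsQuery("SELECT city, COUNT(*) FROM PERSON GROUP BY city"))
                .getAll();

            rows.forEach(System.out::println);
        }
    }
}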

Regards

On Tue, 18 Jul, 2023, 10:40 pm Stephen Darlington, <
stephen.darling...@gridgain.com> wrote:

> “Correct” is hard to quantify without knowing your use case, but option 1
> is probably what you want. Spark pushes down SQL execution to Ignite, so
> you get all the distribution, use of indexes, etc.
>
> > On 14 Jul 2023, at 16:12, Arunima Barik 
> wrote:
> >
> > Hello team
> >
> > What is the correct way out of these?
> >
> > 1. Write a spark dataframe to ignite
> > Read the same back and perform spark.sql() on that
> >
> > 2. Write the spark dataframe to ignite
> > Connect to server via a thin client
> > Perform client.sql()
> >
> > Regards
> > Arunima
>
>


Re: Cache write synchronization mode

2023-07-25 Thread Pavel Tupitsyn
> If the primary node 'comes back' after the primary node failure would you
expect the new value to propagate to all nodes?

I don't think so, but I'm not 100% sure - could you ask about this specific
case in a separate thread?
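
That said, for reference, switching a cache to FullSync (and, if you need
strictly up-to-date reads, disabling reads from backups) is just cache
configuration. A minimal sketch with a placeholder cache name:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class FullSyncCacheExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Integer, String> ccfg =
                new CacheConfiguration<Integer, String>("myReplicatedCache") // placeholder name
                    .setCacheMode(CacheMode.REPLICATED)
                    // Wait for the write on all participating nodes, not just the primary.
                    .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC)
                    // Optionally avoid serving reads from backup copies.
                    .setReadFromBackup(false);

            IgniteCache<Integer, String> cache = ignite.getOrCreateCache(ccfg);
            cache.put(1, "value");
        }
    }
}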

On Tue, Jul 25, 2023 at 8:50 AM Raymond Wilson 
wrote:

> >>  However, if a primary node fails before at least 1 backup
> node receives an update, then the update will be lost, and all nodes will
> have the old value.
>
> Does this imply that it is a good idea to have the FullSync write
> synchronization mode? If the primary node 'comes back' after the primary
> node failure would you expect the new value to propagate to all nodes?
>
>
> On Tue, Jul 25, 2023 at 5:22 PM Pavel Tupitsyn 
> wrote:
>
>> > if a hard failure occurs to one of the backup servers in the replicated
>> cache will the server that failed have an inconsistent (old) copy of that
>> element in the replicated cache when it restarts
>>
>> If only a backup server fails and restarts, it will get new data from the
>> primary node, no issue here.
>> However, if a primary node fails before at least 1 backup node receives
>> an update, then the update will be lost, and all nodes will have the old
>> value.
>>
>> Related: CacheConfiguration.ReadFromBackup property is true by default,
>> meaning that with PrimarySync it is possible to get old value from a backup
>> node after an update, before backups receive new data.
>>
>> On Mon, Jul 24, 2023 at 11:51 PM Raymond Wilson <
>> raymond_wil...@trimble.com> wrote:
>>
>>> Hi Pavel,
>>>
>>> I understand the differences between the sync modes in terms of when the
>>> write returns. What I want to understand is if there are consistency risks
>>> with the PrimarySync versus FullSync modes.
>>>
>>> For example, if I have 4 nodes participating in the replicated cache
>>> (and am using the default PrimarySync mode), then the write will return
>>> once the primary node in the replicated cache has completed the write. At
>>> that point if a hard failure occurs to one of the backup servers in the
>>> replicated cache will the server that failed have an inconsistent (old)
>>> copy of that element in the replicated cache when it restarts?
>>>
>>> Raymond.
>>>
>>>
>
> --
> 
> Raymond Wilson
> Trimble Distinguished Engineer, Civil Construction Software (CCS)
> 11 Birmingham Drive | Christchurch, New Zealand
> raymond_wil...@trimble.com
>
>
> 
>