Re: How to access an IGFS file written on one node from another node in the cluster?

2020-02-24 Thread Preetiiii
If I configure the Ignite file system to work with Spark or Hadoop, will I be
able to write and read files from other nodes using an ordinary Java
application that I have written? If the source code needs to be modified,
please tell me where, as I am new to Ignite. Thanks in advance.
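
For context, a minimal sketch of what such plain-Java access could look like
through the IGFS API, assuming the cluster nodes are started with an IGFS
instance named "igfs" (the instance name, config file, and paths below are
assumptions for illustration):

    import java.io.InputStream;
    import java.io.OutputStream;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteFileSystem;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.igfs.IgfsPath;

    public class IgfsReadWriteSketch {
        public static void main(String[] args) throws Exception {
            // Join the cluster; a client node works as well as a server node.
            Ignite ignite = Ignition.start("ignite-config.xml");

            // Look up the IGFS instance by the name from the node config.
            IgniteFileSystem fs = ignite.fileSystem("igfs");

            // Write a file; IGFS data lives in the cluster, so any node can read it.
            IgfsPath path = new IgfsPath("/demo/hello.txt");
            try (OutputStream out = fs.create(path, true /* overwrite */)) {
                out.write("written on node A".getBytes());
            }

            // Read it back, possibly from a different node.
            byte[] buf = new byte[64];
            try (InputStream in = fs.open(path)) {
                int n = in.read(buf);
                System.out.println(new String(buf, 0, n));
            }
        }
    }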





Re: Does Apache Ignite support tiering, or only caching?

2020-02-24 Thread Preetiiii
I am new to Ignite. I think that with data regions we specify how much memory
we want to allocate, and it comes from RAM. Yes, I agree that we can slice
and dice memory, but how do I specify a tier other than RAM? What if I want
to allocate a 1 GB data region from /dev/sda or any other available device?






Re: Does Apache Ignite support tiering, or only caching?

2020-02-24 Thread Denis Magda
Please check out this documentation on data region configuration. With
regions, you can slice and dice your available memory space and enable the
persistence tier per region:
https://apacheignite.readme.io/docs/memory-configuration#section-data-regions
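
For instance, here is a minimal sketch in the Java API (the region names,
sizes, and storage path are assumptions for illustration): persistence is
switched on per region, and the storage path can point at a mount of whichever
device should serve as the disk tier:

    import org.apache.ignite.configuration.DataRegionConfiguration;
    import org.apache.ignite.configuration.DataStorageConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;

    public class TieredRegionsSketch {
        public static IgniteConfiguration configure() {
            DataStorageConfiguration storageCfg = new DataStorageConfiguration();

            // RAM-only region capped at 1 GB.
            DataRegionConfiguration ramRegion = new DataRegionConfiguration()
                .setName("ramRegion")
                .setMaxSize(1024L * 1024 * 1024);

            // Region whose pages are also persisted to the disk tier.
            DataRegionConfiguration diskRegion = new DataRegionConfiguration()
                .setName("diskRegion")
                .setMaxSize(2L * 1024 * 1024 * 1024)
                .setPersistenceEnabled(true);

            storageCfg.setDataRegionConfigurations(ramRegion, diskRegion);

            // Put persistence files on a mount of the desired device,
            // e.g. /dev/sda mounted at /mnt/ignite (an assumption).
            storageCfg.setStoragePath("/mnt/ignite/storage");

            return new IgniteConfiguration().setDataStorageConfiguration(storageCfg);
        }
    }

Note that with native persistence the disk acts as a persistence layer
underneath the in-memory region rather than a separately addressable cache
tier.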

-
Denis


On Fri, Feb 21, 2020 at 4:21 AM Preet  wrote:

> I want different tiers, such as DRAM and other available devices. How do I
> specify such a tier option?


Re: Long running query

2020-02-24 Thread Denis Magda
Please check that you have followed all the standard recommendations
summarized on this page:
https://www.gridgain.com/docs/latest/perf-troubleshooting-guide/sql-tuning

Pay attention to the "Basic Considerations: GridGain vs RDBMS" section.
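
As one example of the kind of tuning that guide covers: make sure the join can
be driven by an index matching both the filter and the join column. A sketch
of what that could look like here (the index name, inline size, and cache name
are assumptions; tables created via SQL get caches named SQL_PUBLIC_<TABLE> by
default):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.cache.query.SqlFieldsQuery;

    public class IndexTuningSketch {
        // Create a composite index covering the ORG_ID filter and the
        // DRUG_ID join column, so one index serves both predicates.
        public static void createIndex(Ignite ignite) {
            ignite.cache("SQL_PUBLIC_LD").query(new SqlFieldsQuery(
                "CREATE INDEX IF NOT EXISTS IDX_LD_ORG_DRUG ON LD (ORG_ID, DRUG_ID) " +
                "INLINE_SIZE 32"
            )).getAll();
        }
    }

Whether this helps depends on the plan that gets chosen; EXPLAIN should show
the new index driving the join.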

-
Denis


On Fri, Feb 21, 2020 at 3:45 AM breathem  wrote:

> Hello,
> We have two tables: LD (8,000,000 rows) and DRUGS (130,000 rows).
> The following query takes ~7 minutes, which is significantly longer than in
> an RDBMS (~1.5 sec):
> select d.drug_id, d.drug_name, ld.price
> from drugs d
> left outer join ld on d.drug_id = ld.drug_id and ld.org_id = 264;
>
> Explain for query:
> SELECT
> D__Z0.DRUG_ID AS __C0_0,
> D__Z0.DRUG_NAME AS __C0_1,
> __Z1.PRICE AS __C0_2
> FROM PUBLIC.DRUGS D__Z0
> /* PUBLIC.IDX_DRUG_ID_NAME */
> LEFT OUTER JOIN PUBLIC.LD __Z1
> /* PUBLIC.IDX_ORG_MEDP_DRUG: ORG_ID = 264
> AND DRUG_ID = D__Z0.DRUG_ID
>  */
> ON (__Z1.ORG_ID = 264)
> AND (D__Z0.DRUG_ID = __Z1.DRUG_ID)
> SELECT
> __C0_0 AS DRUG_ID,
> __C0_1 AS DRUG_NAME,
> __C0_2 AS PRICE
> FROM PUBLIC.__T0
> /* PUBLIC."merge_scan" */
>
> Indexes on table LD: IDX_ORG_MEDP_DRUGS(ORG_ID, MEDP_ID, DRUG_ID),
> IDX_DRUG_ID(DRUG_ID)
> Indexes on table DRUGS: IDX_DRUG_ID_NAME(DRUG_ID, DRUG_NAME)
>
> We tried to force IDX_DRUG_ID:
> select d.drug_id, d.drug_name, ld.price
> from drugs d
> left outer join ld use index (idx_drug_id) on d.drug_id = ld.drug_id and
> ld.org_id = 264;
>
> This query executes in ~8 sec.
>
> Explain for query:
> SELECT
> D__Z0.DRUG_ID AS __C0_0,
> D__Z0.DRUG_NAME AS __C0_1,
> __Z1.PRICE AS __C0_2
> FROM PUBLIC.DRUGS D__Z0
> /* PUBLIC.IDX_DRUG_ID_NAME */
> LEFT OUTER JOIN PUBLIC.LD __Z1 USE INDEX (IDX_DRUG_ID)
> /* PUBLIC.IDX_DRUG_ID: DRUG_ID = D__Z0.DRUG_ID */
> ON (__Z1.ORG_ID = 264)
> AND (D__Z0.DRUG_ID = __Z1.DRUG_ID)
> SELECT
> __C0_0 AS DRUG_ID,
> __C0_1 AS DRUG_NAME,
> __C0_2 AS PRICE
> FROM PUBLIC.__T0
> /* PUBLIC."merge_scan" */
>
> How can we speed up this query?
>


Re: Offheap memory consumption + enabled persistence

2020-02-24 Thread mikle-a
Hi Dmitry!

I didn't understand what you mean by "pure data size" :( Could you please
explain how to calculate it?

Meanwhile, I've added monitoring for the mentioned cache folders and retested
the case with random keys and persistence enabled.

Overall test chart: (inline chart image not preserved in the archive)

Last 30 min: (inline chart image not preserved in the archive)

As I can see, the cache directory grows along with "offheap memory used".
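
For what it's worth, here is a minimal sketch of reading those numbers
programmatically, assuming metrics were enabled per region via
DataRegionConfiguration.setMetricsEnabled(true) (everything else below is
illustrative):

    import org.apache.ignite.DataRegionMetrics;
    import org.apache.ignite.Ignite;

    public class RegionMetricsSketch {
        // Print per-region page counts; multiply by the page size
        // (4 KB by default) to approximate offheap bytes.
        public static void print(Ignite ignite) {
            for (DataRegionMetrics m : ignite.dataRegionMetrics()) {
                System.out.printf("%s: allocatedPages=%d, physicalPages=%d%n",
                    m.getName(), m.getTotalAllocatedPages(), m.getPhysicalMemoryPages());
            }
        }
    }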

Thanks in advance for your help!





Re: Read-through not working as expected in case of Replicated cache

2020-02-24 Thread Prasad Bhalerao
Hi,

Is this a bug, or is the cache designed to work this way?

If it is as designed, could this behavior be noted in the Ignite documentation?

Thanks,
Prasad
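
For reference, a minimal sketch of the first option Ilya describes below, with
write-through enabled so that every change, including remove(), goes through
the store. The cache name and store class are the ones from Akash's
configuration further down; note the store would also need to implement the
CacheWriter side, and everything else here is an assumption:

    import javax.cache.configuration.Factory;
    import javax.cache.configuration.FactoryBuilder;
    import org.apache.ignite.cache.CacheMode;
    import org.apache.ignite.configuration.CacheConfiguration;

    public class ReadWriteThroughSketch {
        static CacheConfiguration<Object, Object> networkCacheCfg() {
            CacheConfiguration<Object, Object> cfg = new CacheConfiguration<>("NETWORK_CACHE");
            cfg.setCacheMode(CacheMode.REPLICATED);
            cfg.setReadThrough(true);
            // The key difference: removes and puts also hit the database,
            // keeping the store and all replicas in agreement.
            cfg.setWriteThrough(true);
            Factory storeFactory = FactoryBuilder.factoryOf(NetworkDataCacheLoader.class);
            cfg.setCacheStoreFactory(storeFactory);
            return cfg;
        }
    }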

On Wed, Oct 30, 2019 at 7:19 PM Ilya Kasnacheev wrote:

> Hello!
>
> I have discussed this with fellow Ignite developers, and they say read-through
> for a replicated cache will work when either:
>
> - writeThrough is enabled and all changes go through it, or
> - the database contents do not change for keys that have already been read.
>
> I can see that neither condition is met in your case, so the behavior you
> are seeing is expected.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> On Tue, Oct 29, 2019 at 18:18, Akash Shinde wrote:
>
>> I am using Ignite version 2.6.
>>
>> I am starting 3 server nodes with a replicated cache, plus 1 client node.
>> The cache configuration is as follows: read-through is enabled, but
>> write-through is false. Load-by-key is implemented in the cache loader as
>> shown below.
>>
>> Steps to reproduce the issue:
>> 1) Delete an entry from the cache using IgniteCache.remove(). (The entry
>> is removed only from the cache; it remains in the DB because write-through
>> is false.)
>> 2) Invoke IgniteCache.get() for the same key as in step 1.
>> 3) Now query the cache from the client node. Every invocation returns
>> different results: sometimes the reloaded entry is present, sometimes it
>> is not.
>>
>> It looks like read-through is not replicating the reloaded entry to all
>> nodes in the case of a REPLICATED cache.
>>
>> To investigate further, I changed the cache mode to PARTITIONED and set
>> the backup count to 3, i.e. the total number of nodes in the cluster (to
>> mimic REPLICATED behavior).
>> This time it worked as expected:
>> every invocation returned the same result, with the reloaded entry.
>>
>> private CacheConfiguration networkCacheCfg() {
>>     CacheConfiguration networkCacheCfg =
>>         new CacheConfiguration<>(CacheName.NETWORK_CACHE.name());
>>     networkCacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
>>     networkCacheCfg.setWriteThrough(false);
>>     networkCacheCfg.setReadThrough(true);
>>     networkCacheCfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
>>     networkCacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
>>     // networkCacheCfg.setBackups(3);
>>     networkCacheCfg.setCacheMode(CacheMode.REPLICATED);
>>     Factory storeFactory = FactoryBuilder.factoryOf(NetworkDataCacheLoader.class);
>>     networkCacheCfg.setCacheStoreFactory(storeFactory);
>>     networkCacheCfg.setIndexedTypes(DefaultDataAffinityKey.class, NetworkData.class);
>>     networkCacheCfg.setSqlIndexMaxInlineSize(65);
>>     RendezvousAffinityFunction affinityFunction = new RendezvousAffinityFunction();
>>     affinityFunction.setExcludeNeighbors(false);
>>     networkCacheCfg.setAffinity(affinityFunction);
>>     networkCacheCfg.setStatisticsEnabled(true);
>>     // networkCacheCfg.setNearConfiguration(nearCacheConfiguration());
>>     return networkCacheCfg;
>> }
>>
>> @Override
>> public V load(K k) throws CacheLoaderException {
>>     V value = null;
>>     DataSource dataSource = springCtx.getBean(DataSource.class);
>>     try (Connection connection = dataSource.getConnection();
>>          PreparedStatement statement = connection.prepareStatement(loadByKeySql)) {
>>         // statement.setObject(1, k.getId());
>>         setPreparedStatement(statement, k);
>>         try (ResultSet rs = statement.executeQuery()) {
>>             if (rs.next()) {
>>                 value = rowMapper.mapRow(rs, 0);
>>             }
>>         }
>>     } catch (SQLException e) {
>>         throw new CacheLoaderException(e.getMessage(), e);
>>     }
>>
>>     return value;
>> }
>>
>>
>> Thanks,
>>
>> Akash
>>
>>


Re: How to access an IGFS file written on one node from another node in the cluster?

2020-02-24 Thread Andrei Aleksandrov

Hi,

I can suggest using a cache store implementation. For example, the guide
(its link was not preserved in this archive) shows how a Hive schema can be
imported using the Web Console.


If you need to work with files (not tables), then please use the HDFS or
Spark API directly. Ignite provides good Spark integration:


https://apacheignite-fs.readme.io/docs/ignite-data-frame
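
As a rough illustration of that integration (a sketch: the Spark session
settings, config path, and table name are assumptions, and the ignite-spark
module is presumed to be on the classpath):

    import org.apache.ignite.spark.IgniteDataFrameSettings;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class IgniteDataFrameSketch {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                .appName("ignite-df-sketch")
                .master("local")
                .getOrCreate();

            // Read an Ignite SQL table as a Spark DataFrame.
            Dataset<Row> df = spark.read()
                .format(IgniteDataFrameSettings.FORMAT_IGNITE())
                .option(IgniteDataFrameSettings.OPTION_CONFIG_FILE(), "ignite-config.xml")
                .option(IgniteDataFrameSettings.OPTION_TABLE(), "person")
                .load();

            df.show();
        }
    }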

BR,
Andrei

On 2/21/2020 6:52 PM, Preet wrote:

Then how do I create a shared file system? Is there any way for a file
written by one node in the cluster to be accessed or modified by another node?


