Hi Billy,
Yes, that's what I did: I enabled remote debugging for the running instance and
checked the following.

kylin-2.3/kylin/server-base/src/main/java/org/apache/kylin/rest/service/CubeService.java
has the method updateOnNewSegmentReady(String cubeName), which on this event invokes two
methods:
keepCubeRetention(cubeName);
mergeCubeSegment(cubeName);
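In other words, as far as I can tell the method boils down to this shape (a paraphrased
sketch based on the two calls above, not a verbatim copy of the Kylin source):

    public void updateOnNewSegmentReady(String cubeName) {
        keepCubeRetention(cubeName);  // drop segments outside the configured retention window
        mergeCubeSegment(cubeName);   // submit an auto-merge job if the merge thresholds are met
    }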
This method is invoked from the Spring bean
/kylin-2.3/kylin/server-base/src/main/java/org/apache/kylin/rest/service/CacheService.java
via a Broadcaster.Listener cacheListener, whose callback is:
@Override
public void onEntityChange(Broadcaster broadcaster, String entity, Event event, String cacheKey)
        throws IOException {
    logger.info("Initializing Event-BroadCast-Thread"); // was added by me, never reached
    if ("cube".equals(entity) && event == Event.UPDATE) { // added a debug point here, never reached
        final String cubeName = cacheKey;
        new Thread() { // do not block the event broadcast thread
            public void run() {
                try {
                    Thread.sleep(1000);
                    cubeService.updateOnNewSegmentReady(cubeName);
                } catch (Throwable ex) {
                    logger.error("Error in updateOnNewSegmentReady()", ex);
                }
            }
        }.start();
    }
}
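One thing I may still try, to separate "the callback body is broken" from "the callback is
never invoked", is to call the listener by hand from inside the same Spring context,
bypassing the broadcaster entirely. A throwaway sketch (cacheListener, the "cube" entity
and Event.UPDATE are taken from the code above; the cube name and the wrapper method are
placeholders of mine):

    // If invoking the callback directly merges fine, the problem is purely in event
    // delivery (the broadcast never reaches this node), not in the callback itself.
    public void pokeListenerByHand() throws IOException {
        cacheListener.onEntityChange(null, "cube", Event.UPDATE, "my_cube_name");
    }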
This method is not being triggered: I added a debug point as well as the logger shown
above, and neither is ever hit; not even logger.error("Error in updateOnNewSegmentReady()", ex)
shows up in the logs. So either this thread is hanging (a hung-thread issue) or something
else is going on.
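To check the hung-thread theory without re-attaching the debugger, I might dump just the
broadcaster-looking threads from inside the JVM. This is plain JDK, nothing Kylin-specific;
the "broadcast" name filter is only a guess at how the dispatch thread is named:

    import java.util.Map;

    public class BroadcasterThreadCheck {
        // Print the state and stack of every thread whose name hints at the broadcaster,
        // to see whether the event-dispatch thread exists and where it is stuck.
        public static void dumpBroadcasterThreads() {
            for (Map.Entry<Thread, StackTraceElement[]> e : Thread.getAllStackTraces().entrySet()) {
                Thread t = e.getKey();
                if (t.getName().toLowerCase().contains("broadcast")) { // name filter is a guess
                    System.out.println(t.getName() + " state=" + t.getState());
                    for (StackTraceElement frame : e.getValue()) {
                        System.out.println("    at " + frame);
                    }
                }
            }
        }
    }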

One thing I would like to mention: on the same node we are running two Kylin instances
with exactly the same code base, one on 7070 (DEV) and the other on 7072 (TEST), in
different JVMs of course (the port was changed using kylin-replace-port-util).

Important: the listener thread does fire on 7070 (DEV), where auto merge and retention
work fine, but it does not fire on 7072 (TEST). Both instances use exactly the same
kylin.properties but each points to its own metadata.
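As a sanity check on the "exactly the same kylin.properties" point, I may also diff what
the two deployments actually load, in case an override sneaks in somewhere. A minimal
JDK-only sketch; the two paths are placeholders for wherever each instance's conf
directory really lives:

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;
    import java.util.TreeSet;

    public class CompareKylinProps {
        public static void main(String[] args) throws IOException {
            Properties dev = load("/opt/kylin-7070/conf/kylin.properties");   // placeholder path
            Properties test = load("/opt/kylin-7072/conf/kylin.properties");  // placeholder path

            TreeSet<String> keys = new TreeSet<>(dev.stringPropertyNames());
            keys.addAll(test.stringPropertyNames());

            // print every key whose value differs between the two instances
            for (String key : keys) {
                String a = dev.getProperty(key, "<missing>");
                String b = test.getProperty(key, "<missing>");
                if (!a.equals(b)) {
                    System.out.println(key + ": DEV=" + a + " TEST=" + b);
                }
            }
        }

        private static Properties load(String path) throws IOException {
            Properties p = new Properties();
            try (FileInputStream in = new FileInputStream(path)) {
                p.load(in);
            }
            return p;
        }
    }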

If you have any insights, that would be great; I will spend some more time debugging this
hanging-thread issue.
 
Thanks,
Ketan@Exponential

> On 10-Apr-2018, at 6:31 PM, Billy Liu <billy...@apache.org> wrote:
> 
> Hello Ketan
> 
> If the merge is not triggered, there should be some logs saying why
> the merge is ignored. That log is not an exception or a warning, but an INFO
> message telling you what the system is expecting to do.
> 
> With Warm regards
> 
> Billy Liu
> 
> 
> 2018-04-10 15:34 GMT+08:00 ketan dikshit <kdcool6...@yahoo.com.invalid>:
>> Hi Team,
>> I have been working with Kylin 2.3.1 for the last few days and have seen that the
>> Kylin auto-merge functionality is not working.
>> I have not set the volatile range (default = 0), so this should be the default behaviour.
>> 
>> Also, I am not able to see any error in the logs, so basically it is not being
>> triggered.
>> Here is the commit_sha ID: 1e322992b8011d3b430321599d6da762c1a5e6b9
>> 
>> Could you please point me to how I can debug or resolve this? What could be the
>> potential errors/blockers here?
>> 
>> And here are my kylin.properties (nothing special here):
>> 
>> kylin.web.timezone=US/Pacific
>> kylin.metadata.url=kylin2.3.1Meta@hbase
>> kylin.storage.url=hbase
>> kylin.env.hdfs-working-dir=/tmp/kylin-2.3.1-DEV
>> kylin.engine.mr.reduce-input-mb=300
>> kylin.server.mode=all
>> kylin.env=TEST
>> kylin.job.max-concurrent-jobs=10
>> kylin.engine.mr.yarn-check-interval-seconds=10
>> kylin.storage.hbase.table-name-prefix=KYL_TEST_
>> kylin.source.hive.database-for-flat-table=kylin
>> kylin.storage.hbase.compression-codec=lz4
>> kylin.storage.hbase.region-cut-gb=3
>> kylin.storage.hbase.min-region-count=1
>> kylin.storage.hbase.max-region-count=5000
>> kylin.storage.partition.max-scan-bytes=16106127360
>> kylin.storage.hbase.coprocessor-mem-gb=6
>> kylin.security.profile=testing
>> kylin.query.cache-enabled=true
>> kylin.query.cache-threshold-duration=500
>> kylin.query.cache-threshold-scan-count=10240
>> kylin.storage.hbase.scan-cache-rows=4096
>> 
>> Any help would be appreciated.
>> 
>> Thanks,
>> Ketan@Exponential
>> 
>> 
