f02061232-1 opened a new issue #4414: wrong index name when automatically deleting metrics data
URL: https://github.com/apache/skywalking/issues/4414
 
 
   Please answer these questions before submitting your issue.
   
   - Why do you submit this issue?
   - [ ] Question or discussion
   - [X] Bug
   - [ ] Requirement
   - [ ] Feature or performance improvement
   
   ___
   ### Question
   - What do you want to know?
   
   ___
   ### Bug
   - Which version of SkyWalking, OS and JRE?
   skywalking 6.2 macOS, java version 1.8.0_144
   - Which company or project?
   
   - What happened?
   If possible, provide a way for reproducing the error. e.g. demo application, component version.
   
   When DataTTLKeeperTimer runs, the system starts deleting expired metrics data; however, a wrong index name is generated during the deletion.
   Error trace:
   2020-02-25 02:03:59,390 - org.apache.skywalking.oap.server.core.storage.ttl.DataTTLKeeperTimer - 61 [pool-13-thread-1] INFO  [] - Beginning to remove expired metrics from the storage.
   2020-02-25 02:03:59,441 - org.apache.skywalking.oap.server.core.storage.ttl.DataTTLKeeperTimer - 51 [pool-13-thread-1] ERROR [] - Remove data in background failure.
   org.elasticsearch.ElasticsearchStatusException: Elasticsearch exception [type=index_not_found_exception, reason=no such index]
           at org.elasticsearch.rest.BytesRestResponse.errorFromXContent(BytesRestResponse.java:177) ~[elasticsearch-6.3.2.jar:6.3.2]
           at org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:653) ~[elasticsearch-rest-high-level-client-6.3.2.jar:6.3.2]
           at org.elasticsearch.client.RestHighLevelClient.parseResponseException(RestHighLevelClient.java:628) ~[elasticsearch-rest-high-level-client-6.3.2.jar:6.3.2]
           at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:535) ~[elasticsearch-rest-high-level-client-6.3.2.jar:6.3.2]
           at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:508) ~[elasticsearch-rest-high-level-client-6.3.2.jar:6.3.2]
           at org.elasticsearch.client.IndicesClient.delete(IndicesClient.java:77) ~[elasticsearch-rest-high-level-client-6.3.2.jar:6.3.2]
           at org.apache.skywalking.oap.server.library.client.elasticsearch.ElasticSearchClient.deleteIndex(ElasticSearchClient.java:152) ~[library-client-6.2.0.jar:6.2.0]
           at org.apache.skywalking.oap.server.storage.plugin.elasticsearch.base.HistoryDeleteEsDAO.deleteHistory(HistoryDeleteEsDAO.java:72) ~[storage-elasticsearch-plugin-6.2.0.jar:6.2.0]
           at org.apache.skywalking.oap.server.core.storage.ttl.DataTTLKeeperTimer.execute(DataTTLKeeperTimer.java:73) ~[server-core-6.2.0.jar:6.2.0]
           at org.apache.skywalking.oap.server.core.storage.ttl.DataTTLKeeperTimer.lambda$delete$1(DataTTLKeeperTimer.java:66) ~[server-core-6.2.0.jar:6.2.0]
           at java.lang.Iterable.forEach(Iterable.java:75) ~[?:1.8.0_144]
           at org.apache.skywalking.oap.server.core.storage.ttl.DataTTLKeeperTimer.delete(DataTTLKeeperTimer.java:64) ~[server-core-6.2.0.jar:6.2.0]
           at org.apache.skywalking.apm.util.RunnableWithExceptionProtection.run(RunnableWithExceptionProtection.java:36) [apm-util-6.2.0.jar:6.2.0]
           at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_144]
           at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [?:1.8.0_144]
           at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_144]
           at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [?:1.8.0_144]
           at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_144]
           at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_144]
           at java.lang.Thread.run(Thread.java:748) [?:1.8.0_144]
           Suppressed: org.elasticsearch.client.ResponseException: method [DELETE], host [http://172.16.120.60:9200], URI [/skywalking-cluster_skywalking-cluster_alarm_record-20200220?master_timeout=30s&ignore_unavailable=false&expand_wildcards=open%2Cclosed&allow_no_indices=true&timeout=30s], status line [HTTP/1.1 404 Not Found]
   {"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no such index","index_uuid":"_na_","resource.type":"index_or_alias","resource.id":"skywalking-cluster_skywalking-cluster_alarm_record-20200220","index":"skywalking-cluster_skywalking-cluster_alarm_record-20200220"}],"type":"index_not_found_exception","reason":"no such index","index_uuid":"_na_","resource.type":"index_or_alias","resource.id":"skywalking-cluster_skywalking-cluster_alarm_record-20200220","index":"skywalking-cluster_skywalking-cluster_alarm_record-20200220"},"status":404}
   
   The index name generated here is "skywalking-cluster_skywalking-cluster_alarm_record-20200220", but it should be "skywalking-cluster_alarm_record-20200220". The prefix "skywalking-cluster" is our Elasticsearch cluster namespace, and it was added twice.
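   The index names in the error suggest the namespace prefix is applied a second time to a name that was already prefixed. A minimal sketch of that suspected pattern (the helper below is hypothetical, not the actual SkyWalking code):

```java
public class IndexNameDemo {
    // Hypothetical helper: prepends the configured namespace to a
    // logical index name, mirroring how SkyWalking builds physical
    // Elasticsearch index names.
    static String formatIndexName(String namespace, String indexName) {
        if (namespace != null && !namespace.isEmpty()) {
            return namespace + "_" + indexName;
        }
        return indexName;
    }

    public static void main(String[] args) {
        String namespace = "skywalking-cluster";
        String logical = "alarm_record-20200220";

        // Correct: prefix applied once.
        String once = formatIndexName(namespace, logical);
        System.out.println(once); // skywalking-cluster_alarm_record-20200220

        // Bug pattern: an already-prefixed name is passed back through
        // the formatter, so the namespace is prepended a second time.
        String twice = formatIndexName(namespace, once);
        System.out.println(twice); // skywalking-cluster_skywalking-cluster_alarm_record-20200220
    }
}
```

   The second output matches the nonexistent index the DELETE request targeted in the trace above.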
   
   Here is the config:
   storage:
     elasticsearch:
       nameSpace: ${SW_NAMESPACE:"skywalking-cluster"}
       clusterNodes: ${SW_STORAGE_ES_CLUSTER_NODES:xxx.xxx.xxx.xxx:9200}
       #user: ${SW_ES_USER:""}
       #password: ${SW_ES_PASSWORD:""}
       indexShardsNumber: ${SW_STORAGE_ES_INDEX_SHARDS_NUMBER:5}
       indexReplicasNumber: ${SW_STORAGE_ES_INDEX_REPLICAS_NUMBER:1}
       # These data TTL settings override the same settings in the core module.
       recordDataTTL: ${SW_STORAGE_ES_RECORD_DATA_TTL:14} # Unit is day
       otherMetricsDataTTL: ${SW_STORAGE_ES_OTHER_METRIC_DATA_TTL:4} # Unit is day
       monthMetricsDataTTL: ${SW_STORAGE_ES_MONTH_METRIC_DATA_TTL:3} # Unit is month
       # Batch process settings, refer to https://www.elastic.co/guide/en/elasticsearch/client/java-api/5.5/java-docs-bulk-processor.html
       bulkActions: ${SW_STORAGE_ES_BULK_ACTIONS:500} # Execute the bulk every 500 requests
       bulkSize: ${SW_STORAGE_ES_BULK_SIZE:10} # Flush the bulk every 10mb
       flushInterval: ${SW_STORAGE_ES_FLUSH_INTERVAL:30} # Flush the bulk every 30 seconds regardless of the number of requests
       concurrentRequests: ${SW_STORAGE_ES_CONCURRENT_REQUESTS:4} # The number of concurrent requests
       metadataQueryMaxSize: ${SW_STORAGE_ES_QUERY_MAX_SIZE:8000}
       segmentQueryMaxSize: ${SW_STORAGE_ES_QUERY_SEGMENT_SIZE:200}
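   For context, with recordDataTTL set to 14 days the keeper timer should target day-suffixed record indices at or beyond the TTL boundary. A rough sketch of that date arithmetic (an assumed illustration, not the actual DataTTLKeeperTimer implementation):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class TtlCutoffDemo {
    // Hypothetical: build the dated index name exactly on the TTL
    // boundary; indices with an older date suffix are expired.
    static String deadlineIndex(String namespace, String index,
                                LocalDate today, int ttlDays) {
        // BASIC_ISO_DATE renders LocalDate as yyyyMMdd, matching the
        // day-suffix style seen in the error above.
        String suffix = today.minusDays(ttlDays)
                             .format(DateTimeFormatter.BASIC_ISO_DATE);
        return namespace + "_" + index + "-" + suffix;
    }

    public static void main(String[] args) {
        LocalDate today = LocalDate.of(2020, 2, 25);
        // With recordDataTTL=14, indices dated on/before 2020-02-11 expire.
        System.out.println(deadlineIndex("skywalking-cluster",
                "alarm_record", today, 14));
        // skywalking-cluster_alarm_record-20200211
    }
}
```

   Note the namespace appears exactly once in the expected name; the failing request in the trace contains it twice.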
   
   
   ___
   ### Requirement or improvement
   - Please describe your requirements or improvement suggestions.
