[Wikidata-bugs] [Maniphest] [Commented On] T200563: wdq1003 is anomalous

2018-10-22 Thread Smalyshev
Smalyshev added a comment. Also, I notice the timestamp on 1003 is not advancing: Oct 23 00:54:40 wdqs1003 wdqs-updater[10071]: 00:54:40.692 [main] INFO org.wikidata.query.rdf.tool.Updater - Polled up to 2018-10-23T00:43:08Z at (4.9, 8.0, 5.1) updates per second and (1.8, 2256.6, 3258.3) …
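For context: the "Polled up to" timestamp is how far the updater has caught up in the change stream, so if it stops advancing while the log's wall-clock time keeps moving, the updater is stalled. A minimal sketch of that lag check, using the two timestamps from the log line above (the class name is hypothetical, and both timestamps are assumed to be UTC):

    import java.time.Duration;
    import java.time.Instant;

    public class UpdaterLagCheck {
        public static void main(String[] args) {
            Instant logTime = Instant.parse("2018-10-23T00:54:40Z");    // wall-clock time of the log line
            Instant polledUpTo = Instant.parse("2018-10-23T00:43:08Z"); // "Polled up to" value
            Duration lag = Duration.between(polledUpTo, logTime);
            // A lag that grows across successive "Polled up to" lines means the
            // updater is stuck or falling behind, as observed here on wdqs1003.
            System.out.println("Updater lag: " + lag.toMinutes() + " minutes"); // 11 minutes here
        }
    }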

[Wikidata-bugs] [Maniphest] [Commented On] T200563: wdq1003 is anomalous

2018-10-22 Thread Smalyshev
Smalyshev added a comment. Also getting this now: Oct 23 00:41:49 wdqs1003 wdqs-updater[10071]: 00:41:49.901 [main] ERROR o.a.k.c.c.i.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=wdqs1003] Offset commit failed on partition eqiad.mediawiki.page-undelete-0 at offset 136517: The …
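That error is the consumer-group offset commit the Kafka client performs after processing a batch; a rejected commit usually means the consumer was rebalanced out of the group, for instance because poll() was not called often enough. A minimal sketch of the pattern (the broker address is a placeholder; only the group id and topic come from the log line above):

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.CommitFailedException;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class OffsetCommitSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
            props.put("group.id", "wdqs1003");                // group id from the log
            props.put("enable.auto.commit", "false");
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("eqiad.mediawiki.page-undelete"));
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                    // ... apply the polled events to the triple store ...
                    try {
                        consumer.commitSync(); // the commit that fails in the log above
                    } catch (CommitFailedException e) {
                        // The group rebalanced while we were processing; the next
                        // poll() re-joins the group and records may be re-delivered.
                    }
                }
            }
        }
    }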

[Wikidata-bugs] [Maniphest] [Commented On] T200563: wdq1003 is anomalous

2018-10-02 Thread gerritbot
gerritbot added a comment. Change 463248 merged by Gehel: [operations/puppet@production] wdqs: don't send nginx logs to logstash https://gerrit.wikimedia.org/r/463248

[Wikidata-bugs] [Maniphest] [Commented On] T200563: wdq1003 is anomalous

2018-09-28 Thread Gehel
Gehel added a comment. In T200563#4623531, @Smalyshev wrote:
> Great work!
Thanks (I'll forward to @Volans)
> I am not sure though why logging would be that much of an issue; shouldn't the log code take care of batching it, etc.? As for not logging nginx - do we have these logs somewhere else? If …

[Wikidata-bugs] [Maniphest] [Commented On] T200563: wdq1003 is anomalous

2018-09-27 Thread Smalyshev
Smalyshev added a comment. Great work! I am not sure though why logging would be that much of an issue; shouldn't the log code take care of batching it, etc.? As for not logging nginx - do we have these logs somewhere else? If yes, then I guess we can stop that. We could probably tune Kafka …
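On the batching point: logback only buffers if an async appender sits in front of the network appender; a bare socket appender can block application threads whenever the logstash endpoint is slow. A minimal sketch of that wiring, done programmatically (host, port, and queue size are illustrative; the actual wdqs setup is the puppet-managed logback config touched by the patch below):

    import ch.qos.logback.classic.AsyncAppender;
    import ch.qos.logback.classic.Logger;
    import ch.qos.logback.classic.LoggerContext;
    import ch.qos.logback.classic.net.SocketAppender;
    import org.slf4j.LoggerFactory;

    public class AsyncLoggingSketch {
        public static void main(String[] args) {
            LoggerContext ctx = (LoggerContext) LoggerFactory.getILoggerFactory();

            // Network appender shipping events towards logstash (placeholders).
            SocketAppender socket = new SocketAppender();
            socket.setContext(ctx);
            socket.setRemoteHost("logstash.example.org");
            socket.setPort(4560);
            socket.start();

            // The async appender queues events so a slow endpoint does not
            // stall the application; with neverBlock it drops when full.
            AsyncAppender async = new AsyncAppender();
            async.setContext(ctx);
            async.setQueueSize(1024);
            async.setNeverBlock(true);
            async.addAppender(socket);
            async.start();

            ctx.getLogger(Logger.ROOT_LOGGER_NAME).addAppender(async);
        }
    }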

[Wikidata-bugs] [Maniphest] [Commented On] T200563: wdq1003 is anomalous

2018-09-27 Thread gerritbot
gerritbot added a comment. Change 463254 had a related patch set uploaded (by Gehel; owner: Gehel): [operations/puppet@production] wdqs: cleanup logback configuration https://gerrit.wikimedia.org/r/463254

[Wikidata-bugs] [Maniphest] [Commented On] T200563: wdq1003 is anomalous

2018-09-27 Thread gerritbot
gerritbot added a comment. Change 463248 had a related patch set uploaded (by Gehel; owner: Gehel): [operations/puppet@production] wdqs: don't send nginx logs to logstash https://gerrit.wikimedia.org/r/463248

[Wikidata-bugs] [Maniphest] [Commented On] T200563: wdq1003 is anomalous

2018-09-14 Thread Smalyshev
Smalyshev added a comment. So, weird thing: now that we switched data centers, wdqs2003 is showing the same anomaly. Could it be that our load balancing is not balancing the load evenly for these hosts?

[Wikidata-bugs] [Maniphest] [Commented On] T200563: wdq1003 is anomalous

2018-07-31 Thread Stashbot
Stashbot added a comment. Mentioned in SAL (#wikimedia-operations) [2018-07-31T12:29:06Z] rebalance LVS weights to send less traffic to wdqs1003 - T200563