[MediaWiki-commits] [Gerrit] analytics/refinery[master]: Chmod yearly mediacounts directory so ezachte's scripts can ...

2018-01-23 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/405938 )

Change subject: Chmod yearly mediacounts directory so ezachte's scripts can 
write top1000 files
..

Chmod yearly mediacounts directory so ezachte's scripts can write top1000 files

Bug: T185419
Change-Id: Ie3849f6869b40c4174d30bfeff8c8f04b5a5d079
---
M oozie/mediacounts/archive/workflow.xml
1 file changed, 12 insertions(+), 0 deletions(-)
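
A minimal local sketch of what the added action accomplishes (paths here are illustrative stand-ins, not the real mediacounts archive layout; the actual change does this in HDFS via an Oozie action):

```shell
#!/bin/sh
# Sketch only: make a yearly directory group-writeable so another user in
# the same group can create files (e.g. top1000 files) inside it.
# In HDFS the equivalent would be roughly: hdfs dfs -chmod 775 <archive>/<year>
year_dir="$(mktemp -d)/2018"      # hypothetical stand-in for the yearly dir
mkdir -p "$year_dir"
chmod 775 "$year_dir"             # rwxrwxr-x: group members may write
stat -c '%a' "$year_dir"          # GNU stat: prints 775
```

Group-writeable (775) rather than world-writeable (777) keeps the directory restricted to the owning group while still letting other analytics accounts in that group write.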


  git pull ssh://gerrit.wikimedia.org:29418/analytics/refinery 
refs/changes/38/405938/1

diff --git a/oozie/mediacounts/archive/workflow.xml 
b/oozie/mediacounts/archive/workflow.xml
index 1c0b11d..fdc4ec0 100644
--- a/oozie/mediacounts/archive/workflow.xml
+++ b/oozie/mediacounts/archive/workflow.xml
@@ -155,6 +155,18 @@
 
 
 
+
+
+
+
+
+
+<!--
+We need newly created year directories to be group writeable so @ezachte
+can create top1000 files.  T185419
+-->
+
+
 
 
 

-- 
To view, visit https://gerrit.wikimedia.org/r/405938
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: Ie3849f6869b40c4174d30bfeff8c8f04b5a5d079
Gerrit-PatchSet: 1
Gerrit-Project: analytics/refinery
Gerrit-Branch: master
Gerrit-Owner: Ottomata 

___
MediaWiki-commits mailing list
MediaWiki-commits@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/mediawiki-commits


[MediaWiki-commits] [Gerrit] mediawiki/vagrant[master]: Update Kafka to 1.0 with SSL support

2018-01-23 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/404870 )

Change subject: Update Kafka to 1.0 with SSL support
..


Update Kafka to 1.0 with SSL support

This will make testing MediaWiki integration with Kafka and SSL easier.

Bug: T126494
Change-Id: I93d7c7cb98664e3e41b5a383ba8f9976a0b09099
---
M puppet/modules/kafka/files/kafka.profile.sh
M puppet/modules/kafka/files/kafka.sh
D puppet/modules/kafka/files/server.properties
A puppet/modules/kafka/files/ssl/kafka_broker/ca.crt.pem
A puppet/modules/kafka/files/ssl/kafka_broker/kafka_broker.crt.pem
A puppet/modules/kafka/files/ssl/kafka_broker/kafka_broker.csr.pem
A puppet/modules/kafka/files/ssl/kafka_broker/kafka_broker.key.private.pem
A puppet/modules/kafka/files/ssl/kafka_broker/kafka_broker.key.public.pem
A puppet/modules/kafka/files/ssl/kafka_broker/kafka_broker.keystore.jks
A puppet/modules/kafka/files/ssl/kafka_broker/kafka_broker.keystore.p12
A puppet/modules/kafka/files/ssl/kafka_broker/truststore.jks
A puppet/modules/kafka/files/ssl/local_ca/ca.crt.pem
A puppet/modules/kafka/files/ssl/local_ca/local_ca.crt.pem
A puppet/modules/kafka/files/ssl/local_ca/local_ca.csr.pem
A puppet/modules/kafka/files/ssl/local_ca/local_ca.key.private.pem
A puppet/modules/kafka/files/ssl/local_ca/local_ca.key.public.pem
A puppet/modules/kafka/files/ssl/local_ca/local_ca.keystore.jks
A puppet/modules/kafka/files/ssl/local_ca/local_ca.keystore.p12
A puppet/modules/kafka/files/ssl/local_ca/truststore.jks
A puppet/modules/kafka/files/ssl/test0/ca.crt.pem
A puppet/modules/kafka/files/ssl/test0/test0.crt.pem
A puppet/modules/kafka/files/ssl/test0/test0.csr.pem
A puppet/modules/kafka/files/ssl/test0/test0.key.private.pem
A puppet/modules/kafka/files/ssl/test0/test0.key.public.pem
A puppet/modules/kafka/files/ssl/test0/test0.keystore.jks
A puppet/modules/kafka/files/ssl/test0/test0.keystore.p12
A puppet/modules/kafka/files/ssl/test0/truststore.jks
M puppet/modules/kafka/manifests/init.pp
A puppet/modules/kafka/templates/server.properties.erb
M puppet/modules/kafka/templates/systemd/kafka.erb
M puppet/modules/role/settings/kafka.yaml
31 files changed, 421 insertions(+), 119 deletions(-)

Approvals:
  Ottomata: Verified; Looks good to me, approved
  BryanDavis: Looks good to me, but someone else must approve
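
As a hedged sketch of how fixtures like the `ssl/local_ca` files above are typically produced (an assumed recipe for illustration; not necessarily the exact commands used to generate this patch's files), a throwaway local CA can be created with openssl:

```shell
#!/bin/sh
# Sketch (all filenames assumed, mirroring the added fixtures): generate a
# self-signed local CA key and certificate without a passphrase.
tmp="$(mktemp -d)"
openssl req -new -x509 -nodes -days 365 -newkey rsa:2048 \
    -subj '/CN=local_ca' \
    -keyout "$tmp/local_ca.key.private.pem" \
    -out "$tmp/ca.crt.pem" 2>/dev/null
# A broker would then import ca.crt.pem into its truststore (for example
# with keytool -importcert) so it trusts certificates signed by this CA.
openssl x509 -in "$tmp/ca.crt.pem" -noout -subject
```

Checking in pre-generated keystores like this only makes sense for a development environment such as MediaWiki-Vagrant, where the certificates protect nothing real.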



diff --git a/puppet/modules/kafka/files/kafka.profile.sh 
b/puppet/modules/kafka/files/kafka.profile.sh
index ab3ed80..f1f2a8a 100644
--- a/puppet/modules/kafka/files/kafka.profile.sh
+++ b/puppet/modules/kafka/files/kafka.profile.sh
@@ -3,5 +3,6 @@
 # These environment variables are used by the kafka CLI
 # so that you don't have to provide them as arguments
 # every time you use it.
-export ZOOKEEPER_URL=localhost:2181
-export BROKER_LIST=localhost:9092
+export KAFKA_ZOOKEEPER_URL=localhost:2181/kafka
+export KAFKA_BOOTSTRAP_SERVERS=localhost:9092
+export KAFKA_JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
diff --git a/puppet/modules/kafka/files/kafka.sh 
b/puppet/modules/kafka/files/kafka.sh
index e7db2bb..e2c1c8b 100755
--- a/puppet/modules/kafka/files/kafka.sh
+++ b/puppet/modules/kafka/files/kafka.sh
@@ -1,5 +1,7 @@
 #!/bin/bash
 
+# NOTE: This file is managed by Puppet.
+
 SCRIPT_NAME=$(basename "$0")
 
 commands=$(ls /usr/bin/kafka-* | xargs -n 1 basename | sed 's@kafka-@  @g')
@@ -8,9 +10,9 @@
 $SCRIPT_NAME  [options]
 
 Handy wrapper around various kafka-* scripts.  Set the environment variables
-ZOOKEEPER_URL and BROKER_LIST so you don't have to keep typing
---zookeeper-connect or --broker-list each time you want to use a kafka-*
-script.
+KAFKA_ZOOKEEPER_URL, KAFKA_BOOTSTRAP_SERVERS so you don't have to keep typing
+--zookeeper-connect, --broker-list or --bootstrap-server each time you want to
+use a kafka-* script.
 
 Usage:
 
@@ -20,11 +22,18 @@
 $commands
 
 Environment Variables:
-  ZOOKEEPER_URL - If this is set, any commands that take a --zookeeper flag will be given this value.
-  BROKER_LIST   - If this is set, any commands that take a --broker-list flag will be given this value.
+  KAFKA_JAVA_HOME - Value of JAVA_HOME to use for invoking Kafka commands.
+  KAFKA_ZOOKEEPER_URL - If this is set, any commands that take a --zookeeper
+flag will be given this value.
+  KAFKA_BOOTSTRAP_SERVERS - If this is set, any commands that take a --broker-list or
+--bootstrap-server flag will be given this value.
+Also any command that takes --authorizer-properties
+will get the correct zookeeper.connect value.
+
 "
 
-if [ -z "${1}" -o ${1:0:1} == '-' ]; then
+# Print usage if no  given, or $1 starts with '-'
+if [ -z "${1}" -o "${1:0:1}" == '-' ]; then
 echo "${USAGE}"
 exit 1
 fi
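
The dispatch pattern this wrapper implements can be sketched as follows (a simplified model; the command-to-flag mapping shown here is assumed for illustration and is not the full logic of kafka.sh):

```shell
#!/bin/sh
# Simplified model of the kafka.sh wrapper: prepend connection flags taken
# from environment variables, then delegate to the matching kafka-* script.
kafka() {
    cmd="$1"; shift
    case "$cmd" in
        console-producer|console-consumer)
            set -- --bootstrap-server "${KAFKA_BOOTSTRAP_SERVERS:-localhost:9092}" "$@" ;;
        topics|configs)
            set -- --zookeeper "${KAFKA_ZOOKEEPER_URL:-localhost:2181/kafka}" "$@" ;;
    esac
    # The real wrapper would exec "kafka-$cmd" "$@"; echo for illustration.
    echo "kafka-$cmd $*"
}
kafka topics --list
```

The benefit is that interactive users never retype `--zookeeper` or `--bootstrap-server`: the Vagrant profile exports the right values once, and every kafka-* invocation picks them up.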
@@ -33,43 +42,77 @@
 command="kafka-${1}"
 shift
 
+# Export JAVA_HOME as KAFKA_JAVA_HOME if it is set.
+# This makes kafka-run-class use the 

[MediaWiki-commits] [Gerrit] operations...spark2[debian]: 2.2.1 binary release for Hadoop 2.6

2018-01-23 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/405894 )

Change subject: 2.2.1 binary release for Hadoop 2.6
..

2.2.1 binary release for Hadoop 2.6

Bug: T185581
Change-Id: Iffc3a0b77b257e3e4a956257a1a1a654895f2cb1
---
M debian/changelog
M debian/source/include-binaries
2 files changed, 78 insertions(+), 70 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/debs/spark2 
refs/changes/94/405894/1

diff --git a/debian/changelog b/debian/changelog
index e7233a3..b1c327d 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+spark2 (2.2.1-bin-hadoop2.6-1~jessie1) jessie-wikimedia; urgency=medium
+
+  * 2.2.1 binary release for Hadoop 2.6
+
+ -- Andrew Otto (WMF)   Tue, 23 Jan 2018 15:46:46 +
+
 spark2 (2.1.2-bin-hadoop2.6-3~jessie1) jessie-wikimedia; urgency=low
 
   * 2.1.2 -3 release for Hadoop 2.6
diff --git a/debian/source/include-binaries b/debian/source/include-binaries
index 527a4b2..baa19da 100644
--- a/debian/source/include-binaries
+++ b/debian/source/include-binaries
@@ -1,29 +1,12 @@
-R/lib/SparkR/Meta/Rd.rds
-R/lib/SparkR/Meta/features.rds
-R/lib/SparkR/Meta/hsearch.rds
-R/lib/SparkR/Meta/links.rds
-R/lib/SparkR/Meta/nsInfo.rds
-R/lib/SparkR/Meta/package.rds
-R/lib/SparkR/Meta/vignette.rds
-R/lib/SparkR/R/SparkR.rdb
-R/lib/SparkR/R/SparkR.rdx
-R/lib/SparkR/help/SparkR.rdb
-R/lib/SparkR/help/SparkR.rdx
-R/lib/SparkR/help/aliases.rds
-R/lib/SparkR/help/paths.rds
-R/lib/sparkr.zip
-jars/JavaEWAH-0.3.2.jar
-jars/RoaringBitmap-0.5.11.jar
-jars/ST4-4.0.4.jar
 jars/activation-1.1.1.jar
 jars/antlr-2.7.7.jar
-jars/antlr-runtime-3.4.jar
 jars/antlr4-runtime-4.5.3.jar
+jars/antlr-runtime-3.4.jar
 jars/aopalliance-1.0.jar
 jars/aopalliance-repackaged-2.4.0-b34.jar
-jars/apache-log4j-extras-1.2.17.jar
 jars/apacheds-i18n-2.0.0-M15.jar
 jars/apacheds-kerberos-codec-2.0.0-M15.jar
+jars/apache-log4j-extras-1.2.17.jar
 jars/api-asn1-api-1.0.0-M20.jar
 jars/api-util-1.0.0-M20.jar
 jars/arpack_combined_all-0.1.jar
@@ -33,13 +16,13 @@
 jars/base64-2.3.8.jar
 jars/bcprov-jdk15on-1.51.jar
 jars/bonecp-0.8.0.RELEASE.jar
-jars/breeze-macros_2.11-0.12.jar
-jars/breeze_2.11-0.12.jar
+jars/breeze_2.11-0.13.2.jar
+jars/breeze-macros_2.11-0.13.2.jar
 jars/calcite-avatica-1.2.0-incubating.jar
 jars/calcite-core-1.2.0-incubating.jar
 jars/calcite-linq4j-1.2.0-incubating.jar
-jars/chill-java-0.8.0.jar
 jars/chill_2.11-0.8.0.jar
+jars/chill-java-0.8.0.jar
 jars/commons-beanutils-1.7.0.jar
 jars/commons-beanutils-core-1.8.0.jar
 jars/commons-cli-1.2.jar
@@ -73,21 +56,21 @@
 jars/guava-14.0.1.jar
 jars/guice-3.0.jar
 jars/guice-servlet-3.0.jar
-jars/hadoop-annotations-2.6.4.jar
-jars/hadoop-auth-2.6.4.jar
-jars/hadoop-client-2.6.4.jar
-jars/hadoop-common-2.6.4.jar
-jars/hadoop-hdfs-2.6.4.jar
-jars/hadoop-mapreduce-client-app-2.6.4.jar
-jars/hadoop-mapreduce-client-common-2.6.4.jar
-jars/hadoop-mapreduce-client-core-2.6.4.jar
-jars/hadoop-mapreduce-client-jobclient-2.6.4.jar
-jars/hadoop-mapreduce-client-shuffle-2.6.4.jar
-jars/hadoop-yarn-api-2.6.4.jar
-jars/hadoop-yarn-client-2.6.4.jar
-jars/hadoop-yarn-common-2.6.4.jar
-jars/hadoop-yarn-server-common-2.6.4.jar
-jars/hadoop-yarn-server-web-proxy-2.6.4.jar
+jars/hadoop-annotations-2.6.5.jar
+jars/hadoop-auth-2.6.5.jar
+jars/hadoop-client-2.6.5.jar
+jars/hadoop-common-2.6.5.jar
+jars/hadoop-hdfs-2.6.5.jar
+jars/hadoop-mapreduce-client-app-2.6.5.jar
+jars/hadoop-mapreduce-client-common-2.6.5.jar
+jars/hadoop-mapreduce-client-core-2.6.5.jar
+jars/hadoop-mapreduce-client-jobclient-2.6.5.jar
+jars/hadoop-mapreduce-client-shuffle-2.6.5.jar
+jars/hadoop-yarn-api-2.6.5.jar
+jars/hadoop-yarn-client-2.6.5.jar
+jars/hadoop-yarn-common-2.6.5.jar
+jars/hadoop-yarn-server-common-2.6.5.jar
+jars/hadoop-yarn-server-web-proxy-2.6.5.jar
 jars/hive-beeline-1.2.1.spark2.jar
 jars/hive-cli-1.2.1.spark2.jar
 jars/hive-exec-1.2.1.spark2.jar
@@ -110,11 +93,12 @@
 jars/jackson-module-scala_2.11-2.6.5.jar
 jars/jackson-xc-1.9.13.jar
 jars/janino-3.0.0.jar
-jars/java-xmlbuilder-1.0.jar
+jars/JavaEWAH-0.3.2.jar
 jars/javassist-3.18.1-GA.jar
 jars/javax.annotation-api-1.2.jar
 jars/javax.inject-1.jar
 jars/javax.inject-2.4.0-b34.jar
+jars/java-xmlbuilder-1.0.jar
 jars/javax.servlet-api-3.1.0.jar
 jars/javax.ws.rs-api-2.0.1.jar
 jars/javolution-5.5.1.jar
@@ -148,6 +132,8 @@
 jars/libthrift-0.9.3.jar
 jars/log4j-1.2.17.jar
 jars/lz4-1.3.0.jar
+jars/machinist_2.11-0.6.1.jar
+jars/macro-compat_2.11-1.1.1.jar
 jars/mail-1.4.7.jar
 jars/mesos-1.0.0-shaded-protobuf.jar
 jars/metrics-core-3.1.2.jar
@@ -156,58 +142,60 @@
 jars/metrics-jvm-3.1.2.jar
 jars/minlog-1.3.0.jar
 jars/mx4j-3.0.2.jar
-jars/netty-3.8.0.Final.jar
+jars/netty-3.9.9.Final.jar
 jars/netty-all-4.0.43.Final.jar
 jars/objenesis-2.1.jar
 jars/opencsv-2.3.jar
 jars/oro-2.0.8.jar
 jars/osgi-resource-locator-1.0.1.jar
-jars/paranamer-2.3.jar
-jars/parquet-column-1.8.1.jar

[MediaWiki-commits] [Gerrit] operations/puppet[production]: Add IPv6 to Kafka Jumbo brokers

2018-01-23 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/405891 )

Change subject: Add IPv6 to Kafka Jumbo brokers
..

Add IPv6 to Kafka Jumbo brokers

Bug: T185262
Change-Id: I7f5c809881d0cf62b04575e071ec74f0595f20ad
---
M modules/role/manifests/kafka/jumbo/broker.pp
1 file changed, 1 insertion(+), 0 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet 
refs/changes/91/405891/1

diff --git a/modules/role/manifests/kafka/jumbo/broker.pp 
b/modules/role/manifests/kafka/jumbo/broker.pp
index 2b283d4..f191f25 100644
--- a/modules/role/manifests/kafka/jumbo/broker.pp
+++ b/modules/role/manifests/kafka/jumbo/broker.pp
@@ -5,6 +5,7 @@
 system::role { 'role::kafka::jumbo::broker':
 description => "Kafka Broker in a 'jumbo' Kafka cluster",
 }
+interface::add_ip6_mapped { 'main': }
 
 # Something in labs is including standard.  Only include if not already 
defined.
 if !defined(Class['::standard']) {

-- 
To view, visit https://gerrit.wikimedia.org/r/405891
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: I7f5c809881d0cf62b04575e071ec74f0595f20ad
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 



[MediaWiki-commits] [Gerrit] analytics...source[master]: [WIP] Add configurable transform function to JSONRefine

2018-01-22 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/405800 )

Change subject: [WIP] Add configurable transform function to JSONRefine
..

[WIP] Add configurable transform function to JSONRefine

Bug: T185237
Change-Id: If1272f7d354e94a0a140f71a9135389131c8a1eb
---
A 
refinery-core/src/main/scala/org/wikimedia/analytics/refinery/core/ReflectUtils.scala
M 
refinery-core/src/main/scala/org/wikimedia/analytics/refinery/core/SparkJsonToHive.scala
A 
refinery-core/src/test/scala/org/wikimedia/analytics/refinery/core/TestReflectUtils.scala
M 
refinery-job/src/main/scala/org/wikimedia/analytics/refinery/job/JsonRefine.scala
4 files changed, 145 insertions(+), 36 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/analytics/refinery/source 
refs/changes/00/405800/1

diff --git 
a/refinery-core/src/main/scala/org/wikimedia/analytics/refinery/core/ReflectUtils.scala
 
b/refinery-core/src/main/scala/org/wikimedia/analytics/refinery/core/ReflectUtils.scala
new file mode 100644
index 000..7cfe209
--- /dev/null
+++ 
b/refinery-core/src/main/scala/org/wikimedia/analytics/refinery/core/ReflectUtils.scala
@@ -0,0 +1,32 @@
+package org.wikimedia.analytics.refinery.core
+
+import scala.reflect.runtime.universe
+
+object ReflectUtils {
+
+/**
+  * Given a fully qualified String package.ObjectName and String method name, this
+  * Function will return a scala.reflect.runtime.universe.MethodMirror that can be
+  * used for calling the method on the object.  Note that MethodMirror is not a direct
+  * reference to the actual method, and as such does not have compile time type
+  * and signature checking.  You must ensure that you call the method with exactly the
+  * same arguments and types that the method expects, or you will get a runtime exception.
+  *
+  * @param moduleName Fully qualified name for an object, e.g. org.wikimedia.analytics.refinery.core.DeduplicateEventLogging
+  * @param methodName Name of method in the object.  Default "apply".
+  * @return
+  */
+def getStaticMethodMirror(moduleName: String, methodName: String = "apply"): universe.MethodMirror = {
+    val mirror = universe.runtimeMirror(getClass.getClassLoader)
+    val module = mirror.staticModule(moduleName)
+    val method = module.typeSignature.member(universe.newTermName(methodName)).asMethod
+    val methodMirror = mirror.reflect(mirror.reflectModule(module).instance).reflectMethod(method)
+    if (!methodMirror.symbol.isMethod || !methodMirror.symbol.isStatic) {
+        throw new RuntimeException(
+            s"Cannot get static method for $moduleName.$methodName, it is not a static method"
+        )
+    }
+    methodMirror
+}
+
+}
diff --git 
a/refinery-core/src/main/scala/org/wikimedia/analytics/refinery/core/SparkJsonToHive.scala
 
b/refinery-core/src/main/scala/org/wikimedia/analytics/refinery/core/SparkJsonToHive.scala
index 8369104..dfb415c 100644
--- 
a/refinery-core/src/main/scala/org/wikimedia/analytics/refinery/core/SparkJsonToHive.scala
+++ 
b/refinery-core/src/main/scala/org/wikimedia/analytics/refinery/core/SparkJsonToHive.scala
@@ -4,9 +4,6 @@
 
 import scala.util.control.Exception.{allCatch, ignoring}
 
-import org.apache.hadoop.fs.Path
-
-
 import org.apache.hadoop.hive.metastore.api.AlreadyExistsException
 
 import org.apache.spark.sql.SQLContext
@@ -19,7 +16,6 @@
 // This allows us to use these types with an extended API
 // that includes schema merging and Hive DDL statement generation.
 import SparkSQLHiveExtensions._
-
 
 /**
   * Converts arbitrary JSON to Hive Parquet by 'evolving' the Hive table to
@@ -66,29 +62,37 @@
   * Reads inputPath as JSON data, creates or alters tableName in Hive to match the inferred
   * schema of the input JSON data, and then inserts the data into the table.
   *
-  * @param hiveContext  Spark HiveContext
+  * @param hiveContext   Spark HiveContext
   *
-  * @param inputPath    Path to JSON data
+  * @param inputPath     Path to JSON data
   *
   *
-  * @param partition    HivePartition.  This helper class contains
-  *                     database and table name, as well as external location
-  *                     and partition keys and values.
+  * @param partition     HivePartition.  This helper class contains
+  *                      database and table name, as well as external location
+  *                      and partition keys and values.
   *
-  * @param isSequenceFile   If true, inputPath is expected to contain JSON in
-  *                     Hadoop Sequence Files, else JSON in text files.
+  * @param isSequenceFile    If true, inputPath is expected to contain JSON in
+  *                      Hadoop Sequence Files, else JSON in text
[MediaWiki-commits] [Gerrit] operations/puppet[production]: Add druid defaults for easier setup in Cloud VPS

2018-01-18 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/405021 )

Change subject: Add druid defaults for easier setup in Cloud VPS
..


Add druid defaults for easier setup in Cloud VPS

Should be no-op in prod.

Change-Id: I211a795b2acde192a090e89976ccd9ad3ce8d1a0
---
M modules/profile/manifests/druid/broker.pp
M modules/profile/manifests/druid/common.pp
M modules/profile/manifests/druid/coordinator.pp
M modules/profile/manifests/druid/historical.pp
M modules/profile/manifests/druid/middlemanager.pp
M modules/profile/manifests/druid/overlord.pp
6 files changed, 29 insertions(+), 29 deletions(-)

Approvals:
  Ottomata: Looks good to me, approved
  jenkins-bot: Verified



diff --git a/modules/profile/manifests/druid/broker.pp 
b/modules/profile/manifests/druid/broker.pp
index 7ab39a4..6a799e2 100644
--- a/modules/profile/manifests/druid/broker.pp
+++ b/modules/profile/manifests/druid/broker.pp
@@ -6,11 +6,11 @@
 # have finer control over how Druid accepts queries.
 #
 class profile::druid::broker(
-$properties = hiera('profile::druid::broker::properties'),
-$env= hiera('profile::druid::broker::env'),
-$ferm_srange= hiera('profile::druid::broker::ferm_srange'),
-$daemon_autoreload  = hiera('profile::druid::daemons_autoreload'),
-$monitoring_enabled = hiera('profile::druid::broker::monitoring_enabled'),
+$properties = hiera('profile::druid::broker::properties', {}),
+$env= hiera('profile::druid::broker::env', {}),
+$ferm_srange= hiera('profile::druid::broker::ferm_srange', 
'$DOMAIN_NETWORKS'),
+$daemon_autoreload  = hiera('profile::druid::daemons_autoreload', true),
+$monitoring_enabled = hiera('profile::druid::broker::monitoring_enabled', 
false),
 ) {
 
 require ::profile::druid::common
diff --git a/modules/profile/manifests/druid/common.pp 
b/modules/profile/manifests/druid/common.pp
index 59a88d4..864977d 100644
--- a/modules/profile/manifests/druid/common.pp
+++ b/modules/profile/manifests/druid/common.pp
@@ -12,11 +12,11 @@
 class profile::druid::common(
 $druid_cluster_name = 
hiera('profile::druid::common::druid_cluster_name'),
 $zookeeper_cluster_name = 
hiera('profile::druid::common::zookeeper_cluster_name'),
-$private_properties = 
hiera('profile::druid::common::private_properties'),
-$properties = 
hiera('profile::druid::common::properties'),
+$private_properties = 
hiera('profile::druid::common::private_properties', {}),
+$properties = 
hiera('profile::druid::common::properties', {}),
 $zookeeper_clusters = hiera('zookeeper_clusters'),
-$metadata_storage_database_name = 
hiera('profile::druid::common:metadata_storage_database_name'),
-$use_cdh= hiera('profile::druid::common::use_cdh'),
+$metadata_storage_database_name = 
hiera('profile::druid::common:metadata_storage_database_name', 'druid'),
+$use_cdh= hiera('profile::druid::common::use_cdh', 
false),
 ) {
 # Need Java before Druid is installed.
 require ::profile::java::analytics
diff --git a/modules/profile/manifests/druid/coordinator.pp 
b/modules/profile/manifests/druid/coordinator.pp
index 67ce71a..01e2efa 100644
--- a/modules/profile/manifests/druid/coordinator.pp
+++ b/modules/profile/manifests/druid/coordinator.pp
@@ -6,11 +6,11 @@
 # have finer control over how Druid accepts queries.
 #
 class profile::druid::coordinator(
-$properties = hiera('profile::druid::coordinator::properties'),
-$env= hiera('profile::druid::coordinator::env'),
-$daemon_autoreload  = hiera('profile::druid::daemons_autoreload'),
-$ferm_srange= hiera('profile::druid::coordinator::ferm_srange'),
-$monitoring_enabled = 
hiera('profile::druid::coordinator::monitoring_enabled'),
+$properties = hiera('profile::druid::coordinator::properties', {}),
+$env= hiera('profile::druid::coordinator::env', {}),
+$daemon_autoreload  = hiera('profile::druid::daemons_autoreload', true),
+$ferm_srange= hiera('profile::druid::coordinator::ferm_srange', 
'$DOMAIN_NETWORKS'),
+$monitoring_enabled = 
hiera('profile::druid::coordinator::monitoring_enabled', false),
 ) {
 
 require ::profile::druid::common
diff --git a/modules/profile/manifests/druid/historical.pp 
b/modules/profile/manifests/druid/historical.pp
index 1663278..e9fd88e 100644
--- a/modules/profile/manifests/druid/historical.pp
+++ b/modules/profile/manifests/druid/historical.pp
@@ -1,11 +1,11 @@
 # Class: profile::druid::historical
 #
 class profile::druid::historical(
-$properties = hiera('profile::druid::historical::properties'),
-$env= hiera('profile::druid::historical::env'),
-

[MediaWiki-commits] [Gerrit] operations/puppet[production]: Add druid defaults for easier setup in Cloud VPS

2018-01-18 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/405021 )

Change subject: Add druid defaults for easier setup in Cloud VPS
..

Add druid defaults for easier setup in Cloud VPS

Should be no-op in prod.

Change-Id: I211a795b2acde192a090e89976ccd9ad3ce8d1a0
---
M modules/profile/manifests/druid/broker.pp
M modules/profile/manifests/druid/common.pp
M modules/profile/manifests/druid/coordinator.pp
M modules/profile/manifests/druid/historical.pp
M modules/profile/manifests/druid/middlemanager.pp
M modules/profile/manifests/druid/overlord.pp
6 files changed, 29 insertions(+), 29 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet 
refs/changes/21/405021/1

diff --git a/modules/profile/manifests/druid/broker.pp 
b/modules/profile/manifests/druid/broker.pp
index 7ab39a4..6a799e2 100644
--- a/modules/profile/manifests/druid/broker.pp
+++ b/modules/profile/manifests/druid/broker.pp
@@ -6,11 +6,11 @@
 # have finer control over how Druid accepts queries.
 #
 class profile::druid::broker(
-$properties = hiera('profile::druid::broker::properties'),
-$env= hiera('profile::druid::broker::env'),
-$ferm_srange= hiera('profile::druid::broker::ferm_srange'),
-$daemon_autoreload  = hiera('profile::druid::daemons_autoreload'),
-$monitoring_enabled = hiera('profile::druid::broker::monitoring_enabled'),
+$properties = hiera('profile::druid::broker::properties', {}),
+$env= hiera('profile::druid::broker::env', {}),
+$ferm_srange= hiera('profile::druid::broker::ferm_srange', 
'$DOMAIN_NETWORKS'),
+$daemon_autoreload  = hiera('profile::druid::daemons_autoreload', true),
+$monitoring_enabled = hiera('profile::druid::broker::monitoring_enabled', 
false),
 ) {
 
 require ::profile::druid::common
diff --git a/modules/profile/manifests/druid/common.pp 
b/modules/profile/manifests/druid/common.pp
index 59a88d4..864977d 100644
--- a/modules/profile/manifests/druid/common.pp
+++ b/modules/profile/manifests/druid/common.pp
@@ -12,11 +12,11 @@
 class profile::druid::common(
 $druid_cluster_name = 
hiera('profile::druid::common::druid_cluster_name'),
 $zookeeper_cluster_name = 
hiera('profile::druid::common::zookeeper_cluster_name'),
-$private_properties = 
hiera('profile::druid::common::private_properties'),
-$properties = 
hiera('profile::druid::common::properties'),
+$private_properties = 
hiera('profile::druid::common::private_properties', {}),
+$properties = 
hiera('profile::druid::common::properties', {}),
 $zookeeper_clusters = hiera('zookeeper_clusters'),
-$metadata_storage_database_name = 
hiera('profile::druid::common:metadata_storage_database_name'),
-$use_cdh= hiera('profile::druid::common::use_cdh'),
+$metadata_storage_database_name = 
hiera('profile::druid::common:metadata_storage_database_name', 'druid'),
+$use_cdh= hiera('profile::druid::common::use_cdh', 
false),
 ) {
 # Need Java before Druid is installed.
 require ::profile::java::analytics
diff --git a/modules/profile/manifests/druid/coordinator.pp 
b/modules/profile/manifests/druid/coordinator.pp
index 67ce71a..01e2efa 100644
--- a/modules/profile/manifests/druid/coordinator.pp
+++ b/modules/profile/manifests/druid/coordinator.pp
@@ -6,11 +6,11 @@
 # have finer control over how Druid accepts queries.
 #
 class profile::druid::coordinator(
-$properties = hiera('profile::druid::coordinator::properties'),
-$env= hiera('profile::druid::coordinator::env'),
-$daemon_autoreload  = hiera('profile::druid::daemons_autoreload'),
-$ferm_srange= hiera('profile::druid::coordinator::ferm_srange'),
-$monitoring_enabled = 
hiera('profile::druid::coordinator::monitoring_enabled'),
+$properties = hiera('profile::druid::coordinator::properties', {}),
+$env= hiera('profile::druid::coordinator::env', {}),
+$daemon_autoreload  = hiera('profile::druid::daemons_autoreload', true),
+$ferm_srange= hiera('profile::druid::coordinator::ferm_srange', 
'$DOMAIN_NETWORKS'),
+$monitoring_enabled = 
hiera('profile::druid::coordinator::monitoring_enabled', false),
 ) {
 
 require ::profile::druid::common
diff --git a/modules/profile/manifests/druid/historical.pp 
b/modules/profile/manifests/druid/historical.pp
index 1663278..e9fd88e 100644
--- a/modules/profile/manifests/druid/historical.pp
+++ b/modules/profile/manifests/druid/historical.pp
@@ -1,11 +1,11 @@
 # Class: profile::druid::historical
 #
 class profile::druid::historical(
-$properties = hiera('profile::druid::historical::properties'),
-$env= hiera('profile::druid::historical::env'),
-  

[MediaWiki-commits] [Gerrit] operations/puppet[production]: Use jumbo Kafka for EventStreams in deployment-prep

2018-01-18 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/405014 )

Change subject: Use jumbo Kafka for EventStreams in deployment-prep
..


Use jumbo Kafka for EventStreams in deployment-prep

No-op in prod.

Bug: T185225
Change-Id: I05b13521a9f7086733983d426a8fa89d857547c9
---
M hieradata/labs/deployment-prep/common.yaml
M modules/profile/manifests/eventstreams.pp
2 files changed, 3 insertions(+), 4 deletions(-)

Approvals:
  Ottomata: Looks good to me, approved
  jenkins-bot: Verified



diff --git a/hieradata/labs/deployment-prep/common.yaml 
b/hieradata/labs/deployment-prep/common.yaml
index 7700115..ba33786 100644
--- a/hieradata/labs/deployment-prep/common.yaml
+++ b/hieradata/labs/deployment-prep/common.yaml
@@ -362,7 +362,7 @@
 profile::recommendation_api::wdqs_uri: http://wdqs-test.wmflabs.org
 
 # Eventstreams config
-profile::eventstreams::kafka_cluster_name: main
+profile::eventstreams::kafka_cluster_name: jumbo
 profile::eventstreams::streams:
   test:
 topics: ["%{::site}.test.event"]
@@ -370,7 +370,6 @@
 topics: ["%{::site}.mediawiki.revision-create"]
   recentchange:
 topics: ["%{::site}.mediawiki.recentchange"]
-profile::eventstreams::rdkafka_config: {}
 
 cache::be_transient_gb: 0
 cache::fe_transient_gb: 0
diff --git a/modules/profile/manifests/eventstreams.pp 
b/modules/profile/manifests/eventstreams.pp
index d340984..92a618e 100644
--- a/modules/profile/manifests/eventstreams.pp
+++ b/modules/profile/manifests/eventstreams.pp
@@ -34,8 +34,8 @@
 # filtertags: labs-project-deployment-prep
 class profile::eventstreams(
 $kafka_cluster_name = hiera('profile::eventstreams::kafka_cluster_name'),
-$streams = hiera('profile::eventstreams::streams'),
-$rdkafka_config = hiera('profile::eventstreams::rdkafka_config')
+$streams= hiera('profile::eventstreams::streams'),
+$rdkafka_config = hiera('profile::eventstreams::rdkafka_config', {})
 ) {
 $kafka_config = kafka_config($kafka_cluster_name)
 $broker_list = $kafka_config['brokers']['string']

-- 
To view, visit https://gerrit.wikimedia.org/r/405014
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: merged
Gerrit-Change-Id: I05b13521a9f7086733983d426a8fa89d857547c9
Gerrit-PatchSet: 2
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 
Gerrit-Reviewer: Ottomata 
Gerrit-Reviewer: jenkins-bot <>



[MediaWiki-commits] [Gerrit] operations/puppet[production]: Use jumbo Kafka for EventStreams in deployment-prep

2018-01-18 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/405014 )

Change subject: Use jumbo Kafka for EventStreams in deployment-prep
..

Use jumbo Kafka for EventStreams in deployment-prep

No-op in prod.

Bug: T185225
Change-Id: I05b13521a9f7086733983d426a8fa89d857547c9
---
M hieradata/labs/deployment-prep/common.yaml
M modules/profile/manifests/eventstreams.pp
2 files changed, 3 insertions(+), 4 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet 
refs/changes/14/405014/1

diff --git a/hieradata/labs/deployment-prep/common.yaml 
b/hieradata/labs/deployment-prep/common.yaml
index 7700115..ba33786 100644
--- a/hieradata/labs/deployment-prep/common.yaml
+++ b/hieradata/labs/deployment-prep/common.yaml
@@ -362,7 +362,7 @@
 profile::recommendation_api::wdqs_uri: http://wdqs-test.wmflabs.org
 
 # Eventstreams config
-profile::eventstreams::kafka_cluster_name: main
+profile::eventstreams::kafka_cluster_name: jumbo
 profile::eventstreams::streams:
   test:
 topics: ["%{::site}.test.event"]
@@ -370,7 +370,6 @@
 topics: ["%{::site}.mediawiki.revision-create"]
   recentchange:
 topics: ["%{::site}.mediawiki.recentchange"]
-profile::eventstreams::rdkafka_config: {}
 
 cache::be_transient_gb: 0
 cache::fe_transient_gb: 0
diff --git a/modules/profile/manifests/eventstreams.pp 
b/modules/profile/manifests/eventstreams.pp
index d340984..92a618e 100644
--- a/modules/profile/manifests/eventstreams.pp
+++ b/modules/profile/manifests/eventstreams.pp
@@ -34,8 +34,8 @@
 # filtertags: labs-project-deployment-prep
 class profile::eventstreams(
 $kafka_cluster_name = hiera('profile::eventstreams::kafka_cluster_name'),
-$streams = hiera('profile::eventstreams::streams'),
-$rdkafka_config = hiera('profile::eventstreams::rdkafka_config')
+$streams= hiera('profile::eventstreams::streams'),
+$rdkafka_config = hiera('profile::eventstreams::rdkafka_config', {})
 ) {
 $kafka_config = kafka_config($kafka_cluster_name)
 $broker_list = $kafka_config['brokers']['string']

-- 
To view, visit https://gerrit.wikimedia.org/r/405014
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: I05b13521a9f7086733983d426a8fa89d857547c9
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 

___
MediaWiki-commits mailing list
MediaWiki-commits@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/mediawiki-commits
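
The manifest change above moves the empty-hash default for rdkafka_config out of hieradata and into the hiera() call itself. A minimal sketch of that lookup-with-default pattern (class trimmed to the one parameter):

```puppet
# Sketch only: hiera() with an inline default.  If the key is missing
# from hieradata, the second argument ({}) is returned instead of
# failing catalog compilation, so deployment-prep no longer needs to
# define an empty profile::eventstreams::rdkafka_config.
class profile::eventstreams(
    $rdkafka_config = hiera('profile::eventstreams::rdkafka_config', {})
) {
    notice("rdkafka_config is ${rdkafka_config}")
}
```
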


[MediaWiki-commits] [Gerrit] mediawiki/vagrant[master]: Update Kafka to 1.0 with SSL support

2018-01-17 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/404870 )

Change subject: Update Kafka to 1.0 with SSL support
..

Update Kafka to 1.0 with SSL support

This will make testing MediaWiki integration with Kafka and SSL easier

Bug: T126494
Change-Id: I93d7c7cb98664e3e41b5a383ba8f9976a0b09099
---
M puppet/modules/kafka/files/kafka.profile.sh
M puppet/modules/kafka/files/kafka.sh
D puppet/modules/kafka/files/server.properties
A puppet/modules/kafka/files/ssl/kafka_broker/ca.crt.pem
A puppet/modules/kafka/files/ssl/kafka_broker/kafka_broker.crt.pem
A puppet/modules/kafka/files/ssl/kafka_broker/kafka_broker.csr.pem
A puppet/modules/kafka/files/ssl/kafka_broker/kafka_broker.key.private.pem
A puppet/modules/kafka/files/ssl/kafka_broker/kafka_broker.key.public.pem
A puppet/modules/kafka/files/ssl/kafka_broker/kafka_broker.keystore.jks
A puppet/modules/kafka/files/ssl/kafka_broker/kafka_broker.keystore.p12
A puppet/modules/kafka/files/ssl/kafka_broker/truststore.jks
A puppet/modules/kafka/files/ssl/local_ca/ca.crt.pem
A puppet/modules/kafka/files/ssl/local_ca/local_ca.crt.pem
A puppet/modules/kafka/files/ssl/local_ca/local_ca.csr.pem
A puppet/modules/kafka/files/ssl/local_ca/local_ca.key.private.pem
A puppet/modules/kafka/files/ssl/local_ca/local_ca.key.public.pem
A puppet/modules/kafka/files/ssl/local_ca/local_ca.keystore.jks
A puppet/modules/kafka/files/ssl/local_ca/local_ca.keystore.p12
A puppet/modules/kafka/files/ssl/local_ca/truststore.jks
A puppet/modules/kafka/files/ssl/test0/ca.crt.pem
A puppet/modules/kafka/files/ssl/test0/test0.crt.pem
A puppet/modules/kafka/files/ssl/test0/test0.csr.pem
A puppet/modules/kafka/files/ssl/test0/test0.key.private.pem
A puppet/modules/kafka/files/ssl/test0/test0.key.public.pem
A puppet/modules/kafka/files/ssl/test0/test0.keystore.jks
A puppet/modules/kafka/files/ssl/test0/test0.keystore.p12
A puppet/modules/kafka/files/ssl/test0/truststore.jks
M puppet/modules/kafka/manifests/init.pp
A puppet/modules/kafka/templates/server.properties.erb
M puppet/modules/kafka/templates/systemd/kafka.erb
30 files changed, 418 insertions(+), 119 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/mediawiki/vagrant 
refs/changes/70/404870/1

diff --git a/puppet/modules/kafka/files/kafka.profile.sh 
b/puppet/modules/kafka/files/kafka.profile.sh
index ab3ed80..f1f2a8a 100644
--- a/puppet/modules/kafka/files/kafka.profile.sh
+++ b/puppet/modules/kafka/files/kafka.profile.sh
@@ -3,5 +3,6 @@
 # These environment variables are used by the kafka CLI
 # so that you don't have to provide them as arguments
 # every time you use it.
-export ZOOKEEPER_URL=localhost:2181
-export BROKER_LIST=localhost:9092
+export KAFKA_ZOOKEEPER_URL=localhost:2181/kafka
+export KAFKA_BOOTSTRAP_SERVERS=localhost:9092
+export KAFKA_JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
diff --git a/puppet/modules/kafka/files/kafka.sh 
b/puppet/modules/kafka/files/kafka.sh
index e7db2bb..e2c1c8b 100755
--- a/puppet/modules/kafka/files/kafka.sh
+++ b/puppet/modules/kafka/files/kafka.sh
@@ -1,5 +1,7 @@
 #!/bin/bash
 
+# NOTE: This file is managed by Puppet.
+
 SCRIPT_NAME=$(basename "$0")
 
 commands=$(ls /usr/bin/kafka-* | xargs -n 1 basename | sed 's@kafka-@  @g')
@@ -8,9 +10,9 @@
 $SCRIPT_NAME  [options]
 
 Handy wrapper around various kafka-* scripts.  Set the environment variables
-ZOOKEEPER_URL and BROKER_LIST so you don't have to keep typing
---zookeeper-connect or --broker-list each time you want to use a kafka-*
-script.
+KAFKA_ZOOKEEPER_URL, KAFKA_BOOTSTRAP_SERVERS so you don't have to keep typing
+--zookeeper-connect, --broker-list or --bootstrap-server each time you want to
+use a kafka-* script.
 
 Usage:
 
@@ -20,11 +22,18 @@
 $commands
 
 Environment Variables:
-  ZOOKEEPER_URL - If this is set, any commands that take a --zookeeper flag 
will be given this value.
-  BROKER_LIST   - If this is set, any commands that take a --broker-list flag 
will be given this value.
+  KAFKA_JAVA_HOME - Value of JAVA_HOME to use for invoking Kafka 
commands.
+  KAFKA_ZOOKEEPER_URL - If this is set, any commands that take a 
--zookeeper
+flag will be given this value.
+  KAFKA_BOOTSTRAP_SERVERS - If this is set, any commands that take a 
--broker-list or
+--bootstrap-server flag will be given this value.
+Also any command that take a 
--authorizer-properties
+will get the correct zookeeper.connect value.
+
 "
 
-if [ -z "${1}" -o ${1:0:1} == '-' ]; then
+# Print usage if no  given, or $1 starts with '-'
+if [ -z "${1}" -o "${1:0:1}" == '-' ]; then
 echo "${USAGE}"
 exit 1
 fi
@@ -33,43 +42,77 @@
 command="kafka-${1}"
 shift
 
+# Export JAVA_HOME as KAFKA_JAVA_HOME if it is set.
+# This makes kafka-run-class use the preferred JAVA_HOME for Kafka.
+if [ -n "${KAFKA_JAVA_HOME}" ]; then
+: 
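
The kafka.sh wrapper above dispatches to the kafka-* scripts, filling in connection flags from the KAFKA_* environment variables. A hypothetical, simplified sketch of that dispatch pattern (the real wrapper execs the matching /usr/bin/kafka-* script; this sketch echoes the resulting command line instead, and hard-codes which subcommands take which flag):

```shell
#!/bin/bash
# Simplified sketch of the kafka.sh dispatch pattern (not the real script).
kafka() {
    local subcommand="$1"; shift
    local extra=""
    case "${subcommand}" in
        # Commands that still talk to ZooKeeper get --zookeeper from the env.
        topics|configs)
            [ -n "${KAFKA_ZOOKEEPER_URL}" ] && extra="--zookeeper ${KAFKA_ZOOKEEPER_URL}" ;;
        # Consumer/producer commands get --bootstrap-server from the env.
        console-consumer|console-producer)
            [ -n "${KAFKA_BOOTSTRAP_SERVERS}" ] && extra="--bootstrap-server ${KAFKA_BOOTSTRAP_SERVERS}" ;;
    esac
    # Echo instead of exec so the sketch is runnable anywhere.
    echo "kafka-${subcommand} ${extra} $*"
}

KAFKA_ZOOKEEPER_URL=localhost:2181/kafka
KAFKA_BOOTSTRAP_SERVERS=localhost:9092
kafka topics --list
# → kafka-topics --zookeeper localhost:2181/kafka --list
```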

[MediaWiki-commits] [Gerrit] operations/puppet[production]: No-op for refinery job camus to ease future analytics -> jum...

2018-01-17 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/404789 )

Change subject: No-op for refinery job camus to ease future analytics -> jumbo 
kafka
..


No-op for refinery job camus to ease future analytics -> jumbo kafka

Bug: T175461
Change-Id: I65385d2d6970aa6971436e6d0aebde678fbc5648
---
M modules/profile/manifests/analytics/refinery/job/camus.pp
1 file changed, 24 insertions(+), 11 deletions(-)

Approvals:
  Ottomata: Verified; Looks good to me, approved
  jenkins-bot: Verified



diff --git a/modules/profile/manifests/analytics/refinery/job/camus.pp 
b/modules/profile/manifests/analytics/refinery/job/camus.pp
index a7f923d..3bbc87e 100644
--- a/modules/profile/manifests/analytics/refinery/job/camus.pp
+++ b/modules/profile/manifests/analytics/refinery/job/camus.pp
@@ -8,12 +8,17 @@
 #   to look up brokers from which Camus will import data.  Default: analytics
 #
 class profile::analytics::refinery::job::camus(
-$kafka_cluster_name = 
hiera('profile::analytics::refinery::job::camus::kafka_cluster_name', 
'analytics')
+$kafka_cluster_name = 
hiera('profile::analytics::refinery::job::camus::kafka_cluster_name', 'jumbo')
 ) {
 require ::profile::hadoop::common
 require ::profile::analytics::refinery
 
-$kafka_config = kafka_config($kafka_cluster_name)
+$kafka_config  = kafka_config($kafka_cluster_name)
+$kafka_brokers = suffix($kafka_config['brokers']['array'], ':9092')
+
+# Temporary while we migrate camus jobs over to new kafka cluster.
+$kafka_config_analytics  = kafka_config('analytics')
+$kafka_brokers_analytics = 
suffix($kafka_config_analytics['brokers']['array'], ':9092')
 
 # Make all uses of camus::job set default kafka_brokers and camus_jar.
 # If you build a new camus or refinery, and you want to use it, you'll
@@ -22,7 +27,7 @@
 # the camus::job declaration.
 Camus::Job {
 script => "export 
PYTHONPATH=\${PYTHONPATH}:${profile::analytics::refinery::path}/python && 
${profile::analytics::refinery::path}/bin/camus",
-kafka_brokers  => suffix($kafka_config['brokers']['array'], 
':9092'),
+kafka_brokers  => $kafka_brokers,
 camus_jar  => 
"${profile::analytics::refinery::path}/artifacts/org/wikimedia/analytics/camus-wmf/camus-wmf-0.1.0-wmf7.jar",
 check_jar  => 
"${profile::analytics::refinery::path}/artifacts/org/wikimedia/analytics/refinery/refinery-camus-0.0.35.jar",
 template_variables => {
@@ -30,38 +35,46 @@
 }
 }
 
+
 # Import webrequest_* topics into /wmf/data/raw/webrequest
 # every 10 minutes, check runs and flag fully imported hours.
 camus::job { 'webrequest':
-check  => true,
-minute => '*/10',
+check => true,
+minute=> '*/10',
+kafka_brokers => $kafka_brokers_analytics,
 }
 
 # Import eventlogging_* topics into /wmf/data/raw/eventlogging
 # once every hour.
 camus::job { 'eventlogging':
-minute => '5',
+minute=> '5',
+kafka_brokers => $kafka_brokers_analytics,
 }
 
 # Import eventbus topics into /wmf/data/raw/eventbus
 # once every hour.
 camus::job { 'eventbus':
-minute => '5',
+minute=> '5',
+kafka_brokers => $kafka_brokers_analytics,
 }
 
 # Import mediawiki_* topics into /wmf/data/raw/mediawiki
 # once every hour.  This data is expected to be Avro binary.
 camus::job { 'mediawiki':
-check   => true,
-minute  => '15',
+check => true,
+minute=> '15',
 # refinery-camus contains some custom decoder classes which
 # are needed to import Avro binary data.
-libjars => 
"${profile::analytics::refinery::path}/artifacts/org/wikimedia/analytics/refinery/refinery-camus-0.0.28.jar",
+libjars   => 
"${profile::analytics::refinery::path}/artifacts/org/wikimedia/analytics/refinery/refinery-camus-0.0.28.jar",
+kafka_brokers => $kafka_brokers_analytics,
 }
+
 
 # Import eventbus mediawiki.job queue topics into 
/wmf/data/raw/mediawiki_job
 # once every hour.
 camus::job { 'mediawiki_job':
-minute => '10',
+minute=> '10',
+kafka_brokers => $kafka_brokers_analytics,
 }
+
 }

-- 
To view, visit https://gerrit.wikimedia.org/r/404789

Gerrit-MessageType: merged
Gerrit-Change-Id: I65385d2d6970aa6971436e6d0aebde678fbc5648
Gerrit-PatchSet: 3
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 
Gerrit-Reviewer: Ottomata 
Gerrit-Reviewer: jenkins-bot <>

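
The migration above leans on Puppet resource default statements: the capitalized Camus::Job block sets kafka_brokers (and other parameters) for every camus::job declared in that scope, and jobs not yet migrated override it per resource. A minimal sketch of the pattern (the 'netflow' job name is hypothetical):

```puppet
# Resource defaults: every camus::job in this scope inherits these
# unless it sets the parameter itself.
Camus::Job {
    kafka_brokers => $kafka_brokers,            # new (jumbo) brokers
}

# A job not yet migrated pins the old cluster explicitly:
camus::job { 'webrequest':
    kafka_brokers => $kafka_brokers_analytics,  # old (analytics) brokers
}

# A migrated job simply omits the parameter and takes the default:
camus::job { 'netflow': }
```
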

[MediaWiki-commits] [Gerrit] operations/puppet[production]: No-op for refinery job camus to ease future analytics -> jum...

2018-01-17 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/404789 )

Change subject: No-op for refinery job camus to ease future analytics -> jumbo 
kafka
..

No-op for refinery job camus to ease future analytics -> jumbo kafka

Bug: T175461
Change-Id: I65385d2d6970aa6971436e6d0aebde678fbc5648
---
M modules/profile/manifests/analytics/refinery/job/camus.pp
1 file changed, 14 insertions(+), 3 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet 
refs/changes/89/404789/1

diff --git a/modules/profile/manifests/analytics/refinery/job/camus.pp 
b/modules/profile/manifests/analytics/refinery/job/camus.pp
index a7f923d..8f13a55 100644
--- a/modules/profile/manifests/analytics/refinery/job/camus.pp
+++ b/modules/profile/manifests/analytics/refinery/job/camus.pp
@@ -8,12 +8,17 @@
 #   to look up brokers from which Camus will import data.  Default: analytics
 #
 class profile::analytics::refinery::job::camus(
-$kafka_cluster_name = 
hiera('profile::analytics::refinery::job::camus::kafka_cluster_name', 
'analytics')
+$kafka_cluster_name = 
hiera('profile::analytics::refinery::job::camus::kafka_cluster_name', 'jumbo')
 ) {
 require ::profile::hadoop::common
 require ::profile::analytics::refinery
 
-$kafka_config = kafka_config($kafka_cluster_name)
+$kafka_config  = kafka_config($kafka_cluster_name)
+$kafka_brokers = suffix($kafka_config['brokers']['array'], ':9092')
+
+# Temporary while we migrate camus jobs over to new kafka cluster.
+$kafka_config_analytics  = kafka_config('analytics')
+$kafka_brokers_analytics = 
suffix($kafka_config_analytics['brokers']['array'], ':9092'),
 
 # Make all uses of camus::job set default kafka_brokers and camus_jar.
 # If you build a new camus or refinery, and you want to use it, you'll
@@ -22,7 +27,7 @@
 # the camus::job declaration.
 Camus::Job {
 script => "export 
PYTHONPATH=\${PYTHONPATH}:${profile::analytics::refinery::path}/python && 
${profile::analytics::refinery::path}/bin/camus",
-kafka_brokers  => suffix($kafka_config['brokers']['array'], 
':9092'),
+kafka_brokers  => $kafka_brokers,
 camus_jar  => 
"${profile::analytics::refinery::path}/artifacts/org/wikimedia/analytics/camus-wmf/camus-wmf-0.1.0-wmf7.jar",
 check_jar  => 
"${profile::analytics::refinery::path}/artifacts/org/wikimedia/analytics/refinery/refinery-camus-0.0.35.jar",
 template_variables => {
@@ -30,23 +35,27 @@
 }
 }
 
+
 # Import webrequest_* topics into /wmf/data/raw/webrequest
 # every 10 minutes, check runs and flag fully imported hours.
 camus::job { 'webrequest':
 check  => true,
 minute => '*/10',
+kafka_brokers => $kafka_brokers_analytics,
 }
 
 # Import eventlogging_* topics into /wmf/data/raw/eventlogging
 # once every hour.
 camus::job { 'eventlogging':
 minute => '5',
+kafka_brokers => $kafka_brokers_analytics,
 }
 
 # Import eventbus topics into /wmf/data/raw/eventbus
 # once every hour.
 camus::job { 'eventbus':
 minute => '5',
+kafka_brokers => $kafka_brokers_analytics,
 }
 
 # Import mediawiki_* topics into /wmf/data/raw/mediawiki
@@ -57,11 +66,13 @@
 # refinery-camus contains some custom decoder classes which
 # are needed to import Avro binary data.
 libjars => 
"${profile::analytics::refinery::path}/artifacts/org/wikimedia/analytics/refinery/refinery-camus-0.0.28.jar",
+kafka_brokers => $kafka_brokers_analytics,
 }
 
 # Import eventbus mediawiki.job queue topics into 
/wmf/data/raw/mediawiki_job
 # once every hour.
 camus::job { 'mediawiki_job':
 minute => '10',
+kafka_brokers => $kafka_brokers_analytics,
 }
 }

-- 
To view, visit https://gerrit.wikimedia.org/r/404789

Gerrit-MessageType: newchange
Gerrit-Change-Id: I65385d2d6970aa6971436e6d0aebde678fbc5648
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 



[MediaWiki-commits] [Gerrit] operations/puppet[production]: [WIP] point eventlogging processes at Kafka jumbo

2018-01-17 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/404773 )

Change subject: [WIP] point eventlogging processes at Kafka jumbo
..

[WIP] point eventlogging processes at Kafka jumbo

This needs to be merged with https://gerrit.wikimedia.org/r/#/c/403067/

It is in gerrit now to be cherry-picked in deployment-prep

Bug: T183297
Change-Id: Iaf6f898b58a6564d2b22dce88ececfb415dc232e
---
M modules/role/manifests/eventlogging/analytics/files.pp
M modules/role/manifests/eventlogging/analytics/mysql.pp
M modules/role/manifests/eventlogging/analytics/processor.pp
M modules/role/manifests/eventlogging/analytics/server.pp
4 files changed, 6 insertions(+), 36 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet 
refs/changes/73/404773/1

diff --git a/modules/role/manifests/eventlogging/analytics/files.pp 
b/modules/role/manifests/eventlogging/analytics/files.pp
index 0619e93..bf5e9e9 100644
--- a/modules/role/manifests/eventlogging/analytics/files.pp
+++ b/modules/role/manifests/eventlogging/analytics/files.pp
@@ -45,16 +45,6 @@
 'eventlogging_consumer_files_00'
 )
 
-# Where possible, if this is set, it will be included in client 
configuration
-# to avoid having to do API version for Kafka < 0.10 (where there is not a 
version API).
-$kafka_api_version = 
$role::eventlogging::analytics::server::kafka_config['api_version']
-
-# Append this to query params if set.
-$kafka_api_version_param = $kafka_api_version ? {
-undef   => '',
-default => "_version=${kafka_api_version}"
-}
-
 # These commonly used URIs are defined for DRY purposes in
 # role::eventlogging::analytics::server.
 $kafka_client_side_raw_uri = 
$role::eventlogging::analytics::server::kafka_client_side_raw_uri
@@ -62,7 +52,7 @@
 
 # Raw client side events:
 eventlogging::service::consumer { 'client-side-events.log':
-input  => 
"${kafka_client_side_raw_uri}=True${kafka_api_version_param}",
+input  => "${kafka_client_side_raw_uri}=True",
 output => "file://${out_dir}/client-side-events.log",
 sid=> $kafka_consumer_group,
 }
@@ -71,7 +61,7 @@
 # 'blacklisted' during processing.  Events are blacklisted
 # from these logs for volume reasons.
 eventlogging::service::consumer { 'all-events.log':
-input  =>  "${kafka_mixed_uri}${kafka_api_version_param}",
+input  => $kafka_mixed_uri,
 output => "file://${out_dir}/all-events.log",
 sid=> $kafka_consumer_group,
 }
diff --git a/modules/role/manifests/eventlogging/analytics/mysql.pp 
b/modules/role/manifests/eventlogging/analytics/mysql.pp
index cac5874..081f838 100644
--- a/modules/role/manifests/eventlogging/analytics/mysql.pp
+++ b/modules/role/manifests/eventlogging/analytics/mysql.pp
@@ -32,16 +32,6 @@
 ['mysql-m4-master-00']
 )
 
-# Where possible, if this is set, it will be included in client 
configuration
-# to avoid having to do API version for Kafka < 0.10 (where there is not a 
version API).
-$kafka_api_version = 
$role::eventlogging::analytics::server::kafka_config['api_version']
-
-# Append this to query params if set.
-$kafka_api_version_param = $kafka_api_version ? {
-undef   => '',
-default => "_version=${kafka_api_version}"
-}
-
 $kafka_consumer_scheme = 
$role::eventlogging::analytics::server::kafka_consumer_scheme
 $kafka_brokers_string  = 
$role::eventlogging::analytics::server::kafka_brokers_string
 
@@ -68,7 +58,7 @@
 # Kafka consumer group for this consumer is mysql-m4-master
 eventlogging::service::consumer { $mysql_consumers:
 # auto commit offsets to kafka more often for mysql consumer
-input  => 
"${map_scheme}${kafka_consumer_uri}_commit_interval_ms=1000${kafka_api_version_param}${map_function}",
+input  => 
"${map_scheme}${kafka_consumer_uri}_commit_interval_ms=1000${map_function}",
 output => 
"mysql://${mysql_user}:${mysql_pass}@${mysql_db}?charset=utf8_host=${statsd_host}=True",
 sid=> 'eventlogging_consumer_mysql_00',
 # Restrict permissions on this config file since it contains a 
password.
diff --git a/modules/role/manifests/eventlogging/analytics/processor.pp 
b/modules/role/manifests/eventlogging/analytics/processor.pp
index dbaf521..cb15c0c 100644
--- a/modules/role/manifests/eventlogging/analytics/processor.pp
+++ b/modules/role/manifests/eventlogging/analytics/processor.pp
@@ -6,9 +6,6 @@
 class role::eventlogging::analytics::processor{
 include role::eventlogging::analytics::server
 
-# Where possible, if this is set, it will be included in client 
configuration
-# to avoid having to do API version for Kafka < 0.10 (where there is not a 
version API).
-$kafka_api_version = 

[MediaWiki-commits] [Gerrit] operations/puppet[production]: Use log_retention params in profile::kafka::broker

2018-01-17 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/404747 )

Change subject: Use log_retention params in profile::kafka::broker
..


Use log_retention params in profile::kafka::broker

I want to reduce the number of log bytes we keep in deployment-prep

Change-Id: I5a46d2d831f309dfe8f61023875043c1c4d5e6eb
---
M modules/profile/manifests/kafka/broker.pp
1 file changed, 6 insertions(+), 0 deletions(-)

Approvals:
  Ottomata: Verified; Looks good to me, approved
  jenkins-bot: Verified



diff --git a/modules/profile/manifests/kafka/broker.pp 
b/modules/profile/manifests/kafka/broker.pp
index 8596c83..021038b 100644
--- a/modules/profile/manifests/kafka/broker.pp
+++ b/modules/profile/manifests/kafka/broker.pp
@@ -81,6 +81,9 @@
 # [*log_retention_hours*]
 #   Hiera: profile::kafka::broker::log_retention_hours  Default: 168 (1 week)
 #
+# [*log_retention_bytes*]
+#   Hiera: profile::kafka::broker::log_retention_bytes Default: undef
+#
 # [*num_recovery_threads_per_data_dir*]
 #   Hiera: profile::kafka::broker::num_recovery_threads_per_data_dir  Default 
undef
 #
@@ -120,6 +123,7 @@
 $log_dirs  = 
hiera('profile::kafka::broker::log_dirs', ['/srv/kafka/data']),
 $auto_leader_rebalance_enable  = 
hiera('profile::kafka::broker::auto_leader_rebalance_enable', true),
 $log_retention_hours   = 
hiera('profile::kafka::broker::log_retention_hours', 168),
+$log_retention_bytes   = 
hiera('profile::kafka::broker::log_retention_bytes', undef),
 $num_recovery_threads_per_data_dir = 
hiera('profile::kafka::broker::num_recovery_threads_per_data_dir', undef),
 $num_io_threads= 
hiera('profile::kafka::broker::num_io_threads', 1),
 $num_replica_fetchers  = 
hiera('profile::kafka::broker::num_replica_fetchers', undef),
@@ -300,6 +304,8 @@
 ssl_enabled_protocols=> $ssl_enabled_protocols,
 ssl_cipher_suites=> $ssl_cipher_suites,
 
+log_retention_hours  => $log_retention_hours,
+log_retention_bytes  => $log_retention_bytes,
 auto_leader_rebalance_enable => $auto_leader_rebalance_enable,
 num_replica_fetchers => $num_replica_fetchers,
 message_max_bytes=> $message_max_bytes,

-- 
To view, visit https://gerrit.wikimedia.org/r/404747

Gerrit-MessageType: merged
Gerrit-Change-Id: I5a46d2d831f309dfe8f61023875043c1c4d5e6eb
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 
Gerrit-Reviewer: Ottomata 
Gerrit-Reviewer: jenkins-bot <>

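
The two Puppet parameters above correspond to standard Kafka broker settings: a log segment becomes eligible for deletion once either the time limit or the per-partition size limit is exceeded, whichever is hit first. An example server.properties fragment (values are illustrative, not the production settings):

```properties
# Time-based retention: segments older than this may be deleted.
log.retention.hours=168
# Size-based retention, applied per partition; -1 (the default)
# disables the size limit entirely.
log.retention.bytes=1073741824
```
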


[MediaWiki-commits] [Gerrit] operations/puppet[production]: Use log_retention params in profile::kafka::broker

2018-01-17 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/404747 )

Change subject: Use log_retention params in profile::kafka::broker
..

Use log_retention params in profile::kafka::broker

I want to reduce the number of log bytes we keep in deployment-prep

Change-Id: I5a46d2d831f309dfe8f61023875043c1c4d5e6eb
---
M modules/profile/manifests/kafka/broker.pp
1 file changed, 6 insertions(+), 0 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet 
refs/changes/47/404747/1

diff --git a/modules/profile/manifests/kafka/broker.pp 
b/modules/profile/manifests/kafka/broker.pp
index 8596c83..021038b 100644
--- a/modules/profile/manifests/kafka/broker.pp
+++ b/modules/profile/manifests/kafka/broker.pp
@@ -81,6 +81,9 @@
 # [*log_retention_hours*]
 #   Hiera: profile::kafka::broker::log_retention_hours  Default: 168 (1 week)
 #
+# [*log_retention_bytes*]
+#   Hiera: profile::kafka::broker::log_retention_bytes Default: undef
+#
 # [*num_recovery_threads_per_data_dir*]
 #   Hiera: profile::kafka::broker::num_recovery_threads_per_data_dir  Default 
undef
 #
@@ -120,6 +123,7 @@
 $log_dirs  = 
hiera('profile::kafka::broker::log_dirs', ['/srv/kafka/data']),
 $auto_leader_rebalance_enable  = 
hiera('profile::kafka::broker::auto_leader_rebalance_enable', true),
 $log_retention_hours   = 
hiera('profile::kafka::broker::log_retention_hours', 168),
+$log_retention_bytes   = 
hiera('profile::kafka::broker::log_retention_bytes', undef),
 $num_recovery_threads_per_data_dir = 
hiera('profile::kafka::broker::num_recovery_threads_per_data_dir', undef),
 $num_io_threads= 
hiera('profile::kafka::broker::num_io_threads', 1),
 $num_replica_fetchers  = 
hiera('profile::kafka::broker::num_replica_fetchers', undef),
@@ -300,6 +304,8 @@
 ssl_enabled_protocols=> $ssl_enabled_protocols,
 ssl_cipher_suites=> $ssl_cipher_suites,
 
+log_retention_hours  => $log_retention_hours,
+log_retention_bytes  => $log_retention_bytes,
 auto_leader_rebalance_enable => $auto_leader_rebalance_enable,
 num_replica_fetchers => $num_replica_fetchers,
 message_max_bytes=> $message_max_bytes,

-- 
To view, visit https://gerrit.wikimedia.org/r/404747

Gerrit-MessageType: newchange
Gerrit-Change-Id: I5a46d2d831f309dfe8f61023875043c1c4d5e6eb
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 



[MediaWiki-commits] [Gerrit] operations/puppet[production]: Ensure samtar and samwalton9 are absent after account expira...

2018-01-17 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/404743 )

Change subject: Ensure samtar and samwalton9 are absent after account expiration
..


Ensure samtar and samwalton9 are absent after account expiration

Bug: T170878
Change-Id: I6267bde03bf1a72e15249b8c0c4a9b141756bcf8
---
M modules/admin/data/data.yaml
1 file changed, 3 insertions(+), 7 deletions(-)

Approvals:
  Muehlenhoff: Looks good to me, but someone else must approve
  Ottomata: Verified; Looks good to me, approved



diff --git a/modules/admin/data/data.yaml b/modules/admin/data/data.yaml
index ac2027c..6acca9e 100644
--- a/modules/admin/data/data.yaml
+++ b/modules/admin/data/data.yaml
@@ -123,7 +123,7 @@
   jminor, etonkovidova, sbisson, addshore, matmarex, elukey,
   nikerabbit, dstrine, joewalsh, mpany, jsamra,
   jdittrich, chelsyx, ovasileva, mtizzoni, panisson, paolotti, 
ciro, debt,
-  samwalton9, fdans, samtar, mlitn, shrlak, niharika29, goransm,
+  fdans, mlitn, shrlak, niharika29, goransm,
   pmiazga, dsaez, shiladsen, cicalese, mirrys, sharvaniharan, 
groovier]
   ldap-admins:
 gid: 715
@@ -2371,7 +2371,7 @@
 uid: 15457
 email: mkra...@wikimedia.org
   samwalton9:
-ensure: present
+ensure: absent
 gid: 500
 name: samwalton9
 realname: Sam Walton
@@ -2379,8 +2379,6 @@
   - ssh-rsa 
B3NzaC1yc2EDAQABAAABAQDBSqKkOktF20xShNmJgeOpkhDYXFgcCvNPKbexn67on5M0hPNTKZjptFPCeoQh/i3suAvPDFakDt0pEcCZzzzcwArM21LJ2EFWeqwl6il20L45aD52y8zYYPrTtAi2YaqP77kbSl7/jVW0AFzM6m/G9e5550oeZKDbHGkANpi9uAqn7EjTI88i0txnTEGG6Bwu4G4H/08BsKbkW2C3sB2/h4V1GEHMhxlDEfhlEsVqfaYgrxmXJsTyAjsgawx+fIuqDJsrIFCWlu7IIfur+g0o+DVIDE5kCzZLUeD7FfwP0ym03f7fXF/yjg0sQzXKPF1eXLKXod+7Mn+KOnxLnj+d
 swal...@wikimedia.org
 uid: 15557
 email: swal...@wikimedia.org
-expiry_date: 2018-01-01
-expiry_contact: lego...@wikimedia.org
   volker-e:
 ensure: present
 gid: 500
@@ -2467,7 +2465,7 @@
 uid: 11106
 email: lzie...@wikimedia.org
   samtar:
-ensure: present
+ensure: absent
 gid: 500
 name: samtar
 realname: Sam Tarling
@@ -2475,8 +2473,6 @@
   - ssh-rsa 
B3NzaC1yc2EBJQAAAQEAthC8yN9ImF+F6DQsI4GqYdAKhEtwfZ/+S7xBg2V5Kz5LLrN/KWUN9uiKsUZJfyl2xD12mpu5Mf3nU7c9QoSyZz40Z2GCN/J3IsYLI+6bPFKM7iA65lWHkWcX93JBH0QBlvua9wOAEMMndzeZrloVzJW3PwDa42UikznWYSyoaF60L6eEh+cUs91zZk14GS1gpD+5h+99nzNlBBmgv5aTv53q5JqzdaWXA83X7sZAjQNfLjpT1EDna0z97Agt7DKcKhHbhJqQ67EEZyVH66DtaoNod2cgn0Os74MDX6aNXu69u4pBxGHD7Oh/TY63kY9sB6pHuZ44X5iC2RbhvKSBjw==
 uid: 12744
 email: samtar.on.en...@gmail.com
-expiry_date: 2018-01-01
-expiry_contact: lego...@wikimedia.org
   nithum:
 ensure: present
 gid: 500

-- 
To view, visit https://gerrit.wikimedia.org/r/404743

Gerrit-MessageType: merged
Gerrit-Change-Id: I6267bde03bf1a72e15249b8c0c4a9b141756bcf8
Gerrit-PatchSet: 4
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 
Gerrit-Reviewer: Alex Monk 
Gerrit-Reviewer: Muehlenhoff 
Gerrit-Reviewer: Ottomata 
Gerrit-Reviewer: jenkins-bot <>



[MediaWiki-commits] [Gerrit] operations/puppet[production]: Ensure samtar and samwalton9 are absent after account expira...

2018-01-17 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/404743 )

Change subject: Ensure samtar and samwalton9 are absent after account expiration
..

Ensure samtar and samwalton9 are absent after account expiration

Bug: T170878
Change-Id: I6267bde03bf1a72e15249b8c0c4a9b141756bcf8
---
M modules/admin/data/data.yaml
1 file changed, 3 insertions(+), 3 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet 
refs/changes/43/404743/1

diff --git a/modules/admin/data/data.yaml b/modules/admin/data/data.yaml
index ac2027c..2e56129 100644
--- a/modules/admin/data/data.yaml
+++ b/modules/admin/data/data.yaml
@@ -123,7 +123,7 @@
   jminor, etonkovidova, sbisson, addshore, matmarex, elukey,
   nikerabbit, dstrine, joewalsh, mpany, jsamra,
   jdittrich, chelsyx, ovasileva, mtizzoni, panisson, paolotti, 
ciro, debt,
-  samwalton9, fdans, samtar, mlitn, shrlak, niharika29, goransm,
+  fdans, mlitn, shrlak, niharika29, goransm,
   pmiazga, dsaez, shiladsen, cicalese, mirrys, sharvaniharan, 
groovier]
   ldap-admins:
 gid: 715
@@ -2371,7 +2371,7 @@
 uid: 15457
 email: mkra...@wikimedia.org
   samwalton9:
-ensure: present
+ensure: absent
 gid: 500
 name: samwalton9
 realname: Sam Walton
@@ -2467,7 +2467,7 @@
 uid: 11106
 email: lzie...@wikimedia.org
   samtar:
-ensure: present
+ensure: absent
 gid: 500
 name: samtar
 realname: Sam Tarling

-- 
To view, visit https://gerrit.wikimedia.org/r/404743

Gerrit-MessageType: newchange
Gerrit-Change-Id: I6267bde03bf1a72e15249b8c0c4a9b141756bcf8
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 



[MediaWiki-commits] [Gerrit] operations/puppet[production]: [WIP] Produce webrequests from varnishkafka to jumbo Kafka c...

2018-01-17 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/404737 )

Change subject: [WIP] Produce webrequests from varnishkafka to jumbo Kafka 
cluster via TLS
..

[WIP] Produce webrequests from varnishkafka to jumbo Kafka cluster via TLS

This needs a lot of very careful review and coordination to merge in prod.
For now this exists in gerrit and is cherry-picked in deployment-prep.

Bug: T175461
Change-Id: I1760c36ee26f015617472073e4c5ab95d53d3e44
---
M modules/profile/manifests/cache/kafka/webrequest.pp
1 file changed, 27 insertions(+), 18 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet 
refs/changes/37/404737/1

diff --git a/modules/profile/manifests/cache/kafka/webrequest.pp 
b/modules/profile/manifests/cache/kafka/webrequest.pp
index 655779b..50321b1 100644
--- a/modules/profile/manifests/cache/kafka/webrequest.pp
+++ b/modules/profile/manifests/cache/kafka/webrequest.pp
@@ -1,34 +1,39 @@
 # === class profile::cache::kafka::webrequest
 #
 # Sets up a varnishkafka instance producing varnish
-# webrequest logs to the analytics Kafka brokers in eqiad.
+# webrequest logs to a Kafka cluster via TLS.
 #
 # === Parameters
 #
-# [*monitoring_enabled*]
-#   True if the varnishkafka instance should be monitored.
-#
 # [*cache_cluster*]
-#   the name of the cache cluster
+#   The name of the cache cluster.
 #
 # [*statsd*]
 #   The host:port to send statsd data to.
 #
+# [*kafka_cluster_name*]
+#   Name of the Kafka cluster in the hiera kafka_clusters hash.  This can
+#   be unqualified (without DC suffix) or fully qualified. Default: jumbo
+#
+# [*monitoring_enabled*]
+#   True if the varnishkafka instance should be monitored.  Default: false
+#
 class profile::cache::kafka::webrequest(
-$monitoring_enabled = 
hiera('profile::cache::kafka::webrequest::monitoring_enabled', false),
 $cache_cluster  = hiera('cache::cluster'),
 $statsd = hiera('statsd'),
+$kafka_cluster_name = 
hiera('profile::cache::kafka::webrequest::kafka_cluster_name', 'jumbo'),
+$monitoring_enabled = 
hiera('profile::cache::kafka::webrequest::monitoring_enabled', false),
 ) {
-$config = kafka_config('analytics')
-# NOTE: This is used by inheriting classes role::cache::kafka::*
-$kafka_brokers = $config['brokers']['array']
+# Include this class to get key and certificate for varnishkafka
+# to produce to Kafka over SSL/TLS.
+require ::profile::cache::kafka::certificate
 
-$topic = "webrequest_${cache_cluster}"
-# These used to be parameters, but I don't really see why given we never 
change
-# them
-$varnish_name   = 'frontend'
-$varnish_svc_name   = 'varnish-frontend'
-$kafka_protocol_version = '0.9.0.1'
+$config = kafka_config($kafka_cluster_name)
+$kafka_brokers = $config['brokers']['ssl_array']
+
+$topic= "webrequest_${cache_cluster}"
+$varnish_name = 'frontend'
+$varnish_svc_name = 'varnish-frontend'
 
 # Background task: T136314
 # Background info about the parameters used:
@@ -88,10 +93,7 @@
 $peak_rps_estimate = 9000
 
 varnishkafka::instance { 'webrequest':
-# FIXME - top-scope var without namespace, will break in puppet 2.8
-# lint:ignore:variable_scope
 brokers  => $kafka_brokers,
-# lint:endignore
 topic=> $topic,
 format_type  => 'json',
 compression_codec=> 'snappy',
@@ -122,6 +124,13 @@
 # stats will be fresh when polled from gmetad.
 log_statistics_interval  => 15,
 force_protocol_version   => $kafka_protocol_version,
+#TLS/SSL config
+ssl_enabled  => true,
+ssl_ca_location  => 
$::profile::cache::kafka::certificate::ssl_ca_location,
+ssl_key_password => 
$::profile::cache::kafka::certificate::ssl_key_password,
+ssl_key_location => 
$::profile::cache::kafka::certificate::ssl_key_location,
+ssl_certificate_location => 
$::profile::cache::kafka::certificate::ssl_certificate_location,
+ssl_cipher_suites=> 
$::profile::cache::kafka::certificate::ssl_cipher_suites,
 }
 
 if $monitoring_enabled {

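For readers unfamiliar with varnishkafka internals: the `ssl_*` parameters wired up in the diff above are passed through to librdkafka. A rough sketch of the corresponding librdkafka client configuration follows (Python; the key password is a placeholder, and the file paths mirror the locations used by `profile::cache::kafka::certificate`, not values from this patch):

```python
# Sketch (not part of the patch): how the varnishkafka ssl_* parameters above
# map onto librdkafka configuration properties.  The password is a placeholder;
# the paths follow the certificate profile's conventions.
librdkafka_ssl_config = {
    "security.protocol": "ssl",
    "ssl.ca.location": "/etc/ssl/certs/Puppet_Internal_CA.pem",
    "ssl.key.location": "/etc/varnishkafka/ssl/private/varnishkafka.key.pem",
    "ssl.key.password": "changeme",  # placeholder
    "ssl.certificate.location": "/etc/varnishkafka/ssl/varnishkafka.crt.pem",
    "ssl.cipher.suites": "ECDHE-ECDSA-AES256-GCM-SHA384",
}

# A librdkafka-based producer (e.g. confluent-kafka's Producer) accepts
# a dict of such properties directly.
assert librdkafka_ssl_config["security.protocol"] == "ssl"
```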
-- 
To view, visit https://gerrit.wikimedia.org/r/404737
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: I1760c36ee26f015617472073e4c5ab95d53d3e44
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 

___
MediaWiki-commits mailing list
MediaWiki-commits@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/mediawiki-commits


[MediaWiki-commits] [Gerrit] labs/private[master]: Update secrets/certificates with deployment-prep certs for T...

2018-01-17 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/404706 )

Change subject: Update secrets/certificates with deployment-prep certs for TLS 
Kafka
..


Update secrets/certificates with deployment-prep certs for TLS Kafka

Bug: T121561
Change-Id: I93e5325b6a2e78c4a62032a42c4e8f876853708c
---
M modules/secret/secrets/certificates/certificates.manifests.d/README
A 
modules/secret/secrets/certificates/certificates.manifests.d/deployment_prep.certs.yaml
M 
modules/secret/secrets/certificates/certificates.manifests.d/local_ca.certs.yaml
A 
modules/secret/secrets/certificates/kafka_jumbo-deployment-prep_broker/ca.crt.pem
A 
modules/secret/secrets/certificates/kafka_jumbo-deployment-prep_broker/kafka_jumbo-deployment-prep_broker.crt.pem
A 
modules/secret/secrets/certificates/kafka_jumbo-deployment-prep_broker/kafka_jumbo-deployment-prep_broker.csr.pem
A 
modules/secret/secrets/certificates/kafka_jumbo-deployment-prep_broker/kafka_jumbo-deployment-prep_broker.key.private.pem
A 
modules/secret/secrets/certificates/kafka_jumbo-deployment-prep_broker/kafka_jumbo-deployment-prep_broker.key.public.pem
A 
modules/secret/secrets/certificates/kafka_jumbo-deployment-prep_broker/kafka_jumbo-deployment-prep_broker.keystore.jks
A 
modules/secret/secrets/certificates/kafka_jumbo-deployment-prep_broker/kafka_jumbo-deployment-prep_broker.keystore.p12
A 
modules/secret/secrets/certificates/kafka_jumbo-deployment-prep_broker/truststore.jks
A modules/secret/secrets/certificates/kafka_jumbo-eqiad_broker/README
A modules/secret/secrets/certificates/kafka_test/ca.crt.pem
A modules/secret/secrets/certificates/kafka_test/kafka_test.crt.pem
A modules/secret/secrets/certificates/kafka_test/kafka_test.csr.pem
A modules/secret/secrets/certificates/kafka_test/kafka_test.key.private.pem
A modules/secret/secrets/certificates/kafka_test/kafka_test.key.public.pem
A modules/secret/secrets/certificates/kafka_test/kafka_test.keystore.jks
A modules/secret/secrets/certificates/kafka_test/kafka_test.keystore.p12
A modules/secret/secrets/certificates/kafka_test/truststore.jks
A modules/secret/secrets/certificates/local_ca/ca.crt.pem
M modules/secret/secrets/certificates/local_ca/local_ca.crt.pem
M modules/secret/secrets/certificates/local_ca/local_ca.csr.pem
M modules/secret/secrets/certificates/local_ca/local_ca.key.private.pem
M modules/secret/secrets/certificates/local_ca/local_ca.key.public.pem
M modules/secret/secrets/certificates/local_ca/local_ca.keystore.jks
M modules/secret/secrets/certificates/local_ca/local_ca.keystore.p12
M modules/secret/secrets/certificates/local_ca/truststore.jks
A modules/secret/secrets/certificates/varnishkafka-deployment-prep/ca.crt.pem
A 
modules/secret/secrets/certificates/varnishkafka-deployment-prep/truststore.jks
A 
modules/secret/secrets/certificates/varnishkafka-deployment-prep/varnishkafka-deployment-prep.crt.pem
A 
modules/secret/secrets/certificates/varnishkafka-deployment-prep/varnishkafka-deployment-prep.csr.pem
A 
modules/secret/secrets/certificates/varnishkafka-deployment-prep/varnishkafka-deployment-prep.key.private.pem
A 
modules/secret/secrets/certificates/varnishkafka-deployment-prep/varnishkafka-deployment-prep.key.public.pem
A 
modules/secret/secrets/certificates/varnishkafka-deployment-prep/varnishkafka-deployment-prep.keystore.jks
A 
modules/secret/secrets/certificates/varnishkafka-deployment-prep/varnishkafka-deployment-prep.keystore.p12
A modules/secret/secrets/certificates/varnishkafka/README
A modules/secret/secrets/certificates/varnishkafka/ca.crt.pem
38 files changed, 375 insertions(+), 28 deletions(-)

Approvals:
  Ottomata: Verified; Looks good to me, approved



diff --git 
a/modules/secret/secrets/certificates/certificates.manifests.d/README 
b/modules/secret/secrets/certificates/certificates.manifests.d/README
index b27fb4e..214476c 100644
--- a/modules/secret/secrets/certificates/certificates.manifests.d/README
+++ b/modules/secret/secrets/certificates/certificates.manifests.d/README
@@ -4,6 +4,23 @@
 
 To generate these, use the cergen CLI like:
 
-cergen --base-path /srv/private/modules/secret/secrets/certificates --generate
+  cergen --base-path /srv/private/modules/secret/secrets/certificates 
--generate \
   /srv/private/modules/secret/secrets/certificates/certificate.manifests.d
 
+
+deployment-prep certificates are signed by the deployment-prep puppetmaster.
+To generate these, log into the deployment-prep puppetmaster and run:
+
+  KEYTOOL_BIN=/usr/lib/jvm/java-8-openjdk-amd64/bin/keytool cergen --base-path 
/tmp/certificates --generate \
+  
/var/lib/git/labs/private/modules/secret/secrets/certificates/certificate.manifests.d
+
+(NOTE: Java 7's keytool does not work with EC keys, so we set KEYTOOL_BIN to 
Java 8's.
+This is necessary while puppetmaster is still jessie with default JRE as Java 
7.)
+
+Then rsync the /tmp/certificates directory down into your local working 

[MediaWiki-commits] [Gerrit] labs/private[master]: Update secrets/certificates with deployment-prep certs for T...

2018-01-17 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/404706 )

Change subject: Update secrets/certificates with deployment-prep certs for TLS 
Kafka
..

Update secrets/certificates with deployment-prep certs for TLS Kafka

Bug: T121561
Change-Id: I93e5325b6a2e78c4a62032a42c4e8f876853708c
---
M modules/secret/secrets/certificates/certificates.manifests.d/README
A 
modules/secret/secrets/certificates/certificates.manifests.d/deployment_prep.certs.yaml
M 
modules/secret/secrets/certificates/certificates.manifests.d/local_ca.certs.yaml
A 
modules/secret/secrets/certificates/kafka_jumbo-deployment-prep_broker/ca.crt.pem
A 
modules/secret/secrets/certificates/kafka_jumbo-deployment-prep_broker/kafka_jumbo-deployment-prep_broker.crt.pem
A 
modules/secret/secrets/certificates/kafka_jumbo-deployment-prep_broker/kafka_jumbo-deployment-prep_broker.csr.pem
A 
modules/secret/secrets/certificates/kafka_jumbo-deployment-prep_broker/kafka_jumbo-deployment-prep_broker.key.private.pem
A 
modules/secret/secrets/certificates/kafka_jumbo-deployment-prep_broker/kafka_jumbo-deployment-prep_broker.key.public.pem
A 
modules/secret/secrets/certificates/kafka_jumbo-deployment-prep_broker/kafka_jumbo-deployment-prep_broker.keystore.jks
A 
modules/secret/secrets/certificates/kafka_jumbo-deployment-prep_broker/kafka_jumbo-deployment-prep_broker.keystore.p12
A 
modules/secret/secrets/certificates/kafka_jumbo-deployment-prep_broker/truststore.jks
A modules/secret/secrets/certificates/kafka_jumbo-eqiad_broker/README
A modules/secret/secrets/certificates/kafka_test/ca.crt.pem
A modules/secret/secrets/certificates/kafka_test/kafka_test.crt.pem
A modules/secret/secrets/certificates/kafka_test/kafka_test.csr.pem
A modules/secret/secrets/certificates/kafka_test/kafka_test.key.private.pem
A modules/secret/secrets/certificates/kafka_test/kafka_test.key.public.pem
A modules/secret/secrets/certificates/kafka_test/kafka_test.keystore.jks
A modules/secret/secrets/certificates/kafka_test/kafka_test.keystore.p12
A modules/secret/secrets/certificates/kafka_test/truststore.jks
A modules/secret/secrets/certificates/local_ca/ca.crt.pem
M modules/secret/secrets/certificates/local_ca/local_ca.crt.pem
M modules/secret/secrets/certificates/local_ca/local_ca.csr.pem
M modules/secret/secrets/certificates/local_ca/local_ca.key.private.pem
M modules/secret/secrets/certificates/local_ca/local_ca.key.public.pem
M modules/secret/secrets/certificates/local_ca/local_ca.keystore.jks
M modules/secret/secrets/certificates/local_ca/local_ca.keystore.p12
M modules/secret/secrets/certificates/local_ca/truststore.jks
A modules/secret/secrets/certificates/varnishkafka-deployment-prep/ca.crt.pem
A 
modules/secret/secrets/certificates/varnishkafka-deployment-prep/truststore.jks
A 
modules/secret/secrets/certificates/varnishkafka-deployment-prep/varnishkafka-deployment-prep.crt.pem
A 
modules/secret/secrets/certificates/varnishkafka-deployment-prep/varnishkafka-deployment-prep.csr.pem
A 
modules/secret/secrets/certificates/varnishkafka-deployment-prep/varnishkafka-deployment-prep.key.private.pem
A 
modules/secret/secrets/certificates/varnishkafka-deployment-prep/varnishkafka-deployment-prep.key.public.pem
A 
modules/secret/secrets/certificates/varnishkafka-deployment-prep/varnishkafka-deployment-prep.keystore.jks
A 
modules/secret/secrets/certificates/varnishkafka-deployment-prep/varnishkafka-deployment-prep.keystore.p12
A modules/secret/secrets/certificates/varnishkafka/README
A modules/secret/secrets/certificates/varnishkafka/ca.crt.pem
38 files changed, 375 insertions(+), 28 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/labs/private 
refs/changes/06/404706/1

diff --git 
a/modules/secret/secrets/certificates/certificates.manifests.d/README 
b/modules/secret/secrets/certificates/certificates.manifests.d/README
index b27fb4e..266564a 100644
--- a/modules/secret/secrets/certificates/certificates.manifests.d/README
+++ b/modules/secret/secrets/certificates/certificates.manifests.d/README
@@ -4,6 +4,23 @@
 
 To generate these, use the cergen CLI like:
 
-cergen --base-path /srv/private/modules/secret/secrets/certificates --generate
+  cergen --base-path /srv/private/modules/secret/secrets/certificates 
--generate \
   /srv/private/modules/secret/secrets/certificates/certificate.manifests.d
 
+
+deployment-prep certificates are signed by the deployment-prep puppetmaster.
+To generate these, log into the deployment-prep puppetmaster and run:
+
+  KEYTOOL_BIN=/usr/lib/jvm/java-8-openjdk-amd64/bin/keytool cergen --base-path 
/tmp/certificates --generate \
+  
/var/lib/git/labs/private/modules/secret/secrets/certificates/certificate.manifests.d
+
+(NOTE: Java 7's keytool does not work with EC keys, so we set KEYTOOL_BIN to 
Java 8's.
+This is necessary while puppetmaster is still jessie with default JRE as Java 
7.)
+
+Then rsync the /tmp/certificates directory down into your 

[MediaWiki-commits] [Gerrit] operations/puppet[production]: Blacklist gwtoolsetUploadMetadataJob from Hive json refine job

2018-01-17 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/404701 )

Change subject: Blacklist gwtoolsetUploadMetadataJob from Hive json refine job
..


Blacklist gwtoolsetUploadMetadataJob from Hive json refine job

It has variable types

Change-Id: I66e9497c546e2f44e0f6f683ae104b1a09d4cc69
---
M modules/profile/manifests/analytics/refinery/job/json_refine.pp
1 file changed, 1 insertion(+), 0 deletions(-)

Approvals:
  Ottomata: Verified; Looks good to me, approved



diff --git a/modules/profile/manifests/analytics/refinery/job/json_refine.pp 
b/modules/profile/manifests/analytics/refinery/job/json_refine.pp
index c71b89c..c3db487 100644
--- a/modules/profile/manifests/analytics/refinery/job/json_refine.pp
+++ b/modules/profile/manifests/analytics/refinery/job/json_refine.pp
@@ -56,6 +56,7 @@
 'PublishStashedFile',
 'CentralAuthCreateLocalAccountJob',
 'gwtoolsetUploadMediafileJob',
+'gwtoolsetUploadMetadataJob',
 ]
 $table_blacklist = sprintf('.*(%s)$', join($problematic_jobs, '|'))
 
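For illustration, the `sprintf`/`join` expression above builds a single anchored regex from the job list. A Python sketch of the same construction (the table names used here are hypothetical examples, not from the patch):

```python
import re

# Sketch of the Puppet expression:
#   $table_blacklist = sprintf('.*(%s)$', join($problematic_jobs, '|'))
problematic_jobs = [
    "PublishStashedFile",
    "CentralAuthCreateLocalAccountJob",
    "gwtoolsetUploadMediafileJob",
    "gwtoolsetUploadMetadataJob",
]
table_blacklist = ".*({})$".format("|".join(problematic_jobs))

# Any table whose name *ends* with a blacklisted job name is skipped;
# the trailing $ means a mid-name match is not enough.
assert re.match(table_blacklist, "mediawiki_job_gwtoolsetUploadMetadataJob")
assert not re.match(table_blacklist, "gwtoolsetUploadMetadataJob_extra")
```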

-- 
To view, visit https://gerrit.wikimedia.org/r/404701
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: merged
Gerrit-Change-Id: I66e9497c546e2f44e0f6f683ae104b1a09d4cc69
Gerrit-PatchSet: 2
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 
Gerrit-Reviewer: Ottomata 
Gerrit-Reviewer: jenkins-bot <>



[MediaWiki-commits] [Gerrit] operations/puppet[production]: Blacklist gwtoolsetUploadMetadataJob from Hive json refine job

2018-01-17 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/404701 )

Change subject: Blacklist gwtoolsetUploadMetadataJob from Hive json refine job
..

Blacklist gwtoolsetUploadMetadataJob from Hive json refine job

It has variable types

Change-Id: I66e9497c546e2f44e0f6f683ae104b1a09d4cc69
---
M modules/profile/manifests/analytics/refinery/job/json_refine.pp
1 file changed, 1 insertion(+), 0 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet 
refs/changes/01/404701/1

diff --git a/modules/profile/manifests/analytics/refinery/job/json_refine.pp 
b/modules/profile/manifests/analytics/refinery/job/json_refine.pp
index c71b89c..c3db487 100644
--- a/modules/profile/manifests/analytics/refinery/job/json_refine.pp
+++ b/modules/profile/manifests/analytics/refinery/job/json_refine.pp
@@ -56,6 +56,7 @@
 'PublishStashedFile',
 'CentralAuthCreateLocalAccountJob',
 'gwtoolsetUploadMediafileJob',
+'gwtoolsetUploadMetadataJob',
 ]
 $table_blacklist = sprintf('.*(%s)$', join($problematic_jobs, '|'))
 

-- 
To view, visit https://gerrit.wikimedia.org/r/404701
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: I66e9497c546e2f44e0f6f683ae104b1a09d4cc69
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 



[MediaWiki-commits] [Gerrit] operations/puppet[production]: Parameterize varnishkafka certificate name for easier setup ...

2018-01-17 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/404698 )

Change subject: Parameterize varnishkafka certificate name for easier setup in 
Cloud VPS.
..


Parameterize varnishkafka certificate name for easier setup in Cloud VPS.

I want to set up Kafka TLS and varnishkafka in deployment-prep.
This should be a no-op in prod (currently only on canary).

Bug: T121561
Change-Id: If3bc94f0591b138578191f78ed784a3e632af712
---
M modules/profile/manifests/cache/kafka/certificate.pp
1 file changed, 38 insertions(+), 5 deletions(-)

Approvals:
  Ottomata: Looks good to me, approved
  jenkins-bot: Verified



diff --git a/modules/profile/manifests/cache/kafka/certificate.pp 
b/modules/profile/manifests/cache/kafka/certificate.pp
index 14505b9..52cf1ff 100644
--- a/modules/profile/manifests/cache/kafka/certificate.pp
+++ b/modules/profile/manifests/cache/kafka/certificate.pp
@@ -3,22 +3,40 @@
 # This expects that a 'varnishkafka' SSL/TLS key and certificate is created by 
Cergen and
 # signed by our PuppetCA, and available in the Puppet private secrets module.
 # == Parameters.
+#
 # [*ssl_key_password*]
 #   The password to decrypt the TLS client certificate.  Default: undef
 #
+# [*certificate_name*]
+#   Name of certificate (cergen) in the secrets module.  This will be used
+#   to find the certificate file secret() puppet paths.  You might want to
+#   change this if you are testing in Cloud VPS.  Default: varnishkafka.
+#
+# [*use_puppet_internal_ca*]
+#   If true, the CA cert.pem file will be assumed to be already installed at
+#   /etc/ssl/certs/Puppet_Internal_CA.pem, and will be used as the 
ssl.ca.location
+#   for varnishkafka/librdkafka.  Default: true.  Set this to false if the
+#   certificate name you set is not signed by the Puppet CA, and the
+#   cergen created ca.crt.pem file will be used.
+#
 class profile::cache::kafka::certificate(
 $ssl_key_password  = 
hiera('profile::cache::kafka::certificate::ssl_key_password', undef),
+$certificate_name = 
hiera('profile::cache::kafka::certificate::certificate_name', 'varnishkafka'),
+$use_puppet_internal_ca = 
hiera('profile::cache::kafka::certificate::use_puppet_internal_ca', true),
 ) {
 # TLS/SSL configuration
-$ssl_ca_location = '/etc/ssl/certs/Puppet_Internal_CA.pem'
 $ssl_location = '/etc/varnishkafka/ssl'
 $ssl_location_private = '/etc/varnishkafka/ssl/private'
 
-$ssl_key_location_secrets_path = 
'certificates/varnishkafka/varnishkafka.key.private.pem'
-$ssl_key_location = "${ssl_location_private}/varnishkafka.key.pem"
+$ssl_key_location_secrets_path = 
"certificates/${certificate_name}/${certificate_name}.key.private.pem"
+$ssl_key_location = "${ssl_location_private}/${certificate_name}.key.pem"
 
-$ssl_certificate_secrets_path = 
'certificates/varnishkafka/varnishkafka.crt.pem'
-$ssl_certificate_location = "${ssl_location}/varnishkafka.crt.pem"
+$ssl_certificate_secrets_path = 
"certificates/${certificate_name}/${certificate_name}.crt.pem"
+$ssl_certificate_location = "${ssl_location}/${certificate_name}.crt.pem"
 $ssl_cipher_suites = 'ECDHE-ECDSA-AES256-GCM-SHA384'
 
 file { $ssl_location:
@@ -50,4 +68,19 @@
 group   => 'root',
 mode=> '0444',
 }
+
+if $use_puppet_internal_ca {
+$ssl_ca_location = '/etc/ssl/certs/Puppet_Internal_CA.pem'
+}
+else {
+$ssl_ca_location_secrets_path = 
"certificates/${certificate_name}/ca.crt.pem"
+$ssl_ca_location = "${ssl_location}/ca.crt.pem"
+
+file { $ssl_ca_location:
+content => secret($ssl_ca_location_secrets_path),
+owner   => 'root',
+group   => 'root',
+mode=> '0444',
+}
+}
 }

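As a quick illustration of the parameterization above, the `secret()` paths the class derives from `certificate_name` can be sketched as follows (Python; the deployment-prep name matches the certificates added in the labs/private change earlier in this digest):

```python
# Sketch (not part of the patch): the secrets-module paths that
# profile::cache::kafka::certificate interpolates from certificate_name.
def secrets_paths(certificate_name):
    return {
        "key": "certificates/{0}/{0}.key.private.pem".format(certificate_name),
        "crt": "certificates/{0}/{0}.crt.pem".format(certificate_name),
        "ca": "certificates/{0}/ca.crt.pem".format(certificate_name),
    }

paths = secrets_paths("varnishkafka-deployment-prep")
assert paths["key"] == (
    "certificates/varnishkafka-deployment-prep/"
    "varnishkafka-deployment-prep.key.private.pem"
)
```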
-- 
To view, visit https://gerrit.wikimedia.org/r/404698
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: merged
Gerrit-Change-Id: If3bc94f0591b138578191f78ed784a3e632af712
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 
Gerrit-Reviewer: Ottomata 
Gerrit-Reviewer: jenkins-bot <>



[MediaWiki-commits] [Gerrit] operations/puppet[production]: Parameterize varnishkafka certificate name for easier setup ...

2018-01-17 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/404698 )

Change subject: Parameterize varnishkafka certificate name for easier setup in 
Cloud VPS.
..

Parameterize varnishkafka certificate name for easier setup in Cloud VPS.

I want to set up Kafka TLS and varnishkafka in deployment-prep.
This should be a no-op in prod (currently only on canary).

Bug: T121561
Change-Id: If3bc94f0591b138578191f78ed784a3e632af712
---
M modules/profile/manifests/cache/kafka/certificate.pp
1 file changed, 38 insertions(+), 5 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet 
refs/changes/98/404698/1

diff --git a/modules/profile/manifests/cache/kafka/certificate.pp 
b/modules/profile/manifests/cache/kafka/certificate.pp
index 14505b9..52cf1ff 100644
--- a/modules/profile/manifests/cache/kafka/certificate.pp
+++ b/modules/profile/manifests/cache/kafka/certificate.pp
@@ -3,22 +3,40 @@
 # This expects that a 'varnishkafka' SSL/TLS key and certificate is created by 
Cergen and
 # signed by our PuppetCA, and available in the Puppet private secrets module.
 # == Parameters.
+#
 # [*ssl_key_password*]
 #   The password to decrypt the TLS client certificate.  Default: undef
 #
+# [*certificate_name*]
+#   Name of certificate (cergen) in the secrets module.  This will be used
+#   to find the certificate file secret() puppet paths.  You might want to
+#   change this if you are testing in Cloud VPS.  Default: varnishkafka.
+#
+# [*use_puppet_internal_ca*]
+#   If true, the CA cert.pem file will be assumed to be already installed at
+#   /etc/ssl/certs/Puppet_Internal_CA.pem, and will be used as the 
ssl.ca.location
+#   for varnishkafka/librdkafka.  Default: true.  Set this to false if the
+#   certificate name you set is not signed by the Puppet CA, and the
+#   cergen created ca.crt.pem file will be used.
+#
 class profile::cache::kafka::certificate(
 $ssl_key_password  = 
hiera('profile::cache::kafka::certificate::ssl_key_password', undef),
+$certificate_name = 
hiera('profile::cache::kafka::certificate::certificate_name', 'varnishkafka'),
+$use_puppet_internal_ca = 
hiera('profile::cache::kafka::certificate::use_puppet_internal_ca', true),
 ) {
 # TLS/SSL configuration
-$ssl_ca_location = '/etc/ssl/certs/Puppet_Internal_CA.pem'
 $ssl_location = '/etc/varnishkafka/ssl'
 $ssl_location_private = '/etc/varnishkafka/ssl/private'
 
-$ssl_key_location_secrets_path = 
'certificates/varnishkafka/varnishkafka.key.private.pem'
-$ssl_key_location = "${ssl_location_private}/varnishkafka.key.pem"
+$ssl_key_location_secrets_path = 
"certificates/${certificate_name}/${certificate_name}.key.private.pem"
+$ssl_key_location = "${ssl_location_private}/${certificate_name}.key.pem"
 
-$ssl_certificate_secrets_path = 
'certificates/varnishkafka/varnishkafka.crt.pem'
-$ssl_certificate_location = "${ssl_location}/varnishkafka.crt.pem"
+$ssl_certificate_secrets_path = 
"certificates/${certificate_name}/${certificate_name}.crt.pem"
+$ssl_certificate_location = "${ssl_location}/${certificate_name}.crt.pem"
 $ssl_cipher_suites = 'ECDHE-ECDSA-AES256-GCM-SHA384'
 
 file { $ssl_location:
@@ -50,4 +68,19 @@
 group   => 'root',
 mode=> '0444',
 }
+
+if $use_puppet_internal_ca {
+$ssl_ca_location = '/etc/ssl/certs/Puppet_Internal_CA.pem'
+}
+else {
+$ssl_ca_location_secrets_path = 
"certificates/${certificate_name}/ca.crt.pem"
+$ssl_ca_location = "${ssl_location}/ca.crt.pem"
+
+file { $ssl_ca_location:
+content => secret($ssl_ca_location_secrets_path),
+owner   => 'root',
+group   => 'root',
+mode=> '0444',
+}
+}
 }

-- 
To view, visit https://gerrit.wikimedia.org/r/404698
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: If3bc94f0591b138578191f78ed784a3e632af712
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 



[MediaWiki-commits] [Gerrit] cergen[master]: Generate ca.crt.pem files in each certificate directory

2018-01-17 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/404687 )

Change subject: Generate ca.crt.pem files in each certificate directory
..


Generate ca.crt.pem files in each certificate directory

This makes it easier to distribute CA certificate files.

Change-Id: I09c1dff11eea8d4cd44aa9f574b386245ec38fb1
---
M CHANGELOG.md
M cergen/certificate.py
M setup.py
M tests/test_certificate.py
4 files changed, 40 insertions(+), 2 deletions(-)

Approvals:
  Ottomata: Looks good to me, approved
  jenkins-bot: Verified



diff --git a/CHANGELOG.md b/CHANGELOG.md
index 675b7dc..0153f66 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,3 +1,6 @@
+# 0.2.1
+- Now also generate ca.crt.pem files in each certificate directory.
+
 # 0.2.0
 - puppet-sign-cert now only works with Puppet 4.
 
diff --git a/cergen/certificate.py b/cergen/certificate.py
index 8bb7436..903aff3 100644
--- a/cergen/certificate.py
+++ b/cergen/certificate.py
@@ -157,8 +157,10 @@
 
 # Certificate Signing Request file in .pem format.
 self.csr_file = os.path.join(self.path, '{}.csr.pem'.format(self.name))
-# Public Signed Certificate file in .pem format
+# x509 Certificate file in .pem format
 self.crt_file = os.path.join(self.path, '{}.crt.pem'.format(self.name))
+# Authority's x509 Certificate file in .pem format
+self.ca_crt_file = os.path.join(self.path, 'ca.crt.pem')
 # PKCS#12 'keystore' file
 self.p12_file = os.path.join(self.path, 
'{}.keystore.p12'.format(self.name))
 # Java Keystore file
@@ -263,6 +265,7 @@
 self.key.generate(force=force)
 self.generate_crt(force=force)
 # TODO: maybe rename these subordinate generate methods?
+self.generate_ca_crt(force=force)
 self.generate_p12(force=force)
 self.generate_keystore(force=force)
 self.generate_truststore(force=force)
@@ -358,6 +361,35 @@
 f.write(csr.public_bytes(serialization.Encoding.PEM))
 
 return csr
+
+def generate_ca_crt(self, force=False):
+"""
+Copies the authority's certificate in .pem format
+into this certificate's path under the name 'ca.crt.pem'.
+This is useful so the CA certificate can be easily distributed.
+
+Args:
+force (bool, optional)
+
+Raises:
+RuntimeError: if a new certificate cannot be signed by the 
authority
+or verified by the authority chain.
+
+"""
+if not self.should_generate(self.ca_crt_file, force):
+return False
+
+self.log.info('Generating CA certificate file')
+
+# The authority has a local cert_file.  Copy it to this Certificate's 
path.
+shutil.copyfile(self.authority.cert_file, self.ca_crt_file)
+
+# Verify that ca_crt_file was created.
+if not os.path.exists(self.ca_crt_file):
+raise RuntimeError(
+'{} does not exist even though we copied it from {}. '
+' This should not happen.'.format(self.ca_crt_file, 
self.authority.cert_file)
+)
 
 def generate_p12(self, force=False):
 """
@@ -522,6 +554,7 @@
 self.key.private_key_file,
 self.key.public_key_file,
 self.crt_file,
+self.ca_crt_file,
 self.p12_file,
 self.jks_file,
 self.truststore_jks_file
diff --git a/setup.py b/setup.py
index 4006228..ca1a1b0 100644
--- a/setup.py
+++ b/setup.py
@@ -9,7 +9,7 @@
 
 setup(
 name='cergen',
-version='0.1.1',
+version='0.2.1',
 description='Automated x509 certificate generation and management',
 license='Apache',
 author='Andrew Otto',
diff --git a/tests/test_certificate.py b/tests/test_certificate.py
index 57d0a4f..02d2bd8 100644
--- a/tests/test_certificate.py
+++ b/tests/test_certificate.py
@@ -37,6 +37,8 @@
 'crt_file should exist'
 assert os.path.exists(certificate.csr_file), \
 'csr_file should exist'
+assert os.path.exists(certificate.ca_crt_file), \
+'ca_crt_file should exist'
 assert os.path.exists(certificate.p12_file), \
 'p12_file should exist'
 assert os.path.exists(certificate.jks_file), \

-- 
To view, visit https://gerrit.wikimedia.org/r/404687
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: merged
Gerrit-Change-Id: I09c1dff11eea8d4cd44aa9f574b386245ec38fb1
Gerrit-PatchSet: 1
Gerrit-Project: cergen
Gerrit-Branch: master
Gerrit-Owner: Ottomata 
Gerrit-Reviewer: Ottomata 
Gerrit-Reviewer: jenkins-bot <>



[MediaWiki-commits] [Gerrit] cergen[master]: Generate ca.crt.pem files in each certificate directory

2018-01-17 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/404687 )

Change subject: Generate ca.crt.pem files in each certificate directory
..

Generate ca.crt.pem files in each certificate directory

This makes it easier to distribute CA certificate files.

Change-Id: I09c1dff11eea8d4cd44aa9f574b386245ec38fb1
---
M CHANGELOG.md
M cergen/certificate.py
M setup.py
M tests/test_certificate.py
4 files changed, 40 insertions(+), 2 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/cergen refs/changes/87/404687/1

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 675b7dc..0153f66 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,3 +1,6 @@
+# 0.2.1
+- Now also generate ca.crt.pem files in each certificate directory.
+
 # 0.2.0
 - puppet-sign-cert now only works with Puppet 4.
 
diff --git a/cergen/certificate.py b/cergen/certificate.py
index 8bb7436..903aff3 100644
--- a/cergen/certificate.py
+++ b/cergen/certificate.py
@@ -157,8 +157,10 @@
 
 # Certificate Signing Request file in .pem format.
 self.csr_file = os.path.join(self.path, '{}.csr.pem'.format(self.name))
-# Public Signed Certificate file in .pem format
+# x509 Certificate file in .pem format
 self.crt_file = os.path.join(self.path, '{}.crt.pem'.format(self.name))
+# Authority's x509 Certificate file in .pem format
+self.ca_crt_file = os.path.join(self.path, 'ca.crt.pem')
 # PKCS#12 'keystore' file
 self.p12_file = os.path.join(self.path, 
'{}.keystore.p12'.format(self.name))
 # Java Keystore file
@@ -263,6 +265,7 @@
 self.key.generate(force=force)
 self.generate_crt(force=force)
 # TODO: maybe rename these subordinate generate methods?
+self.generate_ca_crt(force=force)
 self.generate_p12(force=force)
 self.generate_keystore(force=force)
 self.generate_truststore(force=force)
@@ -358,6 +361,35 @@
 f.write(csr.public_bytes(serialization.Encoding.PEM))
 
 return csr
+
+def generate_ca_crt(self, force=False):
+"""
+Copies the authority's certificate in .pem format
+into this certificate's path under the name 'ca.crt.pem'.
+This is useful so the CA certificate can be easily distributed.
+
+Args:
+force (bool, optional)
+
+Raises:
+RuntimeError: if a new certificate cannot be signed by the 
authority
+or verified by the authority chain.
+
+"""
+if not self.should_generate(self.ca_crt_file, force):
+return False
+
+self.log.info('Generating CA certificate file')
+
+# The authority has a local cert_file.  Copy it to this Certificate's 
path.
+shutil.copyfile(self.authority.cert_file, self.ca_crt_file)
+
+# Verify that ca_crt_file was created.
+if not os.path.exists(self.ca_crt_file):
+raise RuntimeError(
+'{} does not exist even though we copied it from {}. '
+' This should not happen.'.format(self.ca_crt_file, 
self.authority.cert_file)
+)
 
 def generate_p12(self, force=False):
 """
@@ -522,6 +554,7 @@
 self.key.private_key_file,
 self.key.public_key_file,
 self.crt_file,
+self.ca_crt_file,
 self.p12_file,
 self.jks_file,
 self.truststore_jks_file
diff --git a/setup.py b/setup.py
index 4006228..ca1a1b0 100644
--- a/setup.py
+++ b/setup.py
@@ -9,7 +9,7 @@
 
 setup(
 name='cergen',
-version='0.1.1',
+version='0.2.1',
 description='Automated x509 certificate generation and management',
 license='Apache',
 author='Andrew Otto',
diff --git a/tests/test_certificate.py b/tests/test_certificate.py
index 57d0a4f..02d2bd8 100644
--- a/tests/test_certificate.py
+++ b/tests/test_certificate.py
@@ -37,6 +37,8 @@
 'crt_file should exist'
 assert os.path.exists(certificate.csr_file), \
 'csr_file should exist'
+assert os.path.exists(certificate.ca_crt_file), \
+'ca_crt_file should exist'
 assert os.path.exists(certificate.p12_file), \
 'p12_file should exist'
 assert os.path.exists(certificate.jks_file), \

-- 
To view, visit https://gerrit.wikimedia.org/r/404687
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: I09c1dff11eea8d4cd44aa9f574b386245ec38fb1
Gerrit-PatchSet: 1
Gerrit-Project: cergen
Gerrit-Branch: master
Gerrit-Owner: Ottomata 

___
MediaWiki-commits mailing list
MediaWiki-commits@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/mediawiki-commits


[MediaWiki-commits] [Gerrit] operations/puppet[production]: Ensure specific librdkafka version for changeprop and events...

2018-01-16 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/404540 )

Change subject: Ensure specific librdkafka version for changeprop and 
eventstreams
..

Ensure specific librdkafka version for changeprop and eventstreams

Bug: T176126
Bug: T185016
Change-Id: I8dac12a8b8d49da97c7dc7dced47dd4f85fde8d5
---
M modules/profile/manifests/changeprop/packages.pp
M modules/profile/manifests/eventstreams.pp
2 files changed, 52 insertions(+), 5 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet 
refs/changes/40/404540/1

diff --git a/modules/profile/manifests/changeprop/packages.pp 
b/modules/profile/manifests/changeprop/packages.pp
index 1df50de..96000f1 100644
--- a/modules/profile/manifests/changeprop/packages.pp
+++ b/modules/profile/manifests/changeprop/packages.pp
@@ -1,9 +1,33 @@
 # Packages required by changeprop and cpjobqueue
 class profile::changeprop::packages() {
+require ::service::configuration
 
-service::packages { 'changeprop':
-pkgs => ['librdkafka++1', 'librdkafka1'],
-dev_pkgs => ['librdkafka-dev'],
+$librdkafka_version = $::lsbdistcodename ? {
+'jessie'  => '0.9.4-1~jessie1',
+'stretch' => '0.9.3-1',
 }
 
+# We are only installing librdkafka packages here, so make all
+# in scope package resources ensure the version.
+# See: https://phabricator.wikimedia.org/T185016
+Package {
+ensure => $librdkafka_version
+}
+# Need to use package resource directly, so we can ensure version.
+if !defined(Package['librdkafka1']) {
+package { 'librdkafka1': }
+}
+if !defined(Package['librdkafka++1']) {
+package { 'librdkafka++1': }
+}
+if $::service::configuration::use_dev_pkgs and 
!defined(Package['librdkafka-dev']) {
+package { 'librdkafka-dev': }
+}
+
+# TODO: restore use of service::packages when we no longer need to
+# ensure a specific librdkafka version.
+# service::packages { 'changeprop':
+# pkgs => ['librdkafka++1', 'librdkafka1'],
+# dev_pkgs => ['librdkafka-dev'],
+# }
 }
diff --git a/modules/profile/manifests/eventstreams.pp 
b/modules/profile/manifests/eventstreams.pp
index d340984..9fbd82d 100644
--- a/modules/profile/manifests/eventstreams.pp
+++ b/modules/profile/manifests/eventstreams.pp
@@ -39,10 +39,33 @@
 ) {
 $kafka_config = kafka_config($kafka_cluster_name)
 $broker_list = $kafka_config['brokers']['string']
-service::packages { 'eventstreams':
-pkgs => ['librdkafka++1', 'librdkafka1'],
+
+
+$librdkafka_version = $::lsbdistcodename ? {
+'jessie'  => '0.9.4-1~jessie1',
+'stretch' => '0.9.3-1',
 }
 
+# We are only installing librdkafka packages here, so make all
+# in scope package resources ensure the version.
+# See: https://phabricator.wikimedia.org/T185016
+Package {
+ensure => $librdkafka_version
+}
+# Need to use package resource directly, so we can ensure version.
+if !defined(Package['librdkafka1']) {
+package { 'librdkafka1': }
+}
+if !defined(Package['librdkafka++1']) {
+package { 'librdkafka++1': }
+}
+
+# TODO: restore use of service::packages when we no longer need to
+# ensure a specific librdkafka version.
+# service::packages { 'eventstreams':
+# pkgs => ['librdkafka++1', 'librdkafka1'],
+# }
+
 service::node { 'eventstreams':
 enable=> true,
 port  => 8092,

-- 
To view, visit https://gerrit.wikimedia.org/r/404540
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: I8dac12a8b8d49da97c7dc7dced47dd4f85fde8d5
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 

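The Puppet selector in the change above maps a distro codename to a pinned librdkafka package version and fails on any unmatched codename. The same lookup, sketched in Python purely for illustration (versions copied from the patch; the helper name is hypothetical):

```python
# Distro codename -> pinned librdkafka package version, as in the
# Puppet $::lsbdistcodename selector from the change above.
LIBRDKAFKA_PINS = {
    'jessie': '0.9.4-1~jessie1',
    'stretch': '0.9.3-1',
}


def librdkafka_version(codename):
    """Return the pinned version for a codename.

    Raises KeyError for unsupported distros, mirroring a Puppet
    selector with no default entry.
    """
    return LIBRDKAFKA_PINS[codename]
```
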


[MediaWiki-commits] [Gerrit] operations/puppet[production]: Add $monitoring_enabled parameter to cache::kafka::webreques...

2018-01-16 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/403185 )

Change subject: Add $monitoring_enabled parameter to cache::kafka::webrequest 
profile
..


Add $monitoring_enabled parameter to cache::kafka::webrequest profile

Also rename $statsd_host to $statsd to match other profiles.

This should be a no-op.

The cache::kafka::webrequest profile is included in cache::base profile,
which is in turn included by the cache role classes.  As such, we set
this parameter in each cache role hiera.

Change-Id: I86dc34d21bc990ddccc94d5ab43a1763c6ada6d0
---
M hieradata/role/common/cache/canary.yaml
M hieradata/role/common/cache/misc.yaml
M hieradata/role/common/cache/text.yaml
M hieradata/role/common/cache/upload.yaml
M modules/profile/manifests/cache/kafka/webrequest.pp
5 files changed, 60 insertions(+), 34 deletions(-)

Approvals:
  Ottomata: Looks good to me, approved
  jenkins-bot: Verified



diff --git a/hieradata/role/common/cache/canary.yaml 
b/hieradata/role/common/cache/canary.yaml
index 40bb4c2..c943922 100644
--- a/hieradata/role/common/cache/canary.yaml
+++ b/hieradata/role/common/cache/canary.yaml
@@ -94,4 +94,14 @@
 # Profile::cache::ssl::unified
 profile::cache::ssl::unified::monitoring: true
 profile::cache::ssl::unified::letsencrypt: false
+
+# Enable varnishkafka-webrequest instance monitoring.
+profile::cache::kafka::webrequest::monitoring_enabled: true
+
+# This should match an entry in the kafka_clusters hash (defined in 
common.yaml).
+# We use the fully qualified kafka cluster name (with datacenter), because we 
want
+# to route all statsv -> statsd traffic to the datacenter that hosts the master
+# statsd instance.  If the active statsd instance changes to codfw (for an 
extended period of time)
+# should probably change this to main-codfw.  If you don't things will 
probably be fine,
+# but statsv will have to send messages over UDP cross-DC to the active statsd 
instance.
 profile::cache::kafka::statsv::kafka_cluster_name: main-eqiad
diff --git a/hieradata/role/common/cache/misc.yaml 
b/hieradata/role/common/cache/misc.yaml
index 3f552b5..47b242b 100644
--- a/hieradata/role/common/cache/misc.yaml
+++ b/hieradata/role/common/cache/misc.yaml
@@ -305,3 +305,6 @@
 # Profile::cache::ssl::unified
 profile::cache::ssl::unified::monitoring: true
 profile::cache::ssl::unified::letsencrypt: false
+
+# Enable varnishkafka-webrequest instance monitoring.
+profile::cache::kafka::webrequest::monitoring_enabled: true
diff --git a/hieradata/role/common/cache/text.yaml 
b/hieradata/role/common/cache/text.yaml
index 40e5c5d..a319c17 100644
--- a/hieradata/role/common/cache/text.yaml
+++ b/hieradata/role/common/cache/text.yaml
@@ -100,6 +100,9 @@
 profile::cache::ssl::unified::monitoring: true
 profile::cache::ssl::unified::letsencrypt: false
 
+# Enable varnishkafka-webrequest instance monitoring.
+profile::cache::kafka::webrequest::monitoring_enabled: true
+
 # This should match an entry in the kafka_clusters hash (defined in 
common.yaml).
 # We use the fully qualified kafka cluster name (with datacenter), because we 
want
 # to route all statsv -> statsd traffic to the datacenter that hosts the master
diff --git a/hieradata/role/common/cache/upload.yaml 
b/hieradata/role/common/cache/upload.yaml
index b5c97ec..5f32a80 100644
--- a/hieradata/role/common/cache/upload.yaml
+++ b/hieradata/role/common/cache/upload.yaml
@@ -71,3 +71,6 @@
 # Profile::cache::ssl::unified
 profile::cache::ssl::unified::monitoring: true
 profile::cache::ssl::unified::letsencrypt: false
+
+# Enable varnishkafka-webrequest instance monitoring.
+profile::cache::kafka::webrequest::monitoring_enabled: true
diff --git a/modules/profile/manifests/cache/kafka/webrequest.pp 
b/modules/profile/manifests/cache/kafka/webrequest.pp
index 6c4a17c..655779b 100644
--- a/modules/profile/manifests/cache/kafka/webrequest.pp
+++ b/modules/profile/manifests/cache/kafka/webrequest.pp
@@ -5,15 +5,19 @@
 #
 # === Parameters
 #
+# [*monitoring_enabled*]
+#   True if the varnishkafka instance should be monitored.
+#
 # [*cache_cluster*]
 #   the name of the cache cluster
 #
-# [*statsd_host*]
-#   the host to send statsd data to.
+# [*statsd*]
+#   The host:port to send statsd data to.
 #
 class profile::cache::kafka::webrequest(
-$cache_cluster = hiera('cache::cluster'),
-$statsd_host = hiera('statsd'),
+$monitoring_enabled = 
hiera('profile::cache::kafka::webrequest::monitoring_enabled', false),
+$cache_cluster  = hiera('cache::cluster'),
+$statsd = hiera('statsd'),
 ) {
 $config = kafka_config('analytics')
 # NOTE: This is used by inheriting classes role::cache::kafka::*
@@ -120,38 +124,41 @@
 force_protocol_version   => $kafka_protocol_version,
 }
 
-# Generate icinga alert if varnishkafka is not running.
-nrpe::monitor_service { 'varnishkafka-webrequest':
-description   => 

[MediaWiki-commits] [Gerrit] operations/puppet[production]: Also disable SHA224

2018-01-11 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/403774 )

Change subject: Also disable SHA224
..


Also disable SHA224

Bug: T182993
Change-Id: Ia39e54128a102a4b6ad1cf35e7cc0d89f94ab668
---
M modules/profile/files/kafka/java.security
1 file changed, 1 insertion(+), 1 deletion(-)

Approvals:
  Ottomata: Verified; Looks good to me, approved



diff --git a/modules/profile/files/kafka/java.security 
b/modules/profile/files/kafka/java.security
index 3d4d5f1..68c6864 100644
--- a/modules/profile/files/kafka/java.security
+++ b/modules/profile/files/kafka/java.security
@@ -548,7 +548,7 @@
 jdk.certpath.disabledAlgorithms=MD2, MD5, SHA1 jdkCA & usage TLSServer, \
  RSA keySize < 1024, DSA keySize < 1024, EC keySize < 224
 
-jdk.certpath.disabledAlgorithms=MD2, MD5, SHA1, DSA, RSA keySize < 2048, EC 
keySize < 224
+jdk.certpath.disabledAlgorithms=MD2, MD5, SHA1, SHA224, DSA, RSA keySize < 
2048, EC keySize < 224
 
 #
 # Algorithm restrictions for signed JAR files

-- 
To view, visit https://gerrit.wikimedia.org/r/403774
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: merged
Gerrit-Change-Id: Ia39e54128a102a4b6ad1cf35e7cc0d89f94ab668
Gerrit-PatchSet: 2
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 
Gerrit-Reviewer: BBlack 
Gerrit-Reviewer: Elukey 
Gerrit-Reviewer: Ottomata 
Gerrit-Reviewer: jenkins-bot <>

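The jdk.certpath.disabledAlgorithms property edited above is a comma-separated list where each entry is an algorithm name, optionally qualified by a constraint such as "keySize < 2048". A rough Python sketch of parsing that value (illustrative only; the real java.security syntax supports more constraint forms, e.g. "jdkCA & usage TLSServer", than handled here):

```python
def parse_disabled_algorithms(value):
    """Parse a jdk.certpath.disabledAlgorithms-style value into
    (algorithm, minimum_key_size_or_None) pairs.

    Only the 'keySize < N' constraint is recognized; every other
    entry is treated as an unconditional algorithm name.
    """
    entries = []
    for item in value.split(','):
        item = item.strip()
        if ' keySize < ' in item:
            alg, size = item.split(' keySize < ')
            entries.append((alg.strip(), int(size)))
        else:
            entries.append((item, None))
    return entries
```
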


[MediaWiki-commits] [Gerrit] operations/puppet[production]: Also disable SHA224

2018-01-11 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/403774 )

Change subject: Also disable SHA224
..

Also disable SHA224

Bug: T182993
Change-Id: Ia39e54128a102a4b6ad1cf35e7cc0d89f94ab668
---
M modules/profile/files/kafka/java.security
1 file changed, 1 insertion(+), 1 deletion(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet 
refs/changes/74/403774/1

diff --git a/modules/profile/files/kafka/java.security 
b/modules/profile/files/kafka/java.security
index 3d4d5f1..68c6864 100644
--- a/modules/profile/files/kafka/java.security
+++ b/modules/profile/files/kafka/java.security
@@ -548,7 +548,7 @@
 jdk.certpath.disabledAlgorithms=MD2, MD5, SHA1 jdkCA & usage TLSServer, \
  RSA keySize < 1024, DSA keySize < 1024, EC keySize < 224
 
-jdk.certpath.disabledAlgorithms=MD2, MD5, SHA1, DSA, RSA keySize < 2048, EC 
keySize < 224
+jdk.certpath.disabledAlgorithms=MD2, MD5, SHA1, SHA224, DSA, RSA keySize < 
2048, EC keySize < 224
 
 #
 # Algorithm restrictions for signed JAR files

-- 
To view, visit https://gerrit.wikimedia.org/r/403774
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: Ia39e54128a102a4b6ad1cf35e7cc0d89f94ab668
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 



[MediaWiki-commits] [Gerrit] operations/puppet[production]: Allow certificates RSA keySize > 2048, Puppet generates cert...

2018-01-11 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/403762 )

Change subject: Allow certificates RSA keySize > 2048, Puppet generates certs 
like these
..

Allow certificates RSA keySize > 2048, Puppet generates certs like these

Bug: T182993
Change-Id: I65d89cdfa74d2b39eeb9ce3f85f87785f99f555c
---
M modules/profile/files/kafka/java.security
1 file changed, 1 insertion(+), 3 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet 
refs/changes/62/403762/1

diff --git a/modules/profile/files/kafka/java.security 
b/modules/profile/files/kafka/java.security
index aa9e114..3d4d5f1 100644
--- a/modules/profile/files/kafka/java.security
+++ b/modules/profile/files/kafka/java.security
@@ -548,9 +548,7 @@
 jdk.certpath.disabledAlgorithms=MD2, MD5, SHA1 jdkCA & usage TLSServer, \
  RSA keySize < 1024, DSA keySize < 1024, EC keySize < 224
 
-# TODO: Temporarily disable this.  It is not working with Puppet signed 
certificates.
-# Not sure why yet.
-#jdk.certpath.disabledAlgorithms=MD2, MD5, SHA1, RSA, DSA, EC keySize < 224
+jdk.certpath.disabledAlgorithms=MD2, MD5, SHA1, DSA, RSA keySize < 2048, EC 
keySize < 224
 
 #
 # Algorithm restrictions for signed JAR files

-- 
To view, visit https://gerrit.wikimedia.org/r/403762
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: I65d89cdfa74d2b39eeb9ce3f85f87785f99f555c
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 



[MediaWiki-commits] [Gerrit] operations/puppet[production]: Allow certificates RSA keySize > 2048, Puppet generates cert...

2018-01-11 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/403762 )

Change subject: Allow certificates RSA keySize > 2048, Puppet generates certs 
like these
..


Allow certificates RSA keySize > 2048, Puppet generates certs like these

Bug: T182993
Change-Id: I65d89cdfa74d2b39eeb9ce3f85f87785f99f555c
---
M modules/profile/files/kafka/java.security
1 file changed, 1 insertion(+), 3 deletions(-)

Approvals:
  Ottomata: Looks good to me, approved
  jenkins-bot: Verified



diff --git a/modules/profile/files/kafka/java.security 
b/modules/profile/files/kafka/java.security
index aa9e114..3d4d5f1 100644
--- a/modules/profile/files/kafka/java.security
+++ b/modules/profile/files/kafka/java.security
@@ -548,9 +548,7 @@
 jdk.certpath.disabledAlgorithms=MD2, MD5, SHA1 jdkCA & usage TLSServer, \
  RSA keySize < 1024, DSA keySize < 1024, EC keySize < 224
 
-# TODO: Temporarily disable this.  It is not working with Puppet signed 
certificates.
-# Not sure why yet.
-#jdk.certpath.disabledAlgorithms=MD2, MD5, SHA1, RSA, DSA, EC keySize < 224
+jdk.certpath.disabledAlgorithms=MD2, MD5, SHA1, DSA, RSA keySize < 2048, EC 
keySize < 224
 
 #
 # Algorithm restrictions for signed JAR files

-- 
To view, visit https://gerrit.wikimedia.org/r/403762
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: merged
Gerrit-Change-Id: I65d89cdfa74d2b39eeb9ce3f85f87785f99f555c
Gerrit-PatchSet: 2
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 
Gerrit-Reviewer: BBlack 
Gerrit-Reviewer: Elukey 
Gerrit-Reviewer: Ottomata 
Gerrit-Reviewer: jenkins-bot <>



[MediaWiki-commits] [Gerrit] operations/puppet[production]: Revert yesterday's change to kafka-jumbo java.security

2018-01-11 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/403753 )

Change subject: Revert yesterday's change to kafka-jumbo java.security
..


Revert yesterday's change to kafka-jumbo java.security

The certpath restrictions don't work with Puppet signed certs.
Not sure why yet.

Bug: T182993
Change-Id: Ibd5e969a09dd34826d5bd7f3de2a9ad2b0f0c8b8
---
M modules/profile/files/kafka/java.security
1 file changed, 5 insertions(+), 3 deletions(-)

Approvals:
  Ottomata: Looks good to me, approved
  jenkins-bot: Verified



diff --git a/modules/profile/files/kafka/java.security 
b/modules/profile/files/kafka/java.security
index e93174d..aa9e114 100644
--- a/modules/profile/files/kafka/java.security
+++ b/modules/profile/files/kafka/java.security
@@ -545,10 +545,12 @@
 #
 # NOTE: The disabledAlgorithms has been modified for use with WMF Kafka 
brokers.
 #   See: https://phabricator.wikimedia.org/T182993
-# jdk.certpath.disabledAlgorithms=MD2, MD5, SHA1 jdkCA & usage TLSServer, \
-# RSA keySize < 1024, DSA keySize < 1024, EC keySize < 224
+jdk.certpath.disabledAlgorithms=MD2, MD5, SHA1 jdkCA & usage TLSServer, \
+ RSA keySize < 1024, DSA keySize < 1024, EC keySize < 224
 
-jdk.certpath.disabledAlgorithms=MD2, MD5, SHA1, RSA, DSA, EC keySize < 224
+# TODO: Temporarily disable this.  It is not working with Puppet signed 
certificates.
+# Not sure why yet.
+#jdk.certpath.disabledAlgorithms=MD2, MD5, SHA1, RSA, DSA, EC keySize < 224
 
 #
 # Algorithm restrictions for signed JAR files

-- 
To view, visit https://gerrit.wikimedia.org/r/403753
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: merged
Gerrit-Change-Id: Ibd5e969a09dd34826d5bd7f3de2a9ad2b0f0c8b8
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 
Gerrit-Reviewer: Ottomata 
Gerrit-Reviewer: jenkins-bot <>



[MediaWiki-commits] [Gerrit] operations/puppet[production]: Revert yesterday's change to kafka-jumbo java.security

2018-01-11 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/403753 )

Change subject: Revert yesterday's change to kafka-jumbo java.security
..

Revert yesterday's change to kafka-jumbo java.security

The certpath restrictions don't work with Puppet signed certs.
Not sure why yet.

Bug: T182993
Change-Id: Ibd5e969a09dd34826d5bd7f3de2a9ad2b0f0c8b8
---
M modules/profile/files/kafka/java.security
1 file changed, 5 insertions(+), 3 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet 
refs/changes/53/403753/1

diff --git a/modules/profile/files/kafka/java.security 
b/modules/profile/files/kafka/java.security
index e93174d..aa9e114 100644
--- a/modules/profile/files/kafka/java.security
+++ b/modules/profile/files/kafka/java.security
@@ -545,10 +545,12 @@
 #
 # NOTE: The disabledAlgorithms has been modified for use with WMF Kafka 
brokers.
 #   See: https://phabricator.wikimedia.org/T182993
-# jdk.certpath.disabledAlgorithms=MD2, MD5, SHA1 jdkCA & usage TLSServer, \
-# RSA keySize < 1024, DSA keySize < 1024, EC keySize < 224
+jdk.certpath.disabledAlgorithms=MD2, MD5, SHA1 jdkCA & usage TLSServer, \
+ RSA keySize < 1024, DSA keySize < 1024, EC keySize < 224
 
-jdk.certpath.disabledAlgorithms=MD2, MD5, SHA1, RSA, DSA, EC keySize < 224
+# TODO: Temporarily disable this.  It is not working with Puppet signed 
certificates.
+# Not sure why yet.
+#jdk.certpath.disabledAlgorithms=MD2, MD5, SHA1, RSA, DSA, EC keySize < 224
 
 #
 # Algorithm restrictions for signed JAR files

-- 
To view, visit https://gerrit.wikimedia.org/r/403753
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: Ibd5e969a09dd34826d5bd7f3de2a9ad2b0f0c8b8
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 



[MediaWiki-commits] [Gerrit] mediawiki...eventstreams[master]: Fixes for updated linter

2018-01-10 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/403461 )

Change subject: Fixes for updated linter
..


Fixes for updated linter

Bug: T171011
Change-Id: I68bc0200d51b4a82f36f4116cd7ef06c45a48a43
---
M lib/eventstreams-util.js
M routes/stream.js
2 files changed, 2 insertions(+), 3 deletions(-)

Approvals:
  Ottomata: Verified; Looks good to me, approved



diff --git a/lib/eventstreams-util.js b/lib/eventstreams-util.js
index 71e3535..0b4a3bc 100644
--- a/lib/eventstreams-util.js
+++ b/lib/eventstreams-util.js
@@ -128,14 +128,14 @@
  */
 class IntervalCounter {
 /**
- * @param {function} cb cb function that takes key, value.
+ * @param {callback} cb cb function that takes key, value.
  *  This will be called for each stored 
counter
  * @param {integer}  intervalMs cb will be called for every key, count 
this often.
  *  Default: 5000
 * @param {boolean}  shouldReset If true, each stored counter will be 
nulled every
  *  interval ms.  Default: false
  *
- * @constructor
+ * @class
  */
 constructor(cb, intervalMs, shouldReset) {
 this.cb  = cb;
diff --git a/routes/stream.js b/routes/stream.js
index a7c53fb..b48b701 100644
--- a/routes/stream.js
+++ b/routes/stream.js
@@ -2,7 +2,6 @@
 
 const os = require('os');
 const kafkaSse = require('kafka-sse');
-const rdkafkaStatsd = require('node-rdkafka-statsd');
 
 const sUtil = require('../lib/util');
 const eUtil = require('../lib/eventstreams-util');

-- 
To view, visit https://gerrit.wikimedia.org/r/403461
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: merged
Gerrit-Change-Id: I68bc0200d51b4a82f36f4116cd7ef06c45a48a43
Gerrit-PatchSet: 1
Gerrit-Project: mediawiki/services/eventstreams
Gerrit-Branch: master
Gerrit-Owner: Ottomata 
Gerrit-Reviewer: Ottomata 

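The JSDoc in the eventstreams-util.js diff above describes IntervalCounter: it stores keyed counts, invokes a callback with each (key, count) every intervalMs, and optionally resets the counters after each interval. A Python sketch of the same counter pattern (illustrative only; the timer is replaced by an explicit flush() so the reporting interval is left to the caller):

```python
class IntervalCounter:
    """Sketch of the counter pattern from eventstreams-util's
    IntervalCounter.  Counts keyed events and reports them via a
    callback; unlike the real JS class, flush() is called manually
    instead of firing every intervalMs milliseconds.
    """

    def __init__(self, cb, should_reset=False):
        self.cb = cb                    # called as cb(key, count)
        self.should_reset = should_reset
        self.counters = {}

    def increment(self, key, amount=1):
        self.counters[key] = self.counters.get(key, 0) + amount

    def flush(self):
        # Report every stored counter, then optionally null them out,
        # as the shouldReset flag does in the JS implementation.
        for key, count in self.counters.items():
            self.cb(key, count)
        if self.should_reset:
            self.counters = {}
```
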


[MediaWiki-commits] [Gerrit] mediawiki...eventstreams[master]: Squash merge service-template-node v0.5.4 and fix conflicts

2018-01-10 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/403460 )

Change subject: Squash merge service-template-node v0.5.4 and fix conflicts
..


Squash merge service-template-node v0.5.4 and fix conflicts

Bug: T171011

Conflicts:
.travis.yml
app.js
lib/api-util.js
lib/swagger-ui.js
lib/util.js
package.json
routes/ex.js
routes/root.js
routes/v1.js
test/features/app/app.js
test/features/app/spec.js
test/features/ex/errors.js
test/features/v1/page.js
test/features/v1/siteinfo.js

Change-Id: Ib931ccff5b4175fd8752c5b92e98860ffc074ea7
---
A .nsprc
M .travis.yml
M app.js
M lib/swagger-ui.js
M lib/util.js
M package.json
M server.js
M test/features/app/app.js
M test/features/app/spec.js
M test/features/info/info.js
M test/utils/assert.js
M test/utils/logStream.js
M test/utils/server.js
13 files changed, 380 insertions(+), 346 deletions(-)

Approvals:
  Ottomata: Verified; Looks good to me, approved



diff --git a/.nsprc b/.nsprc
new file mode 100644
index 000..98b2ef4
--- /dev/null
+++ b/.nsprc
@@ -0,0 +1,5 @@
+{
+  "exceptions": [
+"https://nodesecurity.io/advisories/532"
+  ]
+}
diff --git a/.travis.yml b/.travis.yml
index df202fa..b6a9e5d 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -5,3 +5,5 @@
 node_js:
   - "4"
   - "6"
+  - "8"
+  - "node"
diff --git a/app.js b/app.js
index 21392ca..582d6bf 100644
--- a/app.js
+++ b/app.js
@@ -1,17 +1,16 @@
 'use strict';
 
-
-require('core-js/shim');
-
-var http = require('http');
-var BBPromise = require('bluebird');
-var express = require('express');
-var bodyParser = require('body-parser');
-var fs = BBPromise.promisifyAll(require('fs'));
-var sUtil = require('./lib/util');
-var packageInfo = require('./package.json');
-var yaml = require('js-yaml');
-var SwaggerParser = require('swagger-parser');
+const http = require('http');
+const BBPromise = require('bluebird');
+const express = require('express');
+const compression = require('compression');
+const bodyParser = require('body-parser');
+const fs = BBPromise.promisifyAll(require('fs'));
+const sUtil = require('./lib/util');
+const packageInfo = require('./package.json');
+const yaml = require('js-yaml');
+const addShutdown = require('http-shutdown');
+const SwaggerParser = require('swagger-parser');
 
 /**
  * Creates an express app and initialises it
@@ -21,7 +20,7 @@
 function initApp(options) {
 
 // the main application object
-var app = express();
+const app = express();
 
 // get the options and make them available in the app
 app.logger = options.logger;// the logging device
@@ -30,22 +29,22 @@
 app.info = packageInfo; // this app's package info
 
 // ensure some sane defaults
-if(!app.conf.port) { app.conf.port = ; }
-if(!app.conf.interface) { app.conf.interface = '0.0.0.0'; }
-if(app.conf.compression_level === undefined) { app.conf.compression_level 
= 3; }
-if(app.conf.cors === undefined) { app.conf.cors = '*'; }
-if(app.conf.csp === undefined) {
-app.conf.csp =
-"default-src 'self'; object-src 'none'; media-src *; img-src *; 
style-src *; frame-ancestors 'self'";
+if (!app.conf.port) { app.conf.port = ; }
+if (!app.conf.interface) { app.conf.interface = '0.0.0.0'; }
+if (app.conf.compression_level === undefined) { app.conf.compression_level 
= 3; }
+if (app.conf.cors === undefined) { app.conf.cors = '*'; }
+if (app.conf.csp === undefined) {
+// eslint-disable-next-line max-len
+app.conf.csp = "default-src 'self'; object-src 'none'; media-src *; 
img-src *; style-src *; frame-ancestors 'self'";
 }
 
 // set outgoing proxy
-if(app.conf.proxy) {
+if (app.conf.proxy) {
 process.env.HTTP_PROXY = app.conf.proxy;
 // if there is a list of domains which should
 // not be proxied, set it
-if(app.conf.no_proxy_list) {
-if(Array.isArray(app.conf.no_proxy_list)) {
+if (app.conf.no_proxy_list) {
+if (Array.isArray(app.conf.no_proxy_list)) {
 process.env.NO_PROXY = app.conf.no_proxy_list.join(',');
 } else {
 process.env.NO_PROXY = app.conf.no_proxy_list;
@@ -54,32 +53,32 @@
 }
 
 // set up header whitelisting for logging
-if(!app.conf.log_header_whitelist) {
+if (!app.conf.log_header_whitelist) {
 app.conf.log_header_whitelist = [
-'cache-control', 'content-type', 'content-length', 'if-match',
-'user-agent', 'x-request-id'
+'cache-control', 'content-type', 'content-length', 'if-match',
+'user-agent', 'x-request-id'
 ];
 }
-app.conf.log_header_whitelist = new RegExp('^(?:' + 
app.conf.log_header_whitelist.map(function(item) {
+app.conf.log_header_whitelist 

[MediaWiki-commits] [Gerrit] mediawiki...eventstreams[master]: Fixes for updated linter

2018-01-10 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/403461 )

Change subject: Fixes for updated linter
..

Fixes for updated linter

Bug: T171011
Change-Id: I68bc0200d51b4a82f36f4116cd7ef06c45a48a43
---
M lib/eventstreams-util.js
M routes/stream.js
2 files changed, 2 insertions(+), 3 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/mediawiki/services/eventstreams 
refs/changes/61/403461/1

diff --git a/lib/eventstreams-util.js b/lib/eventstreams-util.js
index 71e3535..0b4a3bc 100644
--- a/lib/eventstreams-util.js
+++ b/lib/eventstreams-util.js
@@ -128,14 +128,14 @@
  */
 class IntervalCounter {
 /**
- * @param {function} cb cb function that takes key, value.
+ * @param {callback} cb cb function that takes key, value.
  *  This will be called for each stored 
counter
  * @param {integer}  intervalMs cb will be called for every key, count 
this often.
  *  Default: 5000
 * @param {boolean}  shouldReset If true, each stored counter will be 
nulled every
  *  interval ms.  Default: false
  *
- * @constructor
+ * @class
  */
 constructor(cb, intervalMs, shouldReset) {
 this.cb  = cb;
diff --git a/routes/stream.js b/routes/stream.js
index a7c53fb..b48b701 100644
--- a/routes/stream.js
+++ b/routes/stream.js
@@ -2,7 +2,6 @@
 
 const os = require('os');
 const kafkaSse = require('kafka-sse');
-const rdkafkaStatsd = require('node-rdkafka-statsd');
 
 const sUtil = require('../lib/util');
 const eUtil = require('../lib/eventstreams-util');

-- 
To view, visit https://gerrit.wikimedia.org/r/403461
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: I68bc0200d51b4a82f36f4116cd7ef06c45a48a43
Gerrit-PatchSet: 1
Gerrit-Project: mediawiki/services/eventstreams
Gerrit-Branch: master
Gerrit-Owner: Ottomata 



[MediaWiki-commits] [Gerrit] mediawiki...eventstreams[master]: Squash merge service-template-node v0.5.4 and fix conflicts

2018-01-10 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/403460 )

Change subject: Squash merge service-template-node v0.5.4 and fix conflicts
..

Squash merge service-template-node v0.5.4 and fix conflicts

Bug: T171011

Conflicts:
.travis.yml
app.js
lib/api-util.js
lib/swagger-ui.js
lib/util.js
package.json
routes/ex.js
routes/root.js
routes/v1.js
test/features/app/app.js
test/features/app/spec.js
test/features/ex/errors.js
test/features/v1/page.js
test/features/v1/siteinfo.js

Change-Id: Ib931ccff5b4175fd8752c5b92e98860ffc074ea7
---
A .nsprc
M .travis.yml
M app.js
M lib/swagger-ui.js
M lib/util.js
M package.json
M server.js
M test/features/app/app.js
M test/features/app/spec.js
M test/features/info/info.js
M test/utils/assert.js
M test/utils/logStream.js
M test/utils/server.js
13 files changed, 380 insertions(+), 346 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/mediawiki/services/eventstreams 
refs/changes/60/403460/1

diff --git a/.nsprc b/.nsprc
new file mode 100644
index 000..98b2ef4
--- /dev/null
+++ b/.nsprc
@@ -0,0 +1,5 @@
+{
+  "exceptions": [
+"https://nodesecurity.io/advisories/532"
+  ]
+}
diff --git a/.travis.yml b/.travis.yml
index df202fa..b6a9e5d 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -5,3 +5,5 @@
 node_js:
   - "4"
   - "6"
+  - "8"
+  - "node"
diff --git a/app.js b/app.js
index 21392ca..582d6bf 100644
--- a/app.js
+++ b/app.js
@@ -1,17 +1,16 @@
 'use strict';
 
-
-require('core-js/shim');
-
-var http = require('http');
-var BBPromise = require('bluebird');
-var express = require('express');
-var bodyParser = require('body-parser');
-var fs = BBPromise.promisifyAll(require('fs'));
-var sUtil = require('./lib/util');
-var packageInfo = require('./package.json');
-var yaml = require('js-yaml');
-var SwaggerParser = require('swagger-parser');
+const http = require('http');
+const BBPromise = require('bluebird');
+const express = require('express');
+const compression = require('compression');
+const bodyParser = require('body-parser');
+const fs = BBPromise.promisifyAll(require('fs'));
+const sUtil = require('./lib/util');
+const packageInfo = require('./package.json');
+const yaml = require('js-yaml');
+const addShutdown = require('http-shutdown');
+const SwaggerParser = require('swagger-parser');
 
 /**
  * Creates an express app and initialises it
@@ -21,7 +20,7 @@
 function initApp(options) {
 
 // the main application object
-var app = express();
+const app = express();
 
 // get the options and make them available in the app
 app.logger = options.logger;// the logging device
@@ -30,22 +29,22 @@
 app.info = packageInfo; // this app's package info
 
 // ensure some sane defaults
-if(!app.conf.port) { app.conf.port = ; }
-if(!app.conf.interface) { app.conf.interface = '0.0.0.0'; }
-if(app.conf.compression_level === undefined) { app.conf.compression_level 
= 3; }
-if(app.conf.cors === undefined) { app.conf.cors = '*'; }
-if(app.conf.csp === undefined) {
-app.conf.csp =
-"default-src 'self'; object-src 'none'; media-src *; img-src *; 
style-src *; frame-ancestors 'self'";
+if (!app.conf.port) { app.conf.port = ; }
+if (!app.conf.interface) { app.conf.interface = '0.0.0.0'; }
+if (app.conf.compression_level === undefined) { app.conf.compression_level 
= 3; }
+if (app.conf.cors === undefined) { app.conf.cors = '*'; }
+if (app.conf.csp === undefined) {
+// eslint-disable-next-line max-len
+app.conf.csp = "default-src 'self'; object-src 'none'; media-src *; 
img-src *; style-src *; frame-ancestors 'self'";
 }
 
 // set outgoing proxy
-if(app.conf.proxy) {
+if (app.conf.proxy) {
 process.env.HTTP_PROXY = app.conf.proxy;
 // if there is a list of domains which should
 // not be proxied, set it
-if(app.conf.no_proxy_list) {
-if(Array.isArray(app.conf.no_proxy_list)) {
+if (app.conf.no_proxy_list) {
+if (Array.isArray(app.conf.no_proxy_list)) {
 process.env.NO_PROXY = app.conf.no_proxy_list.join(',');
 } else {
 process.env.NO_PROXY = app.conf.no_proxy_list;
@@ -54,32 +53,32 @@
 }
 
 // set up header whitelisting for logging
-if(!app.conf.log_header_whitelist) {
+if (!app.conf.log_header_whitelist) {
 app.conf.log_header_whitelist = [
-'cache-control', 'content-type', 'content-length', 'if-match',
-'user-agent', 'x-request-id'
+'cache-control', 'content-type', 'content-length', 'if-match',
+'user-agent', 'x-request-id'
 ];
 }
-app.conf.log_header_whitelist = new RegExp('^(?:' + 
app.conf.log_header_whitelist.map(function(item) {
+  
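The truncated app.js hunk above compiles `app.conf.log_header_whitelist` into a single case-insensitive regex used to filter which request headers get logged. A minimal sketch of the same idea (in Java for illustration; the actual service-template-node code is JavaScript, and the class and method names here are invented for the example):

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

// Illustrative sketch only: join a header whitelist into one anchored,
// case-insensitive alternation, as the truncated app.js change appears to do.
public class HeaderWhitelist {
    static Pattern compileWhitelist(List<String> headers) {
        String alternation = headers.stream()
                .map(Pattern::quote)  // escape regex metacharacters in header names
                .collect(Collectors.joining("|"));
        return Pattern.compile("^(?:" + alternation + ")$", Pattern.CASE_INSENSITIVE);
    }

    public static void main(String[] args) {
        Pattern p = compileWhitelist(List.of(
                "cache-control", "content-type", "content-length", "if-match",
                "user-agent", "x-request-id"));
        System.out.println(p.matcher("User-Agent").matches());
        System.out.println(p.matcher("cookie").matches());
    }
}
```

Matching against the compiled pattern is then a single `matcher(...).matches()` call per header name, rather than a loop over the whitelist.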

[MediaWiki-commits] [Gerrit] operations/puppet[production]: Set jdk.certpath.disabledAlgorithms in java.security on Kafk...

2018-01-10 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/403415 )

Change subject: Set jdk.certpath.disabledAlgorithms in java.security on Kafka 
brokers
..


Set jdk.certpath.disabledAlgorithms in java.security on Kafka brokers

Bug: T182993
Change-Id: I1a2d1a30a4430d3d678e8a274b251c474b435a61
---
A modules/profile/files/kafka/java.security
M modules/profile/manifests/kafka/broker.pp
2 files changed, 926 insertions(+), 0 deletions(-)

Approvals:
  Ottomata: Verified; Looks good to me, approved



diff --git a/modules/profile/files/kafka/java.security 
b/modules/profile/files/kafka/java.security
new file mode 100644
index 000..e93174d
--- /dev/null
+++ b/modules/profile/files/kafka/java.security
@@ -0,0 +1,916 @@
+# NOTE: This file is managed by Puppet.
+#
+# This is the "master security properties file".
+#
+# An alternate java.security properties file may be specified
+# from the command line via the system property
+#
+#-Djava.security.properties=<URL>
+#
+# This properties file appends to the master security properties file.
+# If both properties files specify values for the same key, the value
+# from the command-line properties file is selected, as it is the last
+# one loaded.
+#
+# Also, if you specify
+#
+#-Djava.security.properties==<URL> (2 equals),
+#
+# then that properties file completely overrides the master security
+# properties file.
+#
+# To disable the ability to specify an additional properties file from
+# the command line, set the key security.overridePropertiesFile
+# to false in the master security properties file. It is set to true
+# by default.
+
+# In this file, various security properties are set for use by
+# java.security classes. This is where users can statically register
+# Cryptography Package Providers ("providers" for short). The term
+# "provider" refers to a package or set of packages that supply a
+# concrete implementation of a subset of the cryptography aspects of
+# the Java Security API. A provider may, for example, implement one or
+# more digital signature algorithms or message digest algorithms.
+#
+# Each provider must implement a subclass of the Provider class.
+# To register a provider in this master security properties file,
+# specify the Provider subclass name and priority in the format
+#
+#security.provider.<n>=<className>
+#
+# This declares a provider, and specifies its preference
+# order n. The preference order is the order in which providers are
+# searched for requested algorithms (when no specific provider is
+# requested). The order is 1-based; 1 is the most preferred, followed
+# by 2, and so on.
+#
+# <className> must specify the subclass of the Provider class whose
+# constructor sets the values of various properties that are required
+# for the Java Security API to look up the algorithms or other
+# facilities implemented by the provider.
+#
+# There must be at least one provider specification in java.security.
+# There is a default provider that comes standard with the JDK. It
+# is called the "SUN" provider, and its Provider subclass
+# named Sun appears in the sun.security.provider package. Thus, the
+# "SUN" provider is registered via the following:
+#
+#security.provider.1=sun.security.provider.Sun
+#
+# (The number 1 is used for the default provider.)
+#
+# Note: Providers can be dynamically registered instead by calls to
+# either the addProvider or insertProviderAt method in the Security
+# class.
+
+#
+# List of providers and their preference orders (see above):
+#
+security.provider.1=sun.security.provider.Sun
+security.provider.2=sun.security.rsa.SunRsaSign
+security.provider.3=sun.security.ec.SunEC
+security.provider.4=com.sun.net.ssl.internal.ssl.Provider
+security.provider.5=com.sun.crypto.provider.SunJCE
+security.provider.6=sun.security.jgss.SunProvider
+security.provider.7=com.sun.security.sasl.Provider
+security.provider.8=org.jcp.xml.dsig.internal.dom.XMLDSigRI
+security.provider.9=sun.security.smartcardio.SunPCSC
+
+#
+# Sun Provider SecureRandom seed source.
+#
+# Select the primary source of seed data for the "SHA1PRNG" and
+# "NativePRNG" SecureRandom implementations in the "Sun" provider.
+# (Other SecureRandom implementations might also use this property.)
+#
+# On Unix-like systems (for example, Solaris/Linux/MacOS), the
+# "NativePRNG" and "SHA1PRNG" implementations obtain seed data from
+# special device files such as file:/dev/random.
+#
+# On Windows systems, specifying the URLs "file:/dev/random" or
+# "file:/dev/urandom" will enable the native Microsoft CryptoAPI seeding
+# mechanism for SHA1PRNG.
+#
+# By default, an attempt is made to use the entropy gathering device
+# specified by the "securerandom.source" Security property.  If an
+# exception occurs while accessing the specified URL:
+#
+# SHA1PRNG:
+# the traditional system/thread activity algorithm will be used.
+#
+# NativePRNG:
+# a default 
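The java.security excerpt above describes the 1-based provider preference order and the `jdk.certpath.disabledAlgorithms` property that this change overrides on the Kafka brokers (the concrete value is truncated in this excerpt). A minimal sketch, using only standard JDK APIs, of inspecting both at runtime; the actual output depends on the local JDK and any loaded security properties file:

```java
import java.security.Provider;
import java.security.Security;

// Sketch: list registered providers in preference order and read the
// certpath disabled-algorithms property that the puppetized file sets.
public class SecurityPropsDemo {
    public static void main(String[] args) {
        // Providers are consulted in this order when no provider is requested.
        for (Provider p : Security.getProviders()) {
            System.out.println(p.getName());
        }
        // May be the JDK default, or the broker's override if a custom
        // -Djava.security.properties file was loaded.
        String disabled = Security.getProperty("jdk.certpath.disabledAlgorithms");
        System.out.println("jdk.certpath.disabledAlgorithms = " + disabled);
    }
}
```

Launching the broker JVM with `-Djava.security.properties=<file>` appends to the master file, while the `==` form replaces it entirely, as the comments above explain.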

[MediaWiki-commits] [Gerrit] operations/puppet[production]: Set jdk.certpath.disabledAlgorithms in java.security on Kafk...

2018-01-10 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/403415 )

Change subject: Set jdk.certpath.disabledAlgorithms in java.security on Kafka 
brokers
..

Set jdk.certpath.disabledAlgorithms in java.security on Kafka brokers

Bug: T182993
Change-Id: I1a2d1a30a4430d3d678e8a274b251c474b435a61
---
A modules/profile/files/kafka/java.security
M modules/profile/manifests/kafka/broker.pp
2 files changed, 926 insertions(+), 0 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet 
refs/changes/15/403415/1

diff --git a/modules/profile/files/kafka/java.security 
b/modules/profile/files/kafka/java.security
new file mode 100644
index 000..e93174d
--- /dev/null
+++ b/modules/profile/files/kafka/java.security
@@ -0,0 +1,916 @@
+# NOTE: This file is managed by Puppet.
+#
+# This is the "master security properties file".
+#
+# An alternate java.security properties file may be specified
+# from the command line via the system property
+#
+#-Djava.security.properties=<URL>
+#
+# This properties file appends to the master security properties file.
+# If both properties files specify values for the same key, the value
+# from the command-line properties file is selected, as it is the last
+# one loaded.
+#
+# Also, if you specify
+#
+#-Djava.security.properties==<URL> (2 equals),
+#
+# then that properties file completely overrides the master security
+# properties file.
+#
+# To disable the ability to specify an additional properties file from
+# the command line, set the key security.overridePropertiesFile
+# to false in the master security properties file. It is set to true
+# by default.
+
+# In this file, various security properties are set for use by
+# java.security classes. This is where users can statically register
+# Cryptography Package Providers ("providers" for short). The term
+# "provider" refers to a package or set of packages that supply a
+# concrete implementation of a subset of the cryptography aspects of
+# the Java Security API. A provider may, for example, implement one or
+# more digital signature algorithms or message digest algorithms.
+#
+# Each provider must implement a subclass of the Provider class.
+# To register a provider in this master security properties file,
+# specify the Provider subclass name and priority in the format
+#
+#security.provider.<n>=<className>
+#
+# This declares a provider, and specifies its preference
+# order n. The preference order is the order in which providers are
+# searched for requested algorithms (when no specific provider is
+# requested). The order is 1-based; 1 is the most preferred, followed
+# by 2, and so on.
+#
+# <className> must specify the subclass of the Provider class whose
+# constructor sets the values of various properties that are required
+# for the Java Security API to look up the algorithms or other
+# facilities implemented by the provider.
+#
+# There must be at least one provider specification in java.security.
+# There is a default provider that comes standard with the JDK. It
+# is called the "SUN" provider, and its Provider subclass
+# named Sun appears in the sun.security.provider package. Thus, the
+# "SUN" provider is registered via the following:
+#
+#security.provider.1=sun.security.provider.Sun
+#
+# (The number 1 is used for the default provider.)
+#
+# Note: Providers can be dynamically registered instead by calls to
+# either the addProvider or insertProviderAt method in the Security
+# class.
+
+#
+# List of providers and their preference orders (see above):
+#
+security.provider.1=sun.security.provider.Sun
+security.provider.2=sun.security.rsa.SunRsaSign
+security.provider.3=sun.security.ec.SunEC
+security.provider.4=com.sun.net.ssl.internal.ssl.Provider
+security.provider.5=com.sun.crypto.provider.SunJCE
+security.provider.6=sun.security.jgss.SunProvider
+security.provider.7=com.sun.security.sasl.Provider
+security.provider.8=org.jcp.xml.dsig.internal.dom.XMLDSigRI
+security.provider.9=sun.security.smartcardio.SunPCSC
+
+#
+# Sun Provider SecureRandom seed source.
+#
+# Select the primary source of seed data for the "SHA1PRNG" and
+# "NativePRNG" SecureRandom implementations in the "Sun" provider.
+# (Other SecureRandom implementations might also use this property.)
+#
+# On Unix-like systems (for example, Solaris/Linux/MacOS), the
+# "NativePRNG" and "SHA1PRNG" implementations obtain seed data from
+# special device files such as file:/dev/random.
+#
+# On Windows systems, specifying the URLs "file:/dev/random" or
+# "file:/dev/urandom" will enable the native Microsoft CryptoAPI seeding
+# mechanism for SHA1PRNG.
+#
+# By default, an attempt is made to use the entropy gathering device
+# specified by the "securerandom.source" Security property.  If an
+# exception occurs while accessing the specified URL:
+#
+# SHA1PRNG:
+# the traditional system/thread activity algorithm will be used.
+#
+# NativePRNG:

[MediaWiki-commits] [Gerrit] operations/puppet[production]: Use hadoop cluster name variable in camus templates

2018-01-09 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/403206 )

Change subject: Use hadoop cluster name variable in camus templates
..


Use hadoop cluster name variable in camus templates

This lets camus be puppetized in labs

Bug: T166248
Change-Id: I164c84408110a1ffebc169ff0800720ed2b192fa
---
M modules/camus/templates/eventbus.erb
M modules/camus/templates/eventlogging.erb
M modules/camus/templates/mediawiki.erb
M modules/camus/templates/mediawiki_job.erb
M modules/camus/templates/webrequest.erb
M modules/profile/manifests/analytics/refinery/job/camus.pp
6 files changed, 23 insertions(+), 19 deletions(-)

Approvals:
  Ottomata: Verified; Looks good to me, approved



diff --git a/modules/camus/templates/eventbus.erb 
b/modules/camus/templates/eventbus.erb
index cf294bb..3405fb3 100644
--- a/modules/camus/templates/eventbus.erb
+++ b/modules/camus/templates/eventbus.erb
@@ -10,17 +10,17 @@
 
 # final top-level data output directory, sub-directory will be dynamically
 # created for each topic pulled
-etl.destination.path=hdfs://analytics-hadoop/wmf/data/raw/event
+etl.destination.path=hdfs://<%= @template_variables['hadoop_cluster_name'] 
%>/wmf/data/raw/event
 
 # Allow overwrites of previously imported files in etl.destination.path
 etl.destination.overwrite=true
 
 # HDFS location where you want to keep execution files, i.e. offsets,
 # error logs, and count files
-etl.execution.base.path=hdfs://analytics-hadoop/wmf/camus/eventbus
+etl.execution.base.path=hdfs://<%= @template_variables['hadoop_cluster_name'] 
%>/wmf/camus/eventbus
 
 # where completed Camus job output directories are kept, usually a sub-dir in 
the base.path
-etl.execution.history.path=hdfs://analytics-hadoop/wmf/camus/eventbus/history
+etl.execution.history.path=hdfs://<%= 
@template_variables['hadoop_cluster_name'] %>/wmf/camus/eventbus/history
 
 # Our
 # Our timestamps look like 2013-09-20T15:40:17+00:00
diff --git a/modules/camus/templates/eventlogging.erb 
b/modules/camus/templates/eventlogging.erb
index 9397226..bed1b9d 100644
--- a/modules/camus/templates/eventlogging.erb
+++ b/modules/camus/templates/eventlogging.erb
@@ -8,12 +8,12 @@
 mapreduce.job.queuename=default
 
 # final top-level data output directory, sub-directory will be dynamically 
created for each topic pulled
-etl.destination.path=hdfs://analytics-hadoop/wmf/data/raw/eventlogging
+etl.destination.path=hdfs://<%= @template_variables['hadoop_cluster_name'] 
%>/wmf/data/raw/eventlogging
 etl.destination.overwrite=true
 # HDFS location where you want to keep execution files, i.e. offsets, error 
logs, and count files
-etl.execution.base.path=hdfs://analytics-hadoop/wmf/camus/eventlogging
+etl.execution.base.path=hdfs://<%= @template_variables['hadoop_cluster_name'] 
%>/wmf/camus/eventlogging
 # where completed Camus job output directories are kept, usually a sub-dir in 
the base.path
-etl.execution.history.path=hdfs://analytics-hadoop/wmf/camus/eventlogging/history
+etl.execution.history.path=hdfs://<%= 
@template_variables['hadoop_cluster_name'] %>/wmf/camus/eventlogging/history
 
 # Our timestamps look like 2013-09-20T15:40:17
 camus.message.timestamp.format=-MM-dd'T'HH:mm:ss
diff --git a/modules/camus/templates/mediawiki.erb 
b/modules/camus/templates/mediawiki.erb
index 2fa12d4..aff3e7f 100644
--- a/modules/camus/templates/mediawiki.erb
+++ b/modules/camus/templates/mediawiki.erb
@@ -8,12 +8,12 @@
 mapreduce.job.queuename=essential
 
 # final top-level data output directory, sub-directory will be dynamically 
created for each topic pulled
-etl.destination.path=hdfs://analytics-hadoop/wmf/data/raw/mediawiki
+etl.destination.path=hdfs://<%= @template_variables['hadoop_cluster_name'] 
%>/wmf/data/raw/mediawiki
 etl.destination.overwrite=true
 # HDFS location where you want to keep execution files, i.e. offsets, error 
logs, and count files
-etl.execution.base.path=hdfs://analytics-hadoop/wmf/camus/mediawiki
+etl.execution.base.path=hdfs://<%= @template_variables['hadoop_cluster_name'] 
%>/wmf/camus/mediawiki
 # where completed Camus job output directories are kept, usually a sub-dir in 
the base.path
-etl.execution.history.path=hdfs://analytics-hadoop/wmf/camus/mediawiki/history
+etl.execution.history.path=hdfs://<%= 
@template_variables['hadoop_cluster_name'] %>/wmf/camus/mediawiki/history
 
 # Concrete implementation of the Decoder class to use.
 
camus.message.decoder.class=org.wikimedia.analytics.refinery.camus.coders.AvroBinaryMessageDecoder
diff --git a/modules/camus/templates/mediawiki_job.erb 
b/modules/camus/templates/mediawiki_job.erb
index 528ecbf..a0f2f09 100644
--- a/modules/camus/templates/mediawiki_job.erb
+++ b/modules/camus/templates/mediawiki_job.erb
@@ -10,17 +10,17 @@
 
 # final top-level data output directory, sub-directory will be dynamically
 # created for each topic pulled
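The Camus properties above pair a comment ("Our timestamps look like 2013-09-20T15:40:17") with a `camus.message.timestamp.format` pattern that is partially garbled in this excerpt; the standard pattern for that shape is `yyyy-MM-dd'T'HH:mm:ss`. A small sketch of how Camus-style timestamp parsing with that pattern works (illustrative only; the class name and the UTC assumption are mine, not from the change):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

// Sketch: parse a message timestamp of the shape shown in the Camus
// comment using the pattern camus.message.timestamp.format appears to set.
public class CamusTimestampDemo {
    static Date parse(String ts) throws ParseException {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss");
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));  // demo assumes UTC input
        return fmt.parse(ts);
    }

    public static void main(String[] args) throws ParseException {
        Date d = parse("2013-09-20T15:40:17");
        System.out.println(d.getTime());  // epoch milliseconds
    }
}
```

Note that `SimpleDateFormat` pattern letters are case-sensitive: `yyyy`/`dd` are year and day-of-month, while `MM`/`mm` distinguish month from minute.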

[MediaWiki-commits] [Gerrit] operations/puppet[production]: Use hadoop cluster name variable in camus templates

2018-01-09 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/403206 )

Change subject: Use hadoop cluster name variable in camus templates
..

Use hadoop cluster name variable in camus templates

This lets camus be puppetized in labs

Bug: T166248

Change-Id: I164c84408110a1ffebc169ff0800720ed2b192fa
---
M modules/camus/templates/eventbus.erb
M modules/camus/templates/eventlogging.erb
M modules/camus/templates/mediawiki.erb
M modules/camus/templates/mediawiki_job.erb
M modules/camus/templates/webrequest.erb
M modules/profile/manifests/analytics/refinery/job/camus.pp
6 files changed, 18 insertions(+), 15 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet 
refs/changes/06/403206/1

diff --git a/modules/camus/templates/eventbus.erb 
b/modules/camus/templates/eventbus.erb
index cf294bb..3070cb9 100644
--- a/modules/camus/templates/eventbus.erb
+++ b/modules/camus/templates/eventbus.erb
@@ -10,17 +10,17 @@
 
 # final top-level data output directory, sub-directory will be dynamically
 # created for each topic pulled
-etl.destination.path=hdfs://analytics-hadoop/wmf/data/raw/event
+etl.destination.path=hdfs://<%= @hadoop_cluster_name %>/wmf/data/raw/event
 
 # Allow overwrites of previously imported files in etl.destination.path
 etl.destination.overwrite=true
 
 # HDFS location where you want to keep execution files, i.e. offsets,
 # error logs, and count files
-etl.execution.base.path=hdfs://analytics-hadoop/wmf/camus/eventbus
+etl.execution.base.path=hdfs://<%= @hadoop_cluster_name %>/wmf/camus/eventbus
 
 # where completed Camus job output directories are kept, usually a sub-dir in 
the base.path
-etl.execution.history.path=hdfs://analytics-hadoop/wmf/camus/eventbus/history
+etl.execution.history.path=hdfs://<%= @hadoop_cluster_name 
%>/wmf/camus/eventbus/history
 
 # Our
 # Our timestamps look like 2013-09-20T15:40:17+00:00
diff --git a/modules/camus/templates/eventlogging.erb 
b/modules/camus/templates/eventlogging.erb
index 9397226..009a9d9 100644
--- a/modules/camus/templates/eventlogging.erb
+++ b/modules/camus/templates/eventlogging.erb
@@ -8,12 +8,12 @@
 mapreduce.job.queuename=default
 
 # final top-level data output directory, sub-directory will be dynamically 
created for each topic pulled
-etl.destination.path=hdfs://analytics-hadoop/wmf/data/raw/eventlogging
+etl.destination.path=hdfs://<%= @hadoop_cluster_name 
%>/wmf/data/raw/eventlogging
 etl.destination.overwrite=true
 # HDFS location where you want to keep execution files, i.e. offsets, error 
logs, and count files
-etl.execution.base.path=hdfs://analytics-hadoop/wmf/camus/eventlogging
+etl.execution.base.path=hdfs://<%= @hadoop_cluster_name 
%>/wmf/camus/eventlogging
 # where completed Camus job output directories are kept, usually a sub-dir in 
the base.path
-etl.execution.history.path=hdfs://analytics-hadoop/wmf/camus/eventlogging/history
+etl.execution.history.path=hdfs://<%= @hadoop_cluster_name 
%>/wmf/camus/eventlogging/history
 
 # Our timestamps look like 2013-09-20T15:40:17
 camus.message.timestamp.format=yyyy-MM-dd'T'HH:mm:ss
diff --git a/modules/camus/templates/mediawiki.erb 
b/modules/camus/templates/mediawiki.erb
index 2fa12d4..1293b12 100644
--- a/modules/camus/templates/mediawiki.erb
+++ b/modules/camus/templates/mediawiki.erb
@@ -8,12 +8,12 @@
 mapreduce.job.queuename=essential
 
 # final top-level data output directory, sub-directory will be dynamically 
created for each topic pulled
-etl.destination.path=hdfs://analytics-hadoop/wmf/data/raw/mediawiki
+etl.destination.path=hdfs://<%= @hadoop_cluster_name %>/wmf/data/raw/mediawiki
 etl.destination.overwrite=true
 # HDFS location where you want to keep execution files, i.e. offsets, error 
logs, and count files
-etl.execution.base.path=hdfs://analytics-hadoop/wmf/camus/mediawiki
+etl.execution.base.path=hdfs://<%= @hadoop_cluster_name %>/wmf/camus/mediawiki
 # where completed Camus job output directories are kept, usually a sub-dir in 
the base.path
-etl.execution.history.path=hdfs://analytics-hadoop/wmf/camus/mediawiki/history
+etl.execution.history.path=hdfs://<%= @hadoop_cluster_name 
%>/wmf/camus/mediawiki/history
 
 # Concrete implementation of the Decoder class to use.
 
camus.message.decoder.class=org.wikimedia.analytics.refinery.camus.coders.AvroBinaryMessageDecoder
diff --git a/modules/camus/templates/mediawiki_job.erb 
b/modules/camus/templates/mediawiki_job.erb
index 528ecbf..2babc94 100644
--- a/modules/camus/templates/mediawiki_job.erb
+++ b/modules/camus/templates/mediawiki_job.erb
@@ -10,17 +10,17 @@
 
 # final top-level data output directory, sub-directory will be dynamically
 # created for each topic pulled
-etl.destination.path=hdfs://analytics-hadoop/wmf/data/raw/mediawiki_job
+etl.destination.path=hdfs://<%= @hadoop_cluster_name 
%>/wmf/data/raw/mediawiki_job
 
 # Allow overwrites of previously imported files in etl.destination.path
 

[MediaWiki-commits] [Gerrit] operations/puppet[production]: Add $monitoring_enabled parameter to cache::kafka::webreques...

2018-01-09 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/403185 )

Change subject: Add $monitoring_enabled parameter to cache::kafka::webrequest 
profile
..

Add $monitoring_enabled parameter to cache::kafka::webrequest profile

This should be a no-op.

The cache::kafka::webrequest profile is included in cache::base profile,
which is in turn included by the cache role classes.  As such, we set
this parameter in each cache role hiera.

Change-Id: I86dc34d21bc990ddccc94d5ab43a1763c6ada6d0
---
M hieradata/role/common/cache/canary.yaml
M hieradata/role/common/cache/misc.yaml
M hieradata/role/common/cache/text.yaml
M hieradata/role/common/cache/upload.yaml
M modules/profile/manifests/cache/kafka/webrequest.pp
5 files changed, 58 insertions(+), 32 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet 
refs/changes/85/403185/1

diff --git a/hieradata/role/common/cache/canary.yaml 
b/hieradata/role/common/cache/canary.yaml
index 40bb4c2..c943922 100644
--- a/hieradata/role/common/cache/canary.yaml
+++ b/hieradata/role/common/cache/canary.yaml
@@ -94,4 +94,14 @@
 # Profile::cache::ssl::unified
 profile::cache::ssl::unified::monitoring: true
 profile::cache::ssl::unified::letsencrypt: false
+
+# Enable varnishkafka-webrequest instance monitoring.
+profile::cache::kafka::webrequest::monitoring_enabled: true
+
+# This should match an entry in the kafka_clusters hash (defined in 
common.yaml).
+# We use the fully qualified kafka cluster name (with datacenter), because we 
want
+# to route all statsv -> statsd traffic to the datacenter that hosts the master
+# statsd instance.  If the active statsd instance changes to codfw (for an 
extended period of time)
+# should probably change this to main-codfw.  If you don't things will 
probably be fine,
+# but statsv will have to send messages over UDP cross-DC to the active statsd 
instance.
 profile::cache::kafka::statsv::kafka_cluster_name: main-eqiad
diff --git a/hieradata/role/common/cache/misc.yaml 
b/hieradata/role/common/cache/misc.yaml
index 3f552b5..47b242b 100644
--- a/hieradata/role/common/cache/misc.yaml
+++ b/hieradata/role/common/cache/misc.yaml
@@ -305,3 +305,6 @@
 # Profile::cache::ssl::unified
 profile::cache::ssl::unified::monitoring: true
 profile::cache::ssl::unified::letsencrypt: false
+
+# Enable varnishkafka-webrequest instance monitoring.
+profile::cache::kafka::webrequest::monitoring_enabled: true
diff --git a/hieradata/role/common/cache/text.yaml 
b/hieradata/role/common/cache/text.yaml
index 40e5c5d..a319c17 100644
--- a/hieradata/role/common/cache/text.yaml
+++ b/hieradata/role/common/cache/text.yaml
@@ -100,6 +100,9 @@
 profile::cache::ssl::unified::monitoring: true
 profile::cache::ssl::unified::letsencrypt: false
 
+# Enable varnishkafka-webrequest instance monitoring.
+profile::cache::kafka::webrequest::monitoring_enabled: true
+
 # This should match an entry in the kafka_clusters hash (defined in 
common.yaml).
 # We use the fully qualified kafka cluster name (with datacenter), because we 
want
 # to route all statsv -> statsd traffic to the datacenter that hosts the master
diff --git a/hieradata/role/common/cache/upload.yaml 
b/hieradata/role/common/cache/upload.yaml
index b5c97ec..5f32a80 100644
--- a/hieradata/role/common/cache/upload.yaml
+++ b/hieradata/role/common/cache/upload.yaml
@@ -71,3 +71,6 @@
 # Profile::cache::ssl::unified
 profile::cache::ssl::unified::monitoring: true
 profile::cache::ssl::unified::letsencrypt: false
+
+# Enable varnishkafka-webrequest instance monitoring.
+profile::cache::kafka::webrequest::monitoring_enabled: true
diff --git a/modules/profile/manifests/cache/kafka/webrequest.pp 
b/modules/profile/manifests/cache/kafka/webrequest.pp
index 6c4a17c..5ffdb16 100644
--- a/modules/profile/manifests/cache/kafka/webrequest.pp
+++ b/modules/profile/manifests/cache/kafka/webrequest.pp
@@ -5,6 +5,9 @@
 #
 # === Parameters
 #
+# [*monitoring_enabled*]
+#   True if the varnishkafka instance should be monitored.
+#
 # [*cache_cluster*]
 #   the name of the cache cluster
 #
@@ -12,8 +15,9 @@
 #   the host to send statsd data to.
 #
 class profile::cache::kafka::webrequest(
-$cache_cluster = hiera('cache::cluster'),
-$statsd_host = hiera('statsd'),
+$monitoring_enabled = 
hiera('profile::cache::kafka::webrequest::monitoring_enabled', false),
+$cache_cluster  = hiera('cache::cluster'),
+$statsd_host= hiera('statsd'),
 ) {
 $config = kafka_config('analytics')
 # NOTE: This is used by inheriting classes role::cache::kafka::*
@@ -120,38 +124,41 @@
 force_protocol_version   => $kafka_protocol_version,
 }
 
-# Generate icinga alert if varnishkafka is not running.
-nrpe::monitor_service { 'varnishkafka-webrequest':
-description   => 'Webrequests Varnishkafka log producer',
-nrpe_command  => "/usr/lib/nagios/plugins/check_procs -c 1 -a 

[MediaWiki-commits] [Gerrit] operations/puppet[production]: Tweaks to profile::cache::kafka::webrequest::jumbo test

2018-01-09 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/403064 )

Change subject: Tweaks to profile::cache::kafka::webrequest::jumbo test
..


Tweaks to profile::cache::kafka::webrequest::jumbo test

- rename $statsd_host to $statsd

- Use unqualified 'jumbo' kafka cluster name, this will make it easier to test 
in labs.

- Add $kafka_cluster_name parameter.

- remove force_protocol_version; this was set to 0.9.0.1. Since Kafka 0.10,
  librdkafka should be able to properly negotiate the protocol version with
  Kafka.
  This will change the way varnishkafka has been producing to jumbo for our 
tests.

Change-Id: I86e8573aeafca185821b5932a259a6290d4fe9d2
---
M modules/profile/manifests/cache/kafka/webrequest/jumbo.pp
1 file changed, 14 insertions(+), 21 deletions(-)

Approvals:
  Ottomata: Verified; Looks good to me, approved
  Elukey: Looks good to me, but someone else must approve



diff --git a/modules/profile/manifests/cache/kafka/webrequest/jumbo.pp 
b/modules/profile/manifests/cache/kafka/webrequest/jumbo.pp
index ae5cbc8..2f9db2e 100644
--- a/modules/profile/manifests/cache/kafka/webrequest/jumbo.pp
+++ b/modules/profile/manifests/cache/kafka/webrequest/jumbo.pp
@@ -7,34 +7,32 @@
 #
 # === Parameters
 #
+# [*kafka_cluster_name*]
+#   Name of the Kafka cluster in the kafka_clusters hash to be passed to the
+#   kafka_config() function.  Default: jumbo.
+#
 # [*cache_cluster*]
 #   the name of the cache cluster
 #
-# [*statsd_host*]
-#   the host to send statsd data to.
-#
-# [*ssl_key_password*]
-#   the password to decrypt the TLS client certificate.
+# [*statsd*]
+#   The host:port to send statsd data to.
 #
 class profile::cache::kafka::webrequest::jumbo(
-$cache_cluster = hiera('cache::cluster'),
-$statsd_host   = hiera('statsd'),
+$kafka_cluster_name = 
hiera('profile::cache::kafka::webrequest::jumbo::kafka_cluster_name', 'jumbo'),
+$cache_cluster  = hiera('cache::cluster'),
+$statsd = hiera('statsd'),
 ) {
 # Include this class to get key and certificate for varnishkafka
 # to produce to Kafka over SSL/TLS.
 require ::profile::cache::kafka::certificate
 
-$config = kafka_config('jumbo-eqiad')
-
+$config = kafka_config($kafka_cluster_name)
 # Array of kafka brokers in jumbo-eqiad with SSL port 9093
 $kafka_brokers = $config['brokers']['ssl_array']
 
-$topic = "webrequest_${cache_cluster}_test"
-# These used to be parameters, but I don't really see why given we never 
change
-# them
-$varnish_name   = 'frontend'
-$varnish_svc_name   = 'varnish-frontend'
-$kafka_protocol_version = '0.9.0.1'
+$topic= "webrequest_${cache_cluster}_test"
+$varnish_name = 'frontend'
+$varnish_svc_name = 'varnish-frontend'
 
 # For any info about the following settings, please check
 # profile::cache::kafka::webrequest.
@@ -58,10 +56,7 @@
 $peak_rps_estimate = 9000
 
 varnishkafka::instance { 'webrequest-jumbo-duplicate':
-# FIXME - top-scope var without namespace, will break in puppet 2.8
-# lint:ignore:variable_scope
 brokers  => $kafka_brokers,
-# lint:endignore
 topic=> $topic,
 format_type  => 'json',
 compression_codec=> 'snappy',
@@ -91,7 +86,6 @@
 # this often.  This is set at 15 so that
 # stats will be fresh when polled from gmetad.
 log_statistics_interval  => 15,
-force_protocol_version   => $kafka_protocol_version,
 #TLS/SSL config
 ssl_enabled  => true,
 ssl_ca_location  => 
$::profile::cache::kafka::certificate::ssl_ca_location,
@@ -107,11 +101,10 @@
 # and report metrics to statsd.
 varnishkafka::monitor::statsd { 'webrequest-jumbo-duplicate':
 graphite_metric_prefix => $graphite_metric_prefix,
-statsd_host_port   => $statsd_host,
+statsd_host_port   => $statsd,
 }
 
 # Make sure varnishes are configured and started for the first time
 # before the instances as well, or they fail to start initially...
 Service <| tag == 'varnish_instance' |> -> 
Varnishkafka::Instance['webrequest-jumbo-duplicate']
-
 }

-- 
To view, visit https://gerrit.wikimedia.org/r/403064
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: merged
Gerrit-Change-Id: I86e8573aeafca185821b5932a259a6290d4fe9d2
Gerrit-PatchSet: 6
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 
Gerrit-Reviewer: Elukey 
Gerrit-Reviewer: Ottomata 
Gerrit-Reviewer: jenkins-bot <>

___
MediaWiki-commits mailing list
MediaWiki-commits@lists.wikimedia.org
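
[Editorial note] The `ssl_*` parameters wired through the diff above map onto librdkafka TLS properties, which varnishkafka accepts with a `kafka.` prefix. A sketch of the stanza the template would plausibly render — property names are librdkafka's, but the exact rendering and the password placeholder are assumptions, not taken from the patch:

```ini
; Sketch of the varnishkafka TLS stanza (assumed rendering)
kafka.security.protocol        = ssl
kafka.ssl.ca.location          = /etc/ssl/certs/Puppet_Internal_CA.pem
kafka.ssl.certificate.location = /etc/varnishkafka/ssl/varnishkafka.crt.pem
kafka.ssl.key.location         = /etc/varnishkafka/ssl/private/varnishkafka.key.pem
kafka.ssl.key.password         = <ssl_key_password from hiera>
kafka.ssl.cipher.suites        = ECDHE-ECDSA-AES256-GCM-SHA384
```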

[MediaWiki-commits] [Gerrit] operations/puppet[production]: Create profile::cache::kafka::certificate to DRY require of ...

2018-01-09 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/403059 )

Change subject: Create profile::cache::kafka::certificate to DRY require of cert
..


Create profile::cache::kafka::certificate to DRY require of cert

This should be a no-op

Bug: T175461
Change-Id: I1c73a7bd93f1e98253b0839ba57ca36b3252a27c
---
A modules/profile/manifests/cache/kafka/certificate.pp
M modules/profile/manifests/cache/kafka/webrequest/jumbo.pp
2 files changed, 62 insertions(+), 52 deletions(-)

Approvals:
  Ottomata: Looks good to me, approved
  Elukey: Looks good to me, but someone else must approve
  jenkins-bot: Verified



diff --git a/modules/profile/manifests/cache/kafka/certificate.pp 
b/modules/profile/manifests/cache/kafka/certificate.pp
new file mode 100644
index 000..14505b9
--- /dev/null
+++ b/modules/profile/manifests/cache/kafka/certificate.pp
@@ -0,0 +1,53 @@
+# == Class profile::cache::kafka::certificate
+# Installs certificates and keys for varnishkafka to produce to Kafka over TLS.
+# This expects that a 'varnishkafka' SSL/TLS key and certificate is created by 
Cergen and
+# signed by our PuppetCA, and available in the Puppet private secrets module.
+# == Parameters.
+# [*ssl_key_password*]
+#   The password to decrypt the TLS client certificate.  Default: undef
+#
+class profile::cache::kafka::certificate(
+$ssl_key_password  = 
hiera('profile::cache::kafka::certificate::ssl_key_password', undef),
+) {
+# TLS/SSL configuration
+$ssl_ca_location = '/etc/ssl/certs/Puppet_Internal_CA.pem'
+$ssl_location = '/etc/varnishkafka/ssl'
+$ssl_location_private = '/etc/varnishkafka/ssl/private'
+
+$ssl_key_location_secrets_path = 
'certificates/varnishkafka/varnishkafka.key.private.pem'
+$ssl_key_location = "${ssl_location_private}/varnishkafka.key.pem"
+
+$ssl_certificate_secrets_path = 
'certificates/varnishkafka/varnishkafka.crt.pem'
+$ssl_certificate_location = "${ssl_location}/varnishkafka.crt.pem"
+$ssl_cipher_suites = 'ECDHE-ECDSA-AES256-GCM-SHA384'
+
+file { $ssl_location:
+ensure => 'directory',
+owner  => 'root',
+group  => 'root',
+mode   => '0555',
+}
+
+file { $ssl_location_private:
+ensure  => 'directory',
+owner   => 'root',
+group   => 'root',
+mode=> '0500',
+require => File[$ssl_location],
+}
+
+file { $ssl_key_location:
+content => secret($ssl_key_location_secrets_path),
+owner   => 'root',
+group   => 'root',
+mode=> '0400',
+require => File[$ssl_location_private],
+}
+
+file { $ssl_certificate_location:
+content => secret($ssl_certificate_secrets_path),
+owner   => 'root',
+group   => 'root',
+mode=> '0444',
+}
+}
diff --git a/modules/profile/manifests/cache/kafka/webrequest/jumbo.pp 
b/modules/profile/manifests/cache/kafka/webrequest/jumbo.pp
index c97f74b..ae5cbc8 100644
--- a/modules/profile/manifests/cache/kafka/webrequest/jumbo.pp
+++ b/modules/profile/manifests/cache/kafka/webrequest/jumbo.pp
@@ -19,8 +19,11 @@
 class profile::cache::kafka::webrequest::jumbo(
 $cache_cluster = hiera('cache::cluster'),
 $statsd_host   = hiera('statsd'),
-$ssl_key_password  = 
hiera('profile::cache::kafka::webrequest::jumbo::ssl_key_password', undef),
 ) {
+# Include this class to get key and certificate for varnishkafka
+# to produce to Kafka over SSL/TLS.
+require ::profile::cache::kafka::certificate
+
 $config = kafka_config('jumbo-eqiad')
 
 # Array of kafka brokers in jumbo-eqiad with SSL port 9093
@@ -53,48 +56,6 @@
    # have multiple DCs depooled in DNS and ~8 servers in the remaining DC to
 # split traffic, we could peak at ~9000
 $peak_rps_estimate = 9000
-
-# TLS/SSL configuration
-$ssl_ca_location = '/etc/ssl/certs/Puppet_Internal_CA.pem'
-$ssl_location = '/etc/varnishkafka/ssl'
-$ssl_location_private = '/etc/varnishkafka/ssl/private'
-
-$ssl_key_location_secrets_path = 
'certificates/varnishkafka/varnishkafka.key.private.pem'
-$ssl_key_location = "${ssl_location_private}/varnishkafka.key.pem"
-
-$ssl_certificate_secrets_path = 
'certificates/varnishkafka/varnishkafka.crt.pem'
-$ssl_certificate_location = "${ssl_location}/varnishkafka.crt.pem"
-$ssl_cipher_suites = 'ECDHE-ECDSA-AES256-GCM-SHA384'
-
-file { $ssl_location:
-ensure => 'directory',
-owner  => 'root',
-group  => 'root',
-mode   => '0555',
-}
-
-file { $ssl_location_private:
-ensure  => 'directory',
-owner   => 'root',
-group   => 'root',
-mode=> '0500',
-require => File[$ssl_location],
-}
-
-file { $ssl_key_location:
-content => secret($ssl_key_location_secrets_path),
-owner   => 'root',
-group   => 'root',
-

[MediaWiki-commits] [Gerrit] operations/puppet[production]: Create profile::hadoop::apt_pin to ensure zookeeper is the c...

2018-01-09 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/402370 )

Change subject: Create profile::hadoop::apt_pin to ensure zookeeper is the 
correct version
..


Create profile::hadoop::apt_pin to ensure zookeeper is the correct version

Change-Id: Ia5c1a15cc17cadc79272678491a6ed3c502053e2
---
A modules/profile/manifests/cdh/apt_pin.pp
M modules/profile/manifests/hadoop/master.pp
M modules/profile/manifests/hadoop/master/standby.pp
M modules/profile/manifests/hadoop/worker.pp
4 files changed, 32 insertions(+), 3 deletions(-)

Approvals:
  Ottomata: Looks good to me, approved
  jenkins-bot: Verified



diff --git a/modules/profile/manifests/cdh/apt_pin.pp 
b/modules/profile/manifests/cdh/apt_pin.pp
new file mode 100644
index 000..4c74767
--- /dev/null
+++ b/modules/profile/manifests/cdh/apt_pin.pp
@@ -0,0 +1,17 @@
+# == Class profile::cdh::apt_pin
+#
+# Pins thirdparty/cloudera packages in our apt repo
+# to a higher priority than others.  This mainly exists
+# because both Debian and CDH have versions of zookeeper
+# that conflict.  Where this class is included, the
+# CDH version of zookeeper (and any other conflicting packages)
+# will be preferred.
+#
+class profile::cdh::apt_pin {
+require ::profile::cdh::apt
+
+apt::pin { 'thirdparty-cloudera':
+pin  => 'release c=thirdparty/cloudera',
+priority => '1002',
+}
+}
diff --git a/modules/profile/manifests/hadoop/master.pp 
b/modules/profile/manifests/hadoop/master.pp
index b846130..a903fbd 100644
--- a/modules/profile/manifests/hadoop/master.pp
+++ b/modules/profile/manifests/hadoop/master.pp
@@ -16,9 +16,13 @@
 $hadoop_user_groups   = 
hiera('profile::hadoop::master::hadoop_user_groups'),
 $statsd   = hiera('statsd'),
 ){
-
+# Hadoop masters need Zookeeper package from CDH, pin CDH over Debian.
+include ::profile::cdh::apt_pin
 include ::profile::hadoop::common
 
+# Force apt-get update to run before we try to install packages.
+Class['::profile::cdh::apt_pin'] -> Exec['apt-get update'] -> 
Class['::cdh::hadoop']
+
 class { '::cdh::hadoop::master': }
 
 # Use jmxtrans for sending metrics
diff --git a/modules/profile/manifests/hadoop/master/standby.pp 
b/modules/profile/manifests/hadoop/master/standby.pp
index ddbf1bb..a17fbb8 100644
--- a/modules/profile/manifests/hadoop/master/standby.pp
+++ b/modules/profile/manifests/hadoop/master/standby.pp
@@ -13,9 +13,13 @@
 $hadoop_namenode_heapsize = 
hiera('profile::hadoop::standby::namenode_heapsize', 2048),
 $statsd   = hiera('statsd'),
 ) {
-
+# Hadoop masters need Zookeeper package from CDH, pin CDH over Debian.
+include ::profile::cdh::apt_pin
 include ::profile::hadoop::common
 
+# Force apt-get update to run before we try to install packages.
+Class['::profile::cdh::apt_pin'] -> Exec['apt-get update'] -> 
Class['::cdh::hadoop']
+
 # Ensure that druid user exists on standby namenodes nodes.
 class { '::druid::cdh::hadoop::user':  }
 
diff --git a/modules/profile/manifests/hadoop/worker.pp 
b/modules/profile/manifests/hadoop/worker.pp
index 28b89db..7f32ff1 100644
--- a/modules/profile/manifests/hadoop/worker.pp
+++ b/modules/profile/manifests/hadoop/worker.pp
@@ -12,9 +12,13 @@
 $ferm_srange= hiera('profile::hadoop::worker::ferm_srange', 
'$DOMAIN_NETWORKS'),
 $statsd = hiera('statsd'),
 ) {
-
+# Hadoop workers need Zookeeper package from CDH, pin CDH over Debian.
+include ::profile::cdh::apt_pin
 include ::profile::hadoop::common
 
+# Force apt-get update to run before we try to install packages.
+Class['::profile::cdh::apt_pin'] -> Exec['apt-get update'] -> 
Class['::cdh::hadoop']
+
 # hive::client is nice to have for jobs launched
 # from random worker nodes as app masters so they
 # have access to hive-site.xml and other hive jars.

-- 
To view, visit https://gerrit.wikimedia.org/r/402370
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: merged
Gerrit-Change-Id: Ia5c1a15cc17cadc79272678491a6ed3c502053e2
Gerrit-PatchSet: 5
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 
Gerrit-Reviewer: Elukey 
Gerrit-Reviewer: Ottomata 
Gerrit-Reviewer: jenkins-bot <>

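
[Editorial note] The `apt::pin { 'thirdparty-cloudera': }` resource above amounts to a plain apt preferences entry. A sketch of what the generated file would contain — the filename and the `Package: *` stanza are assumptions about how the `apt::pin` define renders, not taken from the patch:

```ini
# /etc/apt/preferences.d/thirdparty-cloudera.pref (assumed path)
Package: *
Pin: release c=thirdparty/cloudera
Pin-Priority: 1002
```

A pin priority above 1000 makes apt prefer the pinned component even when that would mean a downgrade, which is what lets the CDH zookeeper packages win over Debian's; see apt_preferences(5).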


[MediaWiki-commits] [Gerrit] operations/puppet[production]: [WIP] Refactor cache::kafka::eventlogging into profile and e...

2018-01-08 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/403067 )

Change subject: [WIP] Refactor cache::kafka::eventlogging into profile and 
enable TLS
..

[WIP] Refactor cache::kafka::eventlogging into profile and enable TLS

Bug: T183297
Change-Id: I4096fe7efda237bac162dfb5dc8af1262c445503
---
A modules/profile/manifests/cache/kafka/eventlogging.pp
M modules/profile/manifests/cache/text.pp
D modules/role/manifests/cache/kafka/eventlogging.pp
M modules/role/manifests/cache/text.pp
4 files changed, 86 insertions(+), 74 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet 
refs/changes/67/403067/1

diff --git a/modules/profile/manifests/cache/kafka/eventlogging.pp 
b/modules/profile/manifests/cache/kafka/eventlogging.pp
new file mode 100644
index 000..2f4aa2f
--- /dev/null
+++ b/modules/profile/manifests/cache/kafka/eventlogging.pp
@@ -0,0 +1,79 @@
+# === Class profile::cache::kafka::eventlogging
+#
+# Sets up a varnishkafka logging endpoint for collecting
+# analytics events coming from external clients.
+#
+# TODO: This class is still in test mode
+#
+# More info: https://wikitech.wikimedia.org/wiki/Analytics/EventLogging
+#
+# === Parameters
+#
+# [*kafka_cluster_name*]
+#   Name of the Kafka cluster in the kafka_clusters hash to be passed to the
+#   kafka_config() function.  Default: jumbo.
+#
+# [*cache_cluster*]
+#   The name of the cache cluster.
+#
+# [*statsd*]
+#   The host to send statsd data to.
+#
+class profile::cache::kafka::eventlogging(
+$kafka_cluster_name = 
hiera('profile::cache::kafka::eventlogging::kafka_cluster_name', 'jumbo'),
+$cache_cluster  = hiera('cache::cluster'),
+$statsd = hiera('statsd'),
+) {
+# Include this class to get key and certificate for varnishkafka
+# to produce to Kafka over SSL/TLS.
+require ::profile::cache::kafka::certificate
+
+# Set varnish.arg.q or varnish.arg.m according to Varnish version
+$varnish_opts = { 'q' => 'ReqURL ~ "^/(beacon/)?event(\.gif)?\?"' }
+
+$config = kafka_config($kafka_cluster_name)
+# Array of kafka brokers in jumbo-eqiad with SSL port 9093
+$kafka_brokers = $config['brokers']['ssl_array']
+
+$topic= "webrequest_${cache_cluster}_test"
+$varnish_name = 'frontend'
+$varnish_svc_name = 'varnish-frontend'
+
+varnishkafka::instance { 'eventlogging':
+brokers => $kafka_brokers,
+# Note that this format uses literal tab characters.
+# The '-' in this string used to be %{X-Client-IP@ip}o.
+# EventLogging clientIp logging has been removed as part of T128407.
+format  => '%q %l  %n  %{%FT%T}t   
-   "%{User-agent}i"',
+format_type => 'string',
+topic   => 'eventlogging-client-side',
+varnish_name=> $varnish_name,
+varnish_svc_name=> $varnish_svc_name,
+varnish_opts=> $varnish_opts,
+topic_request_required_acks => '1',
+}
+
+include ::standard
+
+# Generate icinga alert if varnishkafka is not running.
+nrpe::monitor_service { 'varnishkafka-eventlogging':
+description   => 'eventlogging Varnishkafka log producer',
+nrpe_command  => "/usr/lib/nagios/plugins/check_procs -c 1 -a 
'/usr/bin/varnishkafka -S /etc/varnishkafka/eventlogging.conf'",
+contact_group => 'admins,analytics',
+require   => Varnishkafka::Instance['eventlogging'],
+}
+
+$cache_type = hiera('cache::cluster')
+$graphite_metric_prefix = 
"varnishkafka.${::hostname}.eventlogging.${cache_cluster}"
+
+# Sets up Logster to read from the Varnishkafka instance stats JSON file
+# and report metrics to statsd.
+varnishkafka::monitor::statsd { 'eventlogging':
+graphite_metric_prefix => $graphite_metric_prefix,
+statsd_host_port   => $statsd,
+}
+
+# Make sure varnishes are configured and started for the first time
+# before the instances as well, or they fail to start initially...
+Service <| tag == 'varnish_instance' |> -> 
Varnishkafka::Instance['eventlogging']
+}
diff --git a/modules/profile/manifests/cache/text.pp 
b/modules/profile/manifests/cache/text.pp
index d4225b8..743d017 100644
--- a/modules/profile/manifests/cache/text.pp
+++ b/modules/profile/manifests/cache/text.pp
@@ -87,14 +87,6 @@
 backend_warming  => $backend_warming,
 }
 
-# varnishkafka eventlogging listens for eventlogging
-# requests and logs them to the eventlogging-client-side
-# topic.  EventLogging servers consume and process this
-# topic into many JSON based kafka topics for further
-# consumption.
-# TODO: Move this to profile, include from role::cache::text.
-class { '::role::cache::kafka::eventlogging': }
-
 # 
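
[Editorial note] The `$varnish_opts` query in the eventlogging profile above selects which request URLs varnishkafka captures. A Python approximation of that pattern — Varnish evaluates it with its own (PCRE) engine, so this is an illustration of which URLs get picked up, not the real matcher:

```python
import re

# Approximation of the VSL query from $varnish_opts:
#   ReqURL ~ "^/(beacon/)?event(\.gif)?\?"
EVENT_URL = re.compile(r'^/(beacon/)?event(\.gif)?\?')

# URLs the varnishkafka instance would log:
matching = ['/event?schema=Test', '/beacon/event?schema=Test',
            '/beacon/event.gif?schema=Test']
# URLs it would skip (no query string, or a different path):
non_matching = ['/event', '/eventlogging?x=1', '/beacon/other?x=1']

assert all(EVENT_URL.match(u) for u in matching)
assert not any(EVENT_URL.match(u) for u in non_matching)
```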

[MediaWiki-commits] [Gerrit] operations/puppet[production]: Tweaks to profile::cache::kafka::webrequest::jumbo test

2018-01-08 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/403064 )

Change subject: Tweaks to profile::cache::kafka::webrequest::jumbo test
..

Tweaks to profile::cache::kafka::webrequest::jumbo test

- rename $statsd_host to $statsd
- remove force_protocol_version, this was set to 0.9.0.1.  Since Kafka 0.10,
  librdkafka should be able to properly negotiate the protocol version with 
Kafka.
  This will change the way varnishkafka has been producing to jumbo for our 
tests.

Change-Id: I86e8573aeafca185821b5932a259a6290d4fe9d2
---
M modules/profile/manifests/cache/kafka/webrequest/jumbo.pp
1 file changed, 4 insertions(+), 12 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet 
refs/changes/64/403064/1

diff --git a/modules/profile/manifests/cache/kafka/webrequest/jumbo.pp 
b/modules/profile/manifests/cache/kafka/webrequest/jumbo.pp
index ae5cbc8..cba9854 100644
--- a/modules/profile/manifests/cache/kafka/webrequest/jumbo.pp
+++ b/modules/profile/manifests/cache/kafka/webrequest/jumbo.pp
@@ -10,15 +10,15 @@
 # [*cache_cluster*]
 #   the name of the cache cluster
 #
-# [*statsd_host*]
-#   the host to send statsd data to.
+# [*statsd*]
+#   The host:port to send statsd data to.
 #
 # [*ssl_key_password*]
 #   the password to decrypt the TLS client certificate.
 #
 class profile::cache::kafka::webrequest::jumbo(
 $cache_cluster = hiera('cache::cluster'),
-$statsd_host   = hiera('statsd'),
+$statsd= hiera('statsd'),
 ) {
 # Include this class to get key and certificate for varnishkafka
 # to produce to Kafka over SSL/TLS.
@@ -30,11 +30,8 @@
 $kafka_brokers = $config['brokers']['ssl_array']
 
 $topic = "webrequest_${cache_cluster}_test"
-# These used to be parameters, but I don't really see why given we never change
-# them
 $varnish_name   = 'frontend'
 $varnish_svc_name   = 'varnish-frontend'
-$kafka_protocol_version = '0.9.0.1'
 
 # For any info about the following settings, please check
 # profile::cache::kafka::webrequest.
@@ -58,10 +55,7 @@
 $peak_rps_estimate = 9000
 
 varnishkafka::instance { 'webrequest-jumbo-duplicate':
-# FIXME - top-scope var without namespace, will break in puppet 2.8
-# lint:ignore:variable_scope
 brokers  => $kafka_brokers,
-# lint:endignore
 topic=> $topic,
 format_type  => 'json',
 compression_codec=> 'snappy',
@@ -91,7 +85,6 @@
 # this often.  This is set at 15 so that
 # stats will be fresh when polled from gmetad.
 log_statistics_interval  => 15,
-force_protocol_version   => $kafka_protocol_version,
 #TLS/SSL config
 ssl_enabled  => true,
 ssl_ca_location  => 
$::profile::cache::kafka::certificate::ssl_ca_location,
@@ -107,11 +100,10 @@
 # and report metrics to statsd.
 varnishkafka::monitor::statsd { 'webrequest-jumbo-duplicate':
 graphite_metric_prefix => $graphite_metric_prefix,
-statsd_host_port   => $statsd_host,
+statsd_host_port   => $statsd,
 }
 
 # Make sure varnishes are configured and started for the first time
 # before the instances as well, or they fail to start initially...
 Service <| tag == 'varnish_instance' |> -> 
Varnishkafka::Instance['webrequest-jumbo-duplicate']
-
 }

-- 
To view, visit https://gerrit.wikimedia.org/r/403064
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: I86e8573aeafca185821b5932a259a6290d4fe9d2
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 

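
[Editorial note] The change above reads `$config['brokers']['ssl_array']` from the puppet `kafka_config()` lookup. A hypothetical Python sketch of what that array holds — the function and host names here are illustrative only; the real lookup is a puppet parser function over hiera data:

```python
# Hypothetical stand-in for kafka_config()['brokers']['ssl_array']:
# one host:port string per broker, on the TLS listener port 9093.
def ssl_broker_array(broker_hosts, ssl_port=9093):
    """Return host:port strings for each broker's SSL listener."""
    return ['{}:{}'.format(h, ssl_port) for h in broker_hosts]

brokers = ssl_broker_array(['kafka-jumbo1001.eqiad.wmnet',
                            'kafka-jumbo1002.eqiad.wmnet'])
# varnishkafka's `brokers` parameter takes these joined with commas:
print(','.join(brokers))
# -> kafka-jumbo1001.eqiad.wmnet:9093,kafka-jumbo1002.eqiad.wmnet:9093
```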


[MediaWiki-commits] [Gerrit] labs/private[master]: Mv varnishkafka profile certificate::ssl_key_password

2018-01-08 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/403061 )

Change subject: Mv varnishkafka profile certificate::ssl_key_password
..


Mv varnishkafka profile certificate::ssl_key_password

Change-Id: Ie86b713a3e4dbbf9851ffd9555b60b11a502fa22
---
A hieradata/common/profile/cache/kafka/certificate.yaml
D hieradata/common/profile/cache/kafka/webrequest/jumbo.yaml
2 files changed, 1 insertion(+), 1 deletion(-)

Approvals:
  Ottomata: Verified; Looks good to me, approved



diff --git a/hieradata/common/profile/cache/kafka/certificate.yaml 
b/hieradata/common/profile/cache/kafka/certificate.yaml
new file mode 100644
index 000..6309e57
--- /dev/null
+++ b/hieradata/common/profile/cache/kafka/certificate.yaml
@@ -0,0 +1 @@
+profile::cache::kafka::certificate::ssl_key_password: 'this_is_not_a_secret'
diff --git a/hieradata/common/profile/cache/kafka/webrequest/jumbo.yaml 
b/hieradata/common/profile/cache/kafka/webrequest/jumbo.yaml
deleted file mode 100644
index fb545c3..000
--- a/hieradata/common/profile/cache/kafka/webrequest/jumbo.yaml
+++ /dev/null
@@ -1 +0,0 @@
-profile::cache::kafka::webrequest::jumbo::ssl_key_password: 
'this_is_not_a_secret'

-- 
To view, visit https://gerrit.wikimedia.org/r/403061
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: merged
Gerrit-Change-Id: Ie86b713a3e4dbbf9851ffd9555b60b11a502fa22
Gerrit-PatchSet: 1
Gerrit-Project: labs/private
Gerrit-Branch: master
Gerrit-Owner: Ottomata 
Gerrit-Reviewer: Ottomata 



[MediaWiki-commits] [Gerrit] labs/private[master]: Mv varnishkafka profile certificate::ssl_key_password

2018-01-08 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/403061 )

Change subject: Mv varnishkafka profile certificate::ssl_key_password
..

Mv varnishkafka profile certificate::ssl_key_password

Change-Id: Ie86b713a3e4dbbf9851ffd9555b60b11a502fa22
---
A hieradata/common/profile/cache/kafka/certificate.yaml
D hieradata/common/profile/cache/kafka/webrequest/jumbo.yaml
2 files changed, 1 insertion(+), 1 deletion(-)


  git pull ssh://gerrit.wikimedia.org:29418/labs/private 
refs/changes/61/403061/1

diff --git a/hieradata/common/profile/cache/kafka/certificate.yaml 
b/hieradata/common/profile/cache/kafka/certificate.yaml
new file mode 100644
index 000..6309e57
--- /dev/null
+++ b/hieradata/common/profile/cache/kafka/certificate.yaml
@@ -0,0 +1 @@
+profile::cache::kafka::certificate::ssl_key_password: 'this_is_not_a_secret'
diff --git a/hieradata/common/profile/cache/kafka/webrequest/jumbo.yaml 
b/hieradata/common/profile/cache/kafka/webrequest/jumbo.yaml
deleted file mode 100644
index fb545c3..000
--- a/hieradata/common/profile/cache/kafka/webrequest/jumbo.yaml
+++ /dev/null
@@ -1 +0,0 @@
-profile::cache::kafka::webrequest::jumbo::ssl_key_password: 
'this_is_not_a_secret'

-- 
To view, visit https://gerrit.wikimedia.org/r/403061
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: Ie86b713a3e4dbbf9851ffd9555b60b11a502fa22
Gerrit-PatchSet: 1
Gerrit-Project: labs/private
Gerrit-Branch: master
Gerrit-Owner: Ottomata 



[MediaWiki-commits] [Gerrit] operations/puppet[production]: Create profile::cache::kafka::certificate class to DRY requi...

2018-01-08 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/403059 )

Change subject: Create profile::cache::kafka::certificate class to DRY require 
of varnishkafka TLS cert
..

Create profile::cache::kafka::certificate class to DRY require of varnishkafka 
TLS cert

This should be a no-op

Bug: T175461
Change-Id: I1c73a7bd93f1e98253b0839ba57ca36b3252a27c
---
A modules/profile/manifests/cache/kafka/certificate.pp
M modules/profile/manifests/cache/kafka/webrequest/jumbo.pp
2 files changed, 62 insertions(+), 52 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet 
refs/changes/59/403059/1

diff --git a/modules/profile/manifests/cache/kafka/certificate.pp 
b/modules/profile/manifests/cache/kafka/certificate.pp
new file mode 100644
index 000..14505b9
--- /dev/null
+++ b/modules/profile/manifests/cache/kafka/certificate.pp
@@ -0,0 +1,53 @@
+# == Class profile::cache::kafka::certificate
+# Installs certificates and keys for varnishkafka to produce to Kafka over TLS.
+# This expects that a 'varnishkafka' SSL/TLS key and certificate is created by 
Cergen and
+# signed by our PuppetCA, and available in the Puppet private secrets module.
+# == Parameters.
+# [*ssl_key_password*]
+#   The password to decrypt the TLS client certificate.  Default: undef
+#
+class profile::cache::kafka::certificate(
+$ssl_key_password  = 
hiera('profile::cache::kafka::certificate::ssl_key_password', undef),
+) {
+# TLS/SSL configuration
+$ssl_ca_location = '/etc/ssl/certs/Puppet_Internal_CA.pem'
+$ssl_location = '/etc/varnishkafka/ssl'
+$ssl_location_private = '/etc/varnishkafka/ssl/private'
+
+$ssl_key_location_secrets_path = 
'certificates/varnishkafka/varnishkafka.key.private.pem'
+$ssl_key_location = "${ssl_location_private}/varnishkafka.key.pem"
+
+$ssl_certificate_secrets_path = 
'certificates/varnishkafka/varnishkafka.crt.pem'
+$ssl_certificate_location = "${ssl_location}/varnishkafka.crt.pem"
+$ssl_cipher_suites = 'ECDHE-ECDSA-AES256-GCM-SHA384'
+
+file { $ssl_location:
+ensure => 'directory',
+owner  => 'root',
+group  => 'root',
+mode   => '0555',
+}
+
+file { $ssl_location_private:
+ensure  => 'directory',
+owner   => 'root',
+group   => 'root',
+mode=> '0500',
+require => File[$ssl_location],
+}
+
+file { $ssl_key_location:
+content => secret($ssl_key_location_secrets_path),
+owner   => 'root',
+group   => 'root',
+mode=> '0400',
+require => File[$ssl_location_private],
+}
+
+file { $ssl_certificate_location:
+content => secret($ssl_certificate_secrets_path),
+owner   => 'root',
+group   => 'root',
+mode=> '0444',
+}
+}
diff --git a/modules/profile/manifests/cache/kafka/webrequest/jumbo.pp 
b/modules/profile/manifests/cache/kafka/webrequest/jumbo.pp
index c97f74b..ae5cbc8 100644
--- a/modules/profile/manifests/cache/kafka/webrequest/jumbo.pp
+++ b/modules/profile/manifests/cache/kafka/webrequest/jumbo.pp
@@ -19,8 +19,11 @@
 class profile::cache::kafka::webrequest::jumbo(
 $cache_cluster = hiera('cache::cluster'),
 $statsd_host   = hiera('statsd'),
-$ssl_key_password  = 
hiera('profile::cache::kafka::webrequest::jumbo::ssl_key_password', undef),
 ) {
+# Include this class to get key and certificate for varnishkafka
+# to produce to Kafka over SSL/TLS.
+require ::profile::cache::kafka::certificate
+
 $config = kafka_config('jumbo-eqiad')
 
 # Array of kafka brokers in jumbo-eqiad with SSL port 9093
@@ -53,48 +56,6 @@
    # have multiple DCs depooled in DNS and ~8 servers in the remaining DC to
 # split traffic, we could peak at ~9000
 $peak_rps_estimate = 9000
-
-# TLS/SSL configuration
-$ssl_ca_location = '/etc/ssl/certs/Puppet_Internal_CA.pem'
-$ssl_location = '/etc/varnishkafka/ssl'
-$ssl_location_private = '/etc/varnishkafka/ssl/private'
-
-$ssl_key_location_secrets_path = 
'certificates/varnishkafka/varnishkafka.key.private.pem'
-$ssl_key_location = "${ssl_location_private}/varnishkafka.key.pem"
-
-$ssl_certificate_secrets_path = 
'certificates/varnishkafka/varnishkafka.crt.pem'
-$ssl_certificate_location = "${ssl_location}/varnishkafka.crt.pem"
-$ssl_cipher_suites = 'ECDHE-ECDSA-AES256-GCM-SHA384'
-
-file { $ssl_location:
-ensure => 'directory',
-owner  => 'root',
-group  => 'root',
-mode   => '0555',
-}
-
-file { $ssl_location_private:
-ensure  => 'directory',
-owner   => 'root',
-group   => 'root',
-mode=> '0500',
-require => File[$ssl_location],
-}
-
-file { $ssl_key_location:
-content => secret($ssl_key_location_secrets_path),
-owner   => 'root',
-group   => 'root',
-mode

[MediaWiki-commits] [Gerrit] operations/puppet[production]: Update cdh to fix typo in parameter name

2018-01-08 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/402896 )

Change subject: Update cdh to fix typo in parameter name
..


Update cdh to fix typo in parameter name

Change-Id: Iaa1c1f63ebcb77d6287242a1f11540ee9d96770a
---
M modules/cdh
1 file changed, 1 insertion(+), 1 deletion(-)

Approvals:
  Ottomata: Verified; Looks good to me, approved



diff --git a/modules/cdh b/modules/cdh
index bd4624f..0f137db 16
--- a/modules/cdh
+++ b/modules/cdh
@@ -1 +1 @@
-Subproject commit bd4624f1b3292bfabdda4291f25f9523a14f7853
+Subproject commit 0f137dbf35996fd1a48a4984345f21398e869c4e

-- 
To view, visit https://gerrit.wikimedia.org/r/402896
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: merged
Gerrit-Change-Id: Iaa1c1f63ebcb77d6287242a1f11540ee9d96770a
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 
Gerrit-Reviewer: Ottomata 



[MediaWiki-commits] [Gerrit] operations/puppet[production]: Update cdh to fix typo in parameter name

2018-01-08 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/402896 )

Change subject: Update cdh to fix typo in parameter name
..

Update cdh to fix typo in parameter name

Change-Id: Iaa1c1f63ebcb77d6287242a1f11540ee9d96770a
---
M modules/cdh
1 file changed, 1 insertion(+), 1 deletion(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet 
refs/changes/96/402896/1

diff --git a/modules/cdh b/modules/cdh
index bd4624f..0f137db 16
--- a/modules/cdh
+++ b/modules/cdh
@@ -1 +1 @@
-Subproject commit bd4624f1b3292bfabdda4291f25f9523a14f7853
+Subproject commit 0f137dbf35996fd1a48a4984345f21398e869c4e

-- 
To view, visit https://gerrit.wikimedia.org/r/402896
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: Iaa1c1f63ebcb77d6287242a1f11540ee9d96770a
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 



[MediaWiki-commits] [Gerrit] operations...cdh[master]: Fix typo in parameter name

2018-01-08 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/402895 )

Change subject: Fix typo in parameter name
..

Fix typo in parameter name

Change-Id: I2c5283d1d588119b3c0abf0f3e613e7af56b4f7b
---
M manifests/hadoop.pp
1 file changed, 1 insertion(+), 1 deletion(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet/cdh 
refs/changes/95/402895/1

diff --git a/manifests/hadoop.pp b/manifests/hadoop.pp
index 15fa918..b6ca051 100644
--- a/manifests/hadoop.pp
+++ b/manifests/hadoop.pp
@@ -195,7 +195,7 @@
 $gelf_logging_host   = 
$::cdh::hadoop::defaults::gelf_logging_host,
 $gelf_logging_port   = 
$::cdh::hadoop::defaults::gelf_logging_port,
 $fair_scheduler_template = 
$::cdh::hadoop::defaults::fair_scheduler_template,
-$core_site_extra_properites  = 
$::cdh::hadoop::defaults::core_site_extra_properties,
+$core_site_extra_properties  = 
$::cdh::hadoop::defaults::core_site_extra_properties,
 $yarn_site_extra_properties  = 
$::cdh::hadoop::defaults::yarn_site_extra_properties,
 ) inherits cdh::hadoop::defaults
 {

-- 
To view, visit https://gerrit.wikimedia.org/r/402895
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: I2c5283d1d588119b3c0abf0f3e613e7af56b4f7b
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet/cdh
Gerrit-Branch: master
Gerrit-Owner: Ottomata 



[MediaWiki-commits] [Gerrit] operations...cdh[master]: Fix typo in parameter name

2018-01-08 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/402895 )

Change subject: Fix typo in parameter name
..


Fix typo in parameter name

Change-Id: I2c5283d1d588119b3c0abf0f3e613e7af56b4f7b
---
M manifests/hadoop.pp
1 file changed, 1 insertion(+), 1 deletion(-)

Approvals:
  Ottomata: Verified; Looks good to me, approved



diff --git a/manifests/hadoop.pp b/manifests/hadoop.pp
index 15fa918..b6ca051 100644
--- a/manifests/hadoop.pp
+++ b/manifests/hadoop.pp
@@ -195,7 +195,7 @@
 $gelf_logging_host   = 
$::cdh::hadoop::defaults::gelf_logging_host,
 $gelf_logging_port   = 
$::cdh::hadoop::defaults::gelf_logging_port,
 $fair_scheduler_template = 
$::cdh::hadoop::defaults::fair_scheduler_template,
-$core_site_extra_properites  = 
$::cdh::hadoop::defaults::core_site_extra_properties,
+$core_site_extra_properties  = 
$::cdh::hadoop::defaults::core_site_extra_properties,
 $yarn_site_extra_properties  = 
$::cdh::hadoop::defaults::yarn_site_extra_properties,
 ) inherits cdh::hadoop::defaults
 {

-- 
To view, visit https://gerrit.wikimedia.org/r/402895
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: merged
Gerrit-Change-Id: I2c5283d1d588119b3c0abf0f3e613e7af56b4f7b
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet/cdh
Gerrit-Branch: master
Gerrit-Owner: Ottomata 
Gerrit-Reviewer: Ottomata 

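
[Editorial note] The fix above renames `$core_site_extra_properites` to `$core_site_extra_properties`. A loose Python analogy of why such a typo bites — a misspelled parameter name declares a *different* parameter, so callers using the documented spelling are rejected (Puppet likewise errors on a parameter the class never declared); this is an analogy, not Puppet semantics:

```python
# Parameter declared with the typo'd name, as in the old hadoop.pp:
def hadoop(core_site_extra_properites=None):
    return core_site_extra_properites

try:
    # Caller uses the documented (correct) spelling...
    hadoop(core_site_extra_properties={'k': 'v'})
    raised = False
except TypeError:
    # ...and gets "unexpected keyword argument".
    raised = True

assert raised
```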


[MediaWiki-commits] [Gerrit] operations/puppet[production]: Allow superset to submit jobs to Hadoop as logged in users

2018-01-08 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/402425 )

Change subject: Allow superset to submit jobs to Hadoop as logged in users
..


Allow superset to submit jobs to Hadoop as logged in users

Change-Id: I431ff4b85300cdbe77666b0d5f2f94dd9417250e
---
M modules/cdh
M modules/profile/manifests/hadoop/common.pp
2 files changed, 8 insertions(+), 1 deletion(-)

Approvals:
  Ottomata: Looks good to me, approved
  jenkins-bot: Verified



diff --git a/modules/cdh b/modules/cdh
index b8806c0..bd4624f 160000
--- a/modules/cdh
+++ b/modules/cdh
@@ -1 +1 @@
-Subproject commit b8806c0fe7e1f8f07313a27ae5ce5ca8c8689e66
+Subproject commit bd4624f1b3292bfabdda4291f25f9523a14f7853
diff --git a/modules/profile/manifests/hadoop/common.pp 
b/modules/profile/manifests/hadoop/common.pp
index 71ede0d..eb03376 100644
--- a/modules/profile/manifests/hadoop/common.pp
+++ b/modules/profile/manifests/hadoop/common.pp
@@ -231,6 +231,13 @@
 # Yarn App Master possible port ranges
 yarn_app_mapreduce_am_job_client_port_range => '55000-55199',
 
+core_site_extra_properties  => {
+# Allow superset running as 'superset' user on thorium.eqiad.wmnet
+# to run jobs as users in the analytics-users and 
analytics-privatedata-users groups.
+'hadoop.proxyusers.superset.hosts'  => 'thorium.eqiad.wmnet',
+'hadoop.proxyusers.superset.groups' => 
'analytics-users,analytics-privatedata-users',
+},
+
 yarn_site_extra_properties  => {
 # Enable FairScheduler preemption. This will allow the essential 
queue
 # to preempt non-essential jobs.

-- 
To view, visit https://gerrit.wikimedia.org/r/402425
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: merged
Gerrit-Change-Id: I431ff4b85300cdbe77666b0d5f2f94dd9417250e
Gerrit-PatchSet: 3
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 
Gerrit-Reviewer: Elukey 
Gerrit-Reviewer: Ottomata 
Gerrit-Reviewer: jenkins-bot <>

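The `hadoop.proxyuser.<name>.hosts` and `.groups` keys rendered into core-site.xml control Hadoop's impersonation check: the superuser (here, superset) may submit work on behalf of another user only when it connects from an allowed host and the impersonated user belongs to an allowed group, with `*` meaning "any". A simplified Python sketch of that check, using the host and group values from the diff (real Hadoop also resolves IP ranges and caches group membership):

```python
def impersonation_allowed(conf, superuser, request_host, target_user_groups):
    """Approximate Hadoop's ProxyUsers check: allow impersonation only if
    the superuser connects from an allowed host AND the target user is in
    an allowed group ('*' matches anything)."""
    hosts = conf.get(f"hadoop.proxyuser.{superuser}.hosts", "").split(",")
    groups = conf.get(f"hadoop.proxyuser.{superuser}.groups", "").split(",")
    host_ok = "*" in hosts or request_host in hosts
    group_ok = "*" in groups or any(g in groups for g in target_user_groups)
    return host_ok and group_ok

conf = {
    "hadoop.proxyuser.superset.hosts": "thorium.eqiad.wmnet",
    "hadoop.proxyuser.superset.groups": "analytics-users,analytics-privatedata-users",
}
print(impersonation_allowed(conf, "superset", "thorium.eqiad.wmnet", ["analytics-users"]))
```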


[MediaWiki-commits] [Gerrit] operations...cdh[master]: Fixes to better configure hadoop.proxyuser

2018-01-08 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/402424 )

Change subject: Fixes to better configure hadoop.proxyuser
..


Fixes to better configure hadoop.proxyuser

- remove unused and hardcoded llama impala user
- always configure hue and oozie proxyusers (no-op)
- conditionally render httpfs user (no-op)
- add core_site_extra_properties param to add other properties, including more 
proxyusers

This will be used to let superset proxy the logged in LDAP user
when running queries, so users can issue hive queries.

Change-Id: I0eede05bd221975a2fc4c7bcd7c5b8bbf5478fac
---
M manifests/hadoop.pp
M manifests/hadoop/defaults.pp
M templates/hadoop/core-site.xml.erb
3 files changed, 30 insertions(+), 26 deletions(-)

Approvals:
  Ottomata: Looks good to me, approved
  jenkins-bot: Verified



diff --git a/manifests/hadoop.pp b/manifests/hadoop.pp
index 48b22aa..15fa918 100644
--- a/manifests/hadoop.pp
+++ b/manifests/hadoop.pp
@@ -124,6 +124,10 @@
 #   $fair_scheduler_template  - The fair-scheduler.xml queue 
configuration template.
 #   If you set this to false or 
undef, FairScheduler will
 #   be disabled.  Default: 
cdh/hadoop/fair-scheduler.xml.erb
+#
+#   $core_site_extra_properties   - Hash of extra property names 
to values that will be
+#   be rendered in 
core-site.xml.erb.  Default: undef
+#
 #   $yarn_site_extra_properties   - Hash of extra property names 
to values that will be
 #   be rendered in 
yarn-site.xml.erb.  Default: undef
 #
@@ -191,6 +195,7 @@
 $gelf_logging_host   = 
$::cdh::hadoop::defaults::gelf_logging_host,
 $gelf_logging_port   = 
$::cdh::hadoop::defaults::gelf_logging_port,
 $fair_scheduler_template = 
$::cdh::hadoop::defaults::fair_scheduler_template,
+$core_site_extra_properites  = 
$::cdh::hadoop::defaults::core_site_extra_properties,
 $yarn_site_extra_properties  = 
$::cdh::hadoop::defaults::yarn_site_extra_properties,
 ) inherits cdh::hadoop::defaults
 {
diff --git a/manifests/hadoop/defaults.pp b/manifests/hadoop/defaults.pp
index 617d41e..e1807b5 100644
--- a/manifests/hadoop/defaults.pp
+++ b/manifests/hadoop/defaults.pp
@@ -60,6 +60,7 @@
 $yarn_log_aggregation_retain_check_interval_seconds = 86400
 
 $fair_scheduler_template = 
'cdh/hadoop/fair-scheduler.xml.erb'
+$core_site_extra_properties  = undef
 $yarn_site_extra_properties  = undef
 
 $hadoop_heapsize = undef
diff --git a/templates/hadoop/core-site.xml.erb 
b/templates/hadoop/core-site.xml.erb
index a8df776..2a75ce9 100644
--- a/templates/hadoop/core-site.xml.erb
+++ b/templates/hadoop/core-site.xml.erb
@@ -17,16 +17,25 @@
 ha.zookeeper.quorum
 <%= Array(@zookeeper_hosts).sort.join(',') %>
   
-<% end -%>
 
+<% end -%>
 <% if @io_file_buffer_size -%>
   
 io.file.buffer.size
 <%= @io_file_buffer_size %>
   
-<% end -%>
 
-<% if @webhdfs_enabled or @httpfs_enabled -%>
+<% end -%>
+  
+  
+hadoop.proxyuser.mapred.hosts
+*
+  
+  
+hadoop.proxyuser.mapred.groups
+*
+  
+
   
   
 hadoop.proxyuser.hue.hosts
@@ -46,9 +55,9 @@
 hadoop.proxyuser.oozie.groups
 *
   
-<% end -%>
 
 <% if @httpfs_enabled -%>
+  
   
 hadoop.proxyuser.httpfs.hosts
 *
@@ -57,34 +66,23 @@
 hadoop.proxyuser.httpfs.groups
 *
   
+
 <% end -%>
-
-  
-  
-hadoop.proxyuser.mapred.hosts
-*
-  
-  
-hadoop.proxyuser.mapred.groups
-*
-  
-
-  
-  
-hadoop.proxyuser.llama.hosts
-*
-  
-  
-hadoop.proxyuser.llama.groups
-*
-  
-
 <% if @net_topology_script_template -%>
   
   
   net.topology.script.file.name
   <%= @net_topology_script_path %>
   
-<% end -%>
 
+<% end -%>
+<% if @core_site_extra_properties -%>
+<% @core_site_extra_properties.sort.map do |key, value| -%>
+  
+  <%= key %>
+  <%= value %>
+  
+
+<% end -%>
+<% end -%>
 

-- 
To view, visit https://gerrit.wikimedia.org/r/402424
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: merged
Gerrit-Change-Id: I0eede05bd221975a2fc4c7bcd7c5b8bbf5478fac
Gerrit-PatchSet: 2
Gerrit-Project: operations/puppet/cdh
Gerrit-Branch: master
Gerrit-Owner: Ottomata 
Gerrit-Reviewer: Elukey 
Gerrit-Reviewer: Ottomata 
Gerrit-Reviewer: jenkins-bot <>

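The `core_site_extra_properties` hash introduced by this change is rendered by the ERB loop added at the end of core-site.xml.erb: entries are emitted one `<property>` element at a time, sorted by key so that repeated Puppet runs produce byte-identical files. A rough Python equivalent of that loop (the property names below are just examples):

```python
def render_extra_properties(props):
    """Mimic the ERB loop: emit one Hadoop <property> element per entry,
    sorted by key for a stable rendering across runs."""
    lines = []
    for key, value in sorted(props.items()):
        lines.append("  <property>")
        lines.append(f"    <name>{key}</name>")
        lines.append(f"    <value>{value}</value>")
        lines.append("  </property>")
    return "\n".join(lines)

xml = render_extra_properties({
    "hadoop.proxyuser.superset.hosts": "thorium.eqiad.wmnet",
    "hadoop.proxyuser.superset.groups": "analytics-users",
})
print(xml)
```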


[MediaWiki-commits] [Gerrit] operations/puppet[production]: Render role's analytics refinery logrotate from profile

2018-01-08 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/402857 )

Change subject: Render role's analytics refinery logrotate from profile
..


Render role's analytics refinery logrotate from profile

This is temporary while we migrate away from roles

Bug: T167790
Change-Id: I2a4cb51f84ed7bd169278cbb76c8e5f9ed8d450b
---
M modules/role/manifests/analytics_cluster/refinery.pp
1 file changed, 1 insertion(+), 1 deletion(-)

Approvals:
  Ottomata: Looks good to me, approved
  jenkins-bot: Verified



diff --git a/modules/role/manifests/analytics_cluster/refinery.pp 
b/modules/role/manifests/analytics_cluster/refinery.pp
index 63e590d..342daad 100644
--- a/modules/role/manifests/analytics_cluster/refinery.pp
+++ b/modules/role/manifests/analytics_cluster/refinery.pp
@@ -66,7 +66,7 @@
 }
 
 logrotate::conf { 'refinery':
-source  => 
'puppet:///modules/role/analytics_cluster/refinery-logrotate.conf',
+source  => 
'puppet:///modules/profile/analytics/refinery-logrotate.conf',
 require => File[$log_dir],
 }
 }

-- 
To view, visit https://gerrit.wikimedia.org/r/402857
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: merged
Gerrit-Change-Id: I2a4cb51f84ed7bd169278cbb76c8e5f9ed8d450b
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 
Gerrit-Reviewer: Ottomata 
Gerrit-Reviewer: jenkins-bot <>

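The one-line fix above follows from Puppet's fileserver convention: a `puppet:///modules/<module>/<rest>` URL resolves to `modules/<module>/files/<rest>` in the repository, so once the file moved from the role module to the profile module, the `source` URL had to move with it. A small sketch of that mapping:

```python
def fileserver_path(puppet_url):
    """Map a puppet:/// file URL to its path in the puppet repo.
    Convention: puppet:///modules/<module>/<rest> is served from
    modules/<module>/files/<rest>."""
    prefix = "puppet:///modules/"
    if not puppet_url.startswith(prefix):
        raise ValueError("not a modules fileserver URL")
    module, _, rest = puppet_url[len(prefix):].partition("/")
    return f"modules/{module}/files/{rest}"

print(fileserver_path("puppet:///modules/profile/analytics/refinery-logrotate.conf"))
# modules/profile/files/analytics/refinery-logrotate.conf
```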


[MediaWiki-commits] [Gerrit] operations/puppet[production]: Render role's analytics refinery logrotate from profile

2018-01-08 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/402857 )

Change subject: Render role's analytics refinery logrotate from profile
..

Render role's analytics refinery logrotate from profile

This is temporary while we migrate away from roles

Bug: T167790
Change-Id: I2a4cb51f84ed7bd169278cbb76c8e5f9ed8d450b
---
M modules/role/manifests/analytics_cluster/refinery.pp
1 file changed, 1 insertion(+), 1 deletion(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet 
refs/changes/57/402857/1

diff --git a/modules/role/manifests/analytics_cluster/refinery.pp 
b/modules/role/manifests/analytics_cluster/refinery.pp
index 63e590d..342daad 100644
--- a/modules/role/manifests/analytics_cluster/refinery.pp
+++ b/modules/role/manifests/analytics_cluster/refinery.pp
@@ -66,7 +66,7 @@
 }
 
 logrotate::conf { 'refinery':
-source  => 
'puppet:///modules/role/analytics_cluster/refinery-logrotate.conf',
+source  => 
'puppet:///modules/profile/analytics/refinery-logrotate.conf',
 require => File[$log_dir],
 }
 }

-- 
To view, visit https://gerrit.wikimedia.org/r/402857
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: I2a4cb51f84ed7bd169278cbb76c8e5f9ed8d450b
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 



[MediaWiki-commits] [Gerrit] operations/puppet[production]: Move refinery::job::data_check from stat1005 to analytics1003

2018-01-08 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/402853 )

Change subject: Move refinery::job::data_check from stat1005 to analytics1003
..


Move refinery::job::data_check from stat1005 to analytics1003

as part of profile refactor

Bug: T167790
Change-Id: I3cb5acf58d11845cb9431c2bf1fa3fd530bad102
---
M manifests/site.pp
M modules/role/manifests/analytics_cluster/coordinator.pp
2 files changed, 1 insertion(+), 5 deletions(-)

Approvals:
  Ottomata: Looks good to me, approved
  jenkins-bot: Verified



diff --git a/manifests/site.pp b/manifests/site.pp
index ffd6b39..831f574 100644
--- a/manifests/site.pp
+++ b/manifests/site.pp
@@ -2045,10 +2045,6 @@
 # Include analytics/refinery deployment target.
 include ::role::analytics_cluster::refinery
 
-# Include analytics/refinery checks that send email about
-# webrequest partitions faultyness.
-include ::role::analytics_cluster::refinery::job::data_check
-
 # Set up a read only rsync module to allow access
 # to public data generated by the Analytics Cluster.
 include ::role::analytics_cluster::rsyncd
diff --git a/modules/role/manifests/analytics_cluster/coordinator.pp 
b/modules/role/manifests/analytics_cluster/coordinator.pp
index 507c99f..a96 100644
--- a/modules/role/manifests/analytics_cluster/coordinator.pp
+++ b/modules/role/manifests/analytics_cluster/coordinator.pp
@@ -53,7 +53,7 @@
 # Camus crons import data into
 # from Kafka into HDFS.
 include ::profile::analytics::refinery::job::camus
-
+include ::profile::analytics::refinery::job::data_check
 include ::profile::analytics::refinery::job::data_drop
 include ::profile::analytics::refinery::job::project_namespace_map
 include ::profile::analytics::refinery::job::sqoop_mediawiki

-- 
To view, visit https://gerrit.wikimedia.org/r/402853
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: merged
Gerrit-Change-Id: I3cb5acf58d11845cb9431c2bf1fa3fd530bad102
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 
Gerrit-Reviewer: Ottomata 
Gerrit-Reviewer: jenkins-bot <>



[MediaWiki-commits] [Gerrit] operations/puppet[production]: Move refinery::job::data_check from stat1005 to analytics1003

2018-01-08 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/402853 )

Change subject: Move refinery::job::data_check from stat1005 to analytics1003
..

Move refinery::job::data_check from stat1005 to analytics1003

as part of profile refactor

Bug: T167790
Change-Id: I3cb5acf58d11845cb9431c2bf1fa3fd530bad102
---
M manifests/site.pp
M modules/role/manifests/analytics_cluster/coordinator.pp
2 files changed, 1 insertion(+), 5 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet 
refs/changes/53/402853/1

diff --git a/manifests/site.pp b/manifests/site.pp
index ffd6b39..831f574 100644
--- a/manifests/site.pp
+++ b/manifests/site.pp
@@ -2045,10 +2045,6 @@
 # Include analytics/refinery deployment target.
 include ::role::analytics_cluster::refinery
 
-# Include analytics/refinery checks that send email about
-# webrequest partitions faultyness.
-include ::role::analytics_cluster::refinery::job::data_check
-
 # Set up a read only rsync module to allow access
 # to public data generated by the Analytics Cluster.
 include ::role::analytics_cluster::rsyncd
diff --git a/modules/role/manifests/analytics_cluster/coordinator.pp 
b/modules/role/manifests/analytics_cluster/coordinator.pp
index 507c99f..a96 100644
--- a/modules/role/manifests/analytics_cluster/coordinator.pp
+++ b/modules/role/manifests/analytics_cluster/coordinator.pp
@@ -53,7 +53,7 @@
 # Camus crons import data into
 # from Kafka into HDFS.
 include ::profile::analytics::refinery::job::camus
-
+include ::profile::analytics::refinery::job::data_check
 include ::profile::analytics::refinery::job::data_drop
 include ::profile::analytics::refinery::job::project_namespace_map
 include ::profile::analytics::refinery::job::sqoop_mediawiki

-- 
To view, visit https://gerrit.wikimedia.org/r/402853
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: I3cb5acf58d11845cb9431c2bf1fa3fd530bad102
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 



[MediaWiki-commits] [Gerrit] operations/puppet[production]: Parameterize kafka_cluster_name in refinery job camus

2018-01-08 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/402847 )

Change subject: Parameterize kafka_cluster_name in refinery job camus
..


Parameterize kafka_cluster_name in refinery job camus

Bug: T166248
Change-Id: I0be353c334117808ef585bf900c501de37378372
---
M modules/profile/manifests/analytics/refinery/job/camus.pp
1 file changed, 9 insertions(+), 2 deletions(-)

Approvals:
  Ottomata: Verified; Looks good to me, approved
  jenkins-bot: Verified



diff --git a/modules/profile/manifests/analytics/refinery/job/camus.pp 
b/modules/profile/manifests/analytics/refinery/job/camus.pp
index fa84606..c471806 100644
--- a/modules/profile/manifests/analytics/refinery/job/camus.pp
+++ b/modules/profile/manifests/analytics/refinery/job/camus.pp
@@ -2,10 +2,17 @@
 # Uses camus::job to set up cron jobs to
 # import data from Kafka into Hadoop.
 #
-class profile::analytics::refinery::job::camus {
+# == Parameters
+# [*kafka_cluster_name*]
+#   Name of the Kafka cluster in the kafka_clusters hash that will be used
+#   to look up brokers from which Camus will import data.  Default: analytics
+#
+class profile::analytics::refinery::job::camus(
+$kafka_cluster_name = 
hiera('profile::analytics::refinery::job::camus::kafka_cluster_name', 
'analytics')
+) {
 require ::profile::analytics::refinery
 
-$kafka_config = kafka_config('analytics')
+$kafka_config = kafka_config($kafka_cluster_name)
 
 # Make all uses of camus::job set default kafka_brokers and camus_jar.
 # If you build a new camus or refinery, and you want to use it, you'll

-- 
To view, visit https://gerrit.wikimedia.org/r/402847
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: merged
Gerrit-Change-Id: I0be353c334117808ef585bf900c501de37378372
Gerrit-PatchSet: 2
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 
Gerrit-Reviewer: Ottomata 
Gerrit-Reviewer: jenkins-bot <>



[MediaWiki-commits] [Gerrit] operations/puppet[production]: Parameterize kafka_cluster_name in refinery job camus

2018-01-08 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/402847 )

Change subject: Parameterize kafka_cluster_name in refinery job camus
..

Parameterize kafka_cluster_name in refinery job camus

Bug: T166248
Change-Id: I0be353c334117808ef585bf900c501de37378372
---
M modules/profile/manifests/analytics/refinery/job/camus.pp
1 file changed, 8 insertions(+), 1 deletion(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet 
refs/changes/47/402847/1

diff --git a/modules/profile/manifests/analytics/refinery/job/camus.pp 
b/modules/profile/manifests/analytics/refinery/job/camus.pp
index fa84606..cf8f165 100644
--- a/modules/profile/manifests/analytics/refinery/job/camus.pp
+++ b/modules/profile/manifests/analytics/refinery/job/camus.pp
@@ -2,7 +2,14 @@
 # Uses camus::job to set up cron jobs to
 # import data from Kafka into Hadoop.
 #
-class profile::analytics::refinery::job::camus {
+# == Parameters
+# [*kafka_cluster_name*]
+#   Name of the Kafka cluster in the kafka_clusters hash that will be used
+#   to look up brokers from which Camus will import data.  Default: analytics
+#
+class profile::analytics::refinery::job::camus(
+$kafka_cluster_name = 
hiera('profile::analytics::refinery::job::camus::kafka_cluster_name', 
'analytics')
+) {
 require ::profile::analytics::refinery
 
 $kafka_config = kafka_config('analytics')

-- 
To view, visit https://gerrit.wikimedia.org/r/402847
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: I0be353c334117808ef585bf900c501de37378372
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 

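The new `$kafka_cluster_name` parameter is resolved via `hiera()` with `'analytics'` as the fallback, and `kafka_config()` then turns the cluster name into broker hosts that `suffix(..., ':9092')` completes into connect strings. A simplified Python model of those two lookups (the broker host names are made up, and the real `kafka_config()` function returns a richer structure than this):

```python
def hiera_lookup(hiera_data, key, default):
    """Approximate Puppet's hiera(key, default): return the configured
    value, or the default when the key is absent."""
    return hiera_data.get(key, default)

def kafka_brokers(kafka_clusters, cluster_name, port=9092):
    """Approximate kafka_config() + suffix(): resolve a cluster name to
    its broker hosts and append the Kafka port to each."""
    hosts = kafka_clusters[cluster_name]["brokers"]
    return [f"{h}:{port}" for h in hosts]

clusters = {"analytics": {"brokers": ["kafka1001.example", "kafka1002.example"]}}
key = "profile::analytics::refinery::job::camus::kafka_cluster_name"
name = hiera_lookup({}, key, "analytics")  # no hiera override -> default cluster
print(kafka_brokers(clusters, name))
```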


[MediaWiki-commits] [Gerrit] operations/puppet[production]: Move role refinery::job::* to profiles

2018-01-08 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/402843 )

Change subject: Move role refinery::job::* to profiles
..


Move role refinery::job::* to profiles

Bug: T167790
Change-Id: I8003191d35e38ea418a6cdabf656ace4269ecd32
---
M manifests/site.pp
R modules/profile/files/analytics/refinery-logrotate.conf
M modules/profile/manifests/analytics/refinery.pp
R modules/profile/manifests/analytics/refinery/job/camus.pp
R modules/profile/manifests/analytics/refinery/job/data_check.pp
A modules/profile/manifests/analytics/refinery/job/data_drop.pp
A modules/profile/manifests/analytics/refinery/job/guard.pp
R modules/profile/manifests/analytics/refinery/job/json_refine.pp
R modules/profile/manifests/analytics/refinery/job/json_refine_job.pp
A modules/profile/manifests/analytics/refinery/job/project_namespace_map.pp
A modules/profile/manifests/analytics/refinery/job/sqoop_mediawiki.pp
A modules/profile/manifests/analytics/refinery/source.pp
M modules/role/manifests/analytics_cluster/coordinator.pp
D modules/role/manifests/analytics_cluster/refinery/job/data_drop.pp
D modules/role/manifests/analytics_cluster/refinery/job/guard.pp
D modules/role/manifests/analytics_cluster/refinery/job/project_namespace_map.pp
D modules/role/manifests/analytics_cluster/refinery/job/sqoop_mediawiki.pp
17 files changed, 244 insertions(+), 224 deletions(-)

Approvals:
  Ottomata: Looks good to me, approved
  jenkins-bot: Verified



diff --git a/manifests/site.pp b/manifests/site.pp
index 81992cf..ffd6b39 100644
--- a/manifests/site.pp
+++ b/manifests/site.pp
@@ -2049,10 +2049,6 @@
 # webrequest partitions faultyness.
 include ::role::analytics_cluster::refinery::job::data_check
 
-# Include analytics/refinery/source guard checks
-# Disabled due to T166937
-# analytics_cluster::refinery::job::guard,
-
 # Set up a read only rsync module to allow access
 # to public data generated by the Analytics Cluster.
 include ::role::analytics_cluster::rsyncd
diff --git a/modules/role/files/analytics_cluster/refinery-logrotate.conf 
b/modules/profile/files/analytics/refinery-logrotate.conf
similarity index 100%
rename from modules/role/files/analytics_cluster/refinery-logrotate.conf
rename to modules/profile/files/analytics/refinery-logrotate.conf
diff --git a/modules/profile/manifests/analytics/refinery.pp 
b/modules/profile/manifests/analytics/refinery.pp
index cbf308c..38a05f6 100644
--- a/modules/profile/manifests/analytics/refinery.pp
+++ b/modules/profile/manifests/analytics/refinery.pp
@@ -67,7 +67,7 @@
 }
 
 logrotate::conf { 'refinery':
-source  => 
'puppet:///modules/role/analytics_cluster/refinery-logrotate.conf',
+source  => 
'puppet:///modules/profile/analytics/refinery-logrotate.conf',
 require => File[$log_dir],
 }
 }
diff --git a/modules/role/manifests/analytics_cluster/refinery/job/camus.pp 
b/modules/profile/manifests/analytics/refinery/job/camus.pp
similarity index 67%
rename from modules/role/manifests/analytics_cluster/refinery/job/camus.pp
rename to modules/profile/manifests/analytics/refinery/job/camus.pp
index 7a139ad..fa84606 100644
--- a/modules/role/manifests/analytics_cluster/refinery/job/camus.pp
+++ b/modules/profile/manifests/analytics/refinery/job/camus.pp
@@ -1,9 +1,9 @@
-# == Class role::analytics_cluster::refinery::job::camus
+# == Class profile::analytics::refinery::job::camus
 # Uses camus::job to set up cron jobs to
 # import data from Kafka into Hadoop.
 #
-class role::analytics_cluster::refinery::job::camus {
-require ::role::analytics_cluster::refinery
+class profile::analytics::refinery::job::camus {
+require ::profile::analytics::refinery
 
 $kafka_config = kafka_config('analytics')
 
@@ -13,10 +13,10 @@
 # for a particular camus::job instance by setting the parameter on
 # the camus::job declaration.
 Camus::Job {
-script=> "export 
PYTHONPATH=\${PYTHONPATH}:${role::analytics_cluster::refinery::path}/python && 
${role::analytics_cluster::refinery::path}/bin/camus",
+script=> "export 
PYTHONPATH=\${PYTHONPATH}:${profile::analytics::refinery::path}/python && 
${profile::analytics::refinery::path}/bin/camus",
 kafka_brokers => suffix($kafka_config['brokers']['array'], ':9092'),
-camus_jar => 
"${role::analytics_cluster::refinery::path}/artifacts/org/wikimedia/analytics/camus-wmf/camus-wmf-0.1.0-wmf7.jar",
-check_jar => 
"${role::analytics_cluster::refinery::path}/artifacts/org/wikimedia/analytics/refinery/refinery-camus-0.0.35.jar",
+camus_jar => 
"${profile::analytics::refinery::path}/artifacts/org/wikimedia/analytics/camus-wmf/camus-wmf-0.1.0-wmf7.jar",
+check_jar => 
"${profile::analytics::refinery::path}/artifacts/org/wikimedia/analytics/refinery/refinery-camus-0.0.35.jar",
 }
 
 # Import webrequest_* topics into 

[MediaWiki-commits] [Gerrit] operations/puppet[production]: Move role refinery::job::* to profiles

2018-01-08 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/402843 )

Change subject: Move role refinery::job::* to profiles
..

Move role refinery::job::* to profiles

Bug: T167790
Change-Id: I8003191d35e38ea418a6cdabf656ace4269ecd32
---
R modules/profile/files/analytics/refinery-logrotate.conf
M modules/profile/manifests/analytics/refinery.pp
R modules/profile/manifests/analytics/refinery/job/camus.pp
R modules/profile/manifests/analytics/refinery/job/data_check.pp
A modules/profile/manifests/analytics/refinery/job/data_drop.pp
A modules/profile/manifests/analytics/refinery/job/guard.pp
R modules/profile/manifests/analytics/refinery/job/json_refine.pp
R modules/profile/manifests/analytics/refinery/job/json_refine_job.pp
A modules/profile/manifests/analytics/refinery/job/project_namespace_map.pp
A modules/profile/manifests/analytics/refinery/job/sqoop_mediawiki.pp
M modules/role/manifests/analytics_cluster/coordinator.pp
D modules/role/manifests/analytics_cluster/refinery/job/data_drop.pp
D modules/role/manifests/analytics_cluster/refinery/job/guard.pp
D modules/role/manifests/analytics_cluster/refinery/job/project_namespace_map.pp
D modules/role/manifests/analytics_cluster/refinery/job/sqoop_mediawiki.pp
15 files changed, 221 insertions(+), 220 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet 
refs/changes/43/402843/1

diff --git a/modules/role/files/analytics_cluster/refinery-logrotate.conf 
b/modules/profile/files/analytics/refinery-logrotate.conf
similarity index 100%
rename from modules/role/files/analytics_cluster/refinery-logrotate.conf
rename to modules/profile/files/analytics/refinery-logrotate.conf
diff --git a/modules/profile/manifests/analytics/refinery.pp 
b/modules/profile/manifests/analytics/refinery.pp
index cbf308c..38a05f6 100644
--- a/modules/profile/manifests/analytics/refinery.pp
+++ b/modules/profile/manifests/analytics/refinery.pp
@@ -67,7 +67,7 @@
 }
 
 logrotate::conf { 'refinery':
-source  => 
'puppet:///modules/role/analytics_cluster/refinery-logrotate.conf',
+source  => 
'puppet:///modules/profile/analytics/refinery-logrotate.conf',
 require => File[$log_dir],
 }
 }
diff --git a/modules/role/manifests/analytics_cluster/refinery/job/camus.pp 
b/modules/profile/manifests/analytics/refinery/job/camus.pp
similarity index 67%
rename from modules/role/manifests/analytics_cluster/refinery/job/camus.pp
rename to modules/profile/manifests/analytics/refinery/job/camus.pp
index 7a139ad..fa84606 100644
--- a/modules/role/manifests/analytics_cluster/refinery/job/camus.pp
+++ b/modules/profile/manifests/analytics/refinery/job/camus.pp
@@ -1,9 +1,9 @@
-# == Class role::analytics_cluster::refinery::job::camus
+# == Class profile::analytics::refinery::job::camus
 # Uses camus::job to set up cron jobs to
 # import data from Kafka into Hadoop.
 #
-class role::analytics_cluster::refinery::job::camus {
-require ::role::analytics_cluster::refinery
+class profile::analytics::refinery::job::camus {
+require ::profile::analytics::refinery
 
 $kafka_config = kafka_config('analytics')
 
@@ -13,10 +13,10 @@
 # for a particular camus::job instance by setting the parameter on
 # the camus::job declaration.
 Camus::Job {
-script=> "export 
PYTHONPATH=\${PYTHONPATH}:${role::analytics_cluster::refinery::path}/python && 
${role::analytics_cluster::refinery::path}/bin/camus",
+script=> "export 
PYTHONPATH=\${PYTHONPATH}:${profile::analytics::refinery::path}/python && 
${profile::analytics::refinery::path}/bin/camus",
 kafka_brokers => suffix($kafka_config['brokers']['array'], ':9092'),
-camus_jar => 
"${role::analytics_cluster::refinery::path}/artifacts/org/wikimedia/analytics/camus-wmf/camus-wmf-0.1.0-wmf7.jar",
-check_jar => 
"${role::analytics_cluster::refinery::path}/artifacts/org/wikimedia/analytics/refinery/refinery-camus-0.0.35.jar",
+camus_jar => 
"${profile::analytics::refinery::path}/artifacts/org/wikimedia/analytics/camus-wmf/camus-wmf-0.1.0-wmf7.jar",
+check_jar => 
"${profile::analytics::refinery::path}/artifacts/org/wikimedia/analytics/refinery/refinery-camus-0.0.35.jar",
 }
 
 # Import webrequest_* topics into /wmf/data/raw/webrequest
@@ -45,7 +45,7 @@
 minute  => '15',
 # refinery-camus contains some custom decoder classes which
 # are needed to import Avro binary data.
-libjars => 
"${role::analytics_cluster::refinery::path}/artifacts/org/wikimedia/analytics/refinery/refinery-camus-0.0.28.jar",
+libjars => 
"${profile::analytics::refinery::path}/artifacts/org/wikimedia/analytics/refinery/refinery-camus-0.0.28.jar",
 }
 
 # Import eventbus mediawiki.job queue topics into 
/wmf/data/raw/mediawiki_job
diff --git 
a/modules/role/manifests/analytics_cluster/refinery/job/data_check.pp 

[MediaWiki-commits] [Gerrit] operations/puppet[production]: Allow superset to submit jobs to Hadoop as logged in users

2018-01-05 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/402425 )

Change subject: Allow superset to submit jobs to Hadoop as logged in users
..

Allow superset to submit jobs to Hadoop as logged in users

Change-Id: I431ff4b85300cdbe77666b0d5f2f94dd9417250e
---
M modules/cdh
M modules/profile/manifests/hadoop/common.pp
2 files changed, 8 insertions(+), 1 deletion(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet 
refs/changes/25/402425/1

diff --git a/modules/cdh b/modules/cdh
index b8806c0..bd4624f 160000
--- a/modules/cdh
+++ b/modules/cdh
@@ -1 +1 @@
-Subproject commit b8806c0fe7e1f8f07313a27ae5ce5ca8c8689e66
+Subproject commit bd4624f1b3292bfabdda4291f25f9523a14f7853
diff --git a/modules/profile/manifests/hadoop/common.pp 
b/modules/profile/manifests/hadoop/common.pp
index 71ede0d..26f3f28 100644
--- a/modules/profile/manifests/hadoop/common.pp
+++ b/modules/profile/manifests/hadoop/common.pp
@@ -231,6 +231,13 @@
 # Yarn App Master possible port ranges
 yarn_app_mapreduce_am_job_client_port_range => '55000-55199',
 
+core_site_extra_properties  => {
+# Allow superset running as 'superset' user on thorium.eqiad.wmnet
+# to run jobs as users in the analytics-users and 
analytics-privatedata-users groups.
+'hadoop.proxyusers.superset.hosts' => 'thorium.eqiad.wmnet',
+'hadoop.proxyusers.superset.groups' => 
'analytics-users,analytics-privatedata-users',
+},
+
 yarn_site_extra_properties  => {
 # Enable FairScheduler preemption. This will allow the essential 
queue
 # to preempt non-essential jobs.

-- 
To view, visit https://gerrit.wikimedia.org/r/402425
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: I431ff4b85300cdbe77666b0d5f2f94dd9417250e
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 



[MediaWiki-commits] [Gerrit] operations...cdh[master]: Fixes to better configure hadoop.proxyuser

2018-01-05 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/402424 )

Change subject: Fixes to better configure hadoop.proxyuser
..

Fixes to better configure hadoop.proxyuser

- remove unused and hardcoded llama impala user
- always configure hue and oozie proxyusers (no-op)
- conditionally render httpfs user (no-op)
- add core_site_extra_properties param to add other properties, including more 
proxyusers

This will be used to let superset proxy the logged in LDAP user
when running queries, so users can issue hive queries.

Change-Id: I0eede05bd221975a2fc4c7bcd7c5b8bbf5478fac
---
M manifests/hadoop.pp
M manifests/hadoop/defaults.pp
M templates/hadoop/core-site.xml.erb
3 files changed, 29 insertions(+), 26 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet/cdh 
refs/changes/24/402424/1

diff --git a/manifests/hadoop.pp b/manifests/hadoop.pp
index 48b22aa..15fa918 100644
--- a/manifests/hadoop.pp
+++ b/manifests/hadoop.pp
@@ -124,6 +124,10 @@
 #   $fair_scheduler_template  - The fair-scheduler.xml queue 
configuration template.
 #   If you set this to false or 
undef, FairScheduler will
 #   be disabled.  Default: 
cdh/hadoop/fair-scheduler.xml.erb
+#
+#   $core_site_extra_properties   - Hash of extra property names to values that will
+#                                   be rendered in core-site.xml.erb.  Default: undef
+#
 #   $yarn_site_extra_properties   - Hash of extra property names to values that will
 #                                   be rendered in yarn-site.xml.erb.  Default: undef
 #
@@ -191,6 +195,7 @@
 $gelf_logging_host   = 
$::cdh::hadoop::defaults::gelf_logging_host,
 $gelf_logging_port   = 
$::cdh::hadoop::defaults::gelf_logging_port,
 $fair_scheduler_template = 
$::cdh::hadoop::defaults::fair_scheduler_template,
+$core_site_extra_properties  = $::cdh::hadoop::defaults::core_site_extra_properties,
 $yarn_site_extra_properties  = 
$::cdh::hadoop::defaults::yarn_site_extra_properties,
 ) inherits cdh::hadoop::defaults
 {
diff --git a/manifests/hadoop/defaults.pp b/manifests/hadoop/defaults.pp
index 617d41e..e1807b5 100644
--- a/manifests/hadoop/defaults.pp
+++ b/manifests/hadoop/defaults.pp
@@ -60,6 +60,7 @@
 $yarn_log_aggregation_retain_check_interval_seconds = 86400
 
 $fair_scheduler_template = 
'cdh/hadoop/fair-scheduler.xml.erb'
+$core_site_extra_properties  = undef
 $yarn_site_extra_properties  = undef
 
 $hadoop_heapsize = undef
diff --git a/templates/hadoop/core-site.xml.erb 
b/templates/hadoop/core-site.xml.erb
index a8df776..a4b777e 100644
--- a/templates/hadoop/core-site.xml.erb
+++ b/templates/hadoop/core-site.xml.erb
@@ -17,16 +17,25 @@
     <name>ha.zookeeper.quorum</name>
     <value><%= Array(@zookeeper_hosts).sort.join(',') %></value>
   </property>
-<% end -%>
 
+<% end -%>
 <% if @io_file_buffer_size -%>
   <property>
     <name>io.file.buffer.size</name>
     <value><%= @io_file_buffer_size %></value>
   </property>
-<% end -%>
 
-<% if @webhdfs_enabled or @httpfs_enabled -%>
+<% end -%>
+  <!-- ... -->
+  <property>
+    <name>hadoop.proxyuser.mapred.hosts</name>
+    <value>*</value>
+  </property>
+  <property>
+    <name>hadoop.proxyuser.mapred.groups</name>
+    <value>*</value>
+  </property>
+
   <!-- ... -->
   <property>
     <name>hadoop.proxyuser.hue.hosts</name>
@@ -46,9 +55,9 @@
     <name>hadoop.proxyuser.oozie.groups</name>
     <value>*</value>
   </property>
-<% end -%>
 
 <% if @httpfs_enabled -%>
+  <!-- ... -->
   <property>
     <name>hadoop.proxyuser.httpfs.hosts</name>
     <value>*</value>
@@ -57,34 +66,22 @@
     <name>hadoop.proxyuser.httpfs.groups</name>
     <value>*</value>
   </property>
+
 <% end -%>
-
-  <!-- ... -->
-  <property>
-    <name>hadoop.proxyuser.mapred.hosts</name>
-    <value>*</value>
-  </property>
-  <property>
-    <name>hadoop.proxyuser.mapred.groups</name>
-    <value>*</value>
-  </property>
-
-  <!-- ... -->
-  <property>
-    <name>hadoop.proxyuser.llama.hosts</name>
-    <value>*</value>
-  </property>
-  <property>
-    <name>hadoop.proxyuser.llama.groups</name>
-    <value>*</value>
-  </property>
-
 <% if @net_topology_script_template -%>
   <!-- ... -->
   <property>
     <name>net.topology.script.file.name</name>
     <value><%= @net_topology_script_path %></value>
   </property>
-<% end -%>
 
+<% end -%>
+<% if @core_site_extra_properties -%>
+<% @core_site_extra_properties.sort.map do |key, value| -%>
+  <property>
+    <name><%= key %></name>
+    <value><%= value %></value>
+  </property>
+
+<% end -%>
+<% end -%>
 

-- 
To view, visit https://gerrit.wikimedia.org/r/402424
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: I0eede05bd221975a2fc4c7bcd7c5b8bbf5478fac
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet/cdh
Gerrit-Branch: master
Gerrit-Owner: Ottomata 



[MediaWiki-commits] [Gerrit] operations/puppet[production]: Use shell username instead of ldap CN to authenticate with s...

2018-01-05 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/402421 )

Change subject: Use shell username instead of ldap CN to authenticate with 
superset
..


Use shell username instead of ldap CN to authenticate with superset

Change-Id: If3baefc76103609dc0dcd3767352918160b284a1
---
M modules/superset/templates/superset.wikimedia.org.erb
1 file changed, 2 insertions(+), 2 deletions(-)

Approvals:
  Ottomata: Looks good to me, approved
  jenkins-bot: Verified



diff --git a/modules/superset/templates/superset.wikimedia.org.erb 
b/modules/superset/templates/superset.wikimedia.org.erb
index 8d82904..40e23ab 100644
--- a/modules/superset/templates/superset.wikimedia.org.erb
+++ b/modules/superset/templates/superset.wikimedia.org.erb
@@ -19,12 +19,12 @@
 
 
 
-AuthName "WMF Labs (use wiki login name not shell)"
+AuthName "WMF LDAP (use shell username, not Wikitech name)"
 AuthType Basic
 AuthBasicProvider ldap
 AuthLDAPBindDN cn=proxyagent,ou=profile,dc=wikimedia,dc=org
 AuthLDAPBindPassword <%= @proxypass %>
-AuthLDAPURL "ldaps://ldap-labs.eqiad.wikimedia.org 
ldap-labs.codfw.wikimedia.org/ou=people,dc=wikimedia,dc=org?cn"
+AuthLDAPURL "ldaps://ldap-labs.eqiad.wikimedia.org 
ldap-labs.codfw.wikimedia.org/ou=people,dc=wikimedia,dc=org?uid"
 Require ldap-group cn=wmf,ou=groups,dc=wikimedia,dc=org
 Require ldap-group cn=nda,ou=groups,dc=wikimedia,dc=org
 # Set a header so that superset/flask can authenticate the user.

-- 
To view, visit https://gerrit.wikimedia.org/r/402421
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: merged
Gerrit-Change-Id: If3baefc76103609dc0dcd3767352918160b284a1
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 
Gerrit-Reviewer: Ottomata 
Gerrit-Reviewer: jenkins-bot <>



[MediaWiki-commits] [Gerrit] operations/puppet[production]: Use shell username instead of ldap CN to authenticate with s...

2018-01-05 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/402421 )

Change subject: Use shell username instead of ldap CN to authenticate with 
superset
..

Use shell username instead of ldap CN to authenticate with superset

Change-Id: If3baefc76103609dc0dcd3767352918160b284a1
---
M modules/superset/templates/superset.wikimedia.org.erb
1 file changed, 2 insertions(+), 2 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet 
refs/changes/21/402421/1

diff --git a/modules/superset/templates/superset.wikimedia.org.erb 
b/modules/superset/templates/superset.wikimedia.org.erb
index 8d82904..40e23ab 100644
--- a/modules/superset/templates/superset.wikimedia.org.erb
+++ b/modules/superset/templates/superset.wikimedia.org.erb
@@ -19,12 +19,12 @@
 
 
 
-AuthName "WMF Labs (use wiki login name not shell)"
+AuthName "WMF LDAP (use shell username, not Wikitech name)"
 AuthType Basic
 AuthBasicProvider ldap
 AuthLDAPBindDN cn=proxyagent,ou=profile,dc=wikimedia,dc=org
 AuthLDAPBindPassword <%= @proxypass %>
-AuthLDAPURL "ldaps://ldap-labs.eqiad.wikimedia.org 
ldap-labs.codfw.wikimedia.org/ou=people,dc=wikimedia,dc=org?cn"
+AuthLDAPURL "ldaps://ldap-labs.eqiad.wikimedia.org 
ldap-labs.codfw.wikimedia.org/ou=people,dc=wikimedia,dc=org?uid"
 Require ldap-group cn=wmf,ou=groups,dc=wikimedia,dc=org
 Require ldap-group cn=nda,ou=groups,dc=wikimedia,dc=org
 # Set a header so that superset/flask can authenticate the user.

-- 
To view, visit https://gerrit.wikimedia.org/r/402421
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: If3baefc76103609dc0dcd3767352918160b284a1
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 



[MediaWiki-commits] [Gerrit] operations/puppet[production]: Use python3 async gthread workers for superset

2018-01-05 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/402411 )

Change subject: Use python3 async gthread workers for superset
..


Use python3 async gthread workers for superset

Bug: T182688
Change-Id: Id2dcef636e107807cc3e3e171689bbaec2fad0e2
---
M modules/profile/manifests/superset.pp
M modules/superset/manifests/init.pp
2 files changed, 11 insertions(+), 9 deletions(-)

Approvals:
  Ottomata: Verified; Looks good to me, approved



diff --git a/modules/profile/manifests/superset.pp 
b/modules/profile/manifests/superset.pp
index 4878c7a..5319b60 100644
--- a/modules/profile/manifests/superset.pp
+++ b/modules/profile/manifests/superset.pp
@@ -36,14 +36,14 @@
 #   statsd host:port
 #
 class profile::superset(
-$workers   = hiera('profile::superset::workers', 1),
-$database_uri  = hiera('profile::superset::database_uri', 
'sqlite:var/lib/superset/superset.db'),
-$database_password = hiera('profile::superset::database_password', undef),
-$admin_user= hiera('profile::superset::admin_user', 'admin'),
-$admin_password= hiera('profile::superset::admin_password', 'admin'),
-$secret_key= hiera('profile::superset::secret_key', 
'not_really_a_secret_key'),
+$workers= hiera('profile::superset::workers', 1),
+$database_uri   = hiera('profile::superset::database_uri', 
'sqlite:var/lib/superset/superset.db'),
+$database_password  = hiera('profile::superset::database_password', undef),
+$admin_user = hiera('profile::superset::admin_user', 'admin'),
+$admin_password = hiera('profile::superset::admin_password', 'admin'),
+$secret_key = hiera('profile::superset::secret_key', 
'not_really_a_secret_key'),
 $ldap_proxy_enabled = hiera('profile::superset::ldap_proxy_enabled', 
false),
-$statsd= hiera('statsd', undef),
+$statsd = hiera('statsd', undef),
 ) {
 # If given $database_password, insert it into $database_uri.
 $full_database_uri = $database_password ? {
@@ -88,6 +88,8 @@
 
 class { '::superset':
 workers  => $workers,
+# gthread requires python3.
+worker_class => 'gthread',
 database_uri => $full_database_uri,
 secret_key   => $secret_key,
 admin_user   => $admin_user,
diff --git a/modules/superset/manifests/init.pp 
b/modules/superset/manifests/init.pp
index 8278b12..d4cc64f 100644
--- a/modules/superset/manifests/init.pp
+++ b/modules/superset/manifests/init.pp
@@ -27,7 +27,7 @@
 #   Number of gevent workers
 #
 # [*worker_class*]
-#   Gunicorn worker-class. sync or gevent.  Default: sync
+#   Gunicorn worker-class.  Default: sync
 #
 # [*admin_user*]
 #   Web UI admin user
@@ -72,7 +72,7 @@
 ) {
 requires_os('debian >= jessie')
 require_package(
-'python',
+'python3',
 'virtualenv',
 'firejail',
 )

-- 
To view, visit https://gerrit.wikimedia.org/r/402411
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: merged
Gerrit-Change-Id: Id2dcef636e107807cc3e3e171689bbaec2fad0e2
Gerrit-PatchSet: 2
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 
Gerrit-Reviewer: Ottomata 
Gerrit-Reviewer: jenkins-bot <>



[MediaWiki-commits] [Gerrit] operations/puppet[production]: Use python3 async gthread workers for superset

2018-01-05 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/402411 )

Change subject: Use python3 async gthread workers for superset
..

Use python3 async gthread workers for superset

Bug: T182688
Change-Id: Id2dcef636e107807cc3e3e171689bbaec2fad0e2
---
M modules/profile/manifests/superset.pp
M modules/superset/manifests/init.pp
2 files changed, 11 insertions(+), 9 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet 
refs/changes/11/402411/1

diff --git a/modules/profile/manifests/superset.pp 
b/modules/profile/manifests/superset.pp
index 4878c7a..5319b60 100644
--- a/modules/profile/manifests/superset.pp
+++ b/modules/profile/manifests/superset.pp
@@ -36,14 +36,14 @@
 #   statsd host:port
 #
 class profile::superset(
-$workers   = hiera('profile::superset::workers', 1),
-$database_uri  = hiera('profile::superset::database_uri', 
'sqlite:var/lib/superset/superset.db'),
-$database_password = hiera('profile::superset::database_password', undef),
-$admin_user= hiera('profile::superset::admin_user', 'admin'),
-$admin_password= hiera('profile::superset::admin_password', 'admin'),
-$secret_key= hiera('profile::superset::secret_key', 
'not_really_a_secret_key'),
+$workers= hiera('profile::superset::workers', 1),
+$database_uri   = hiera('profile::superset::database_uri', 
'sqlite:var/lib/superset/superset.db'),
+$database_password  = hiera('profile::superset::database_password', undef),
+$admin_user = hiera('profile::superset::admin_user', 'admin'),
+$admin_password = hiera('profile::superset::admin_password', 'admin'),
+$secret_key = hiera('profile::superset::secret_key', 
'not_really_a_secret_key'),
 $ldap_proxy_enabled = hiera('profile::superset::ldap_proxy_enabled', 
false),
-$statsd= hiera('statsd', undef),
+$statsd = hiera('statsd', undef),
 ) {
 # If given $database_password, insert it into $database_uri.
 $full_database_uri = $database_password ? {
@@ -88,6 +88,8 @@
 
 class { '::superset':
 workers  => $workers,
+# gthread requires python3.
+worker_class => 'gthread',
 database_uri => $full_database_uri,
 secret_key   => $secret_key,
 admin_user   => $admin_user,
diff --git a/modules/superset/manifests/init.pp 
b/modules/superset/manifests/init.pp
index 8278b12..d4cc64f 100644
--- a/modules/superset/manifests/init.pp
+++ b/modules/superset/manifests/init.pp
@@ -27,7 +27,7 @@
 #   Number of gevent workers
 #
 # [*worker_class*]
-#   Gunicorn worker-class. sync or gevent.  Default: sync
+#   Gunicorn worker-class.  Default: sync
 #
 # [*admin_user*]
 #   Web UI admin user
@@ -72,7 +72,7 @@
 ) {
 requires_os('debian >= jessie')
 require_package(
-'python',
+'python3',
 'virtualenv',
 'firejail',
 )

-- 
To view, visit https://gerrit.wikimedia.org/r/402411
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: Id2dcef636e107807cc3e3e171689bbaec2fad0e2
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 



[MediaWiki-commits] [Gerrit] analytics...deploy[master]: Update build_wheels.sh to python3; update build artifacts fo...

2018-01-05 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/402397 )

Change subject: Update build_wheels.sh to python3; update build artifacts for 
python3
..


Update build_wheels.sh to python3; update build artifacts for python3

create_virtualenv.sh shouldn't use --system-site-packages, as the
system-installed numpy can conflict with the numpy required by superset's pandas.

Bug: T182688

Change-Id: Ifbcd0e8aa2959f78a9c1c2daa0cda623a3c18663
---
R artifacts/jessie/Flask_AppBuilder-1.9.4-py3-none-any.whl
R artifacts/jessie/Flask_Babel-0.11.1-py3-none-any.whl
R artifacts/jessie/Flask_Cache-0.13.1-py3-none-any.whl
R artifacts/jessie/Flask_Login-0.2.11-py3-none-any.whl
R artifacts/jessie/Flask_Migrate-2.0.3-py3-none-any.whl
D artifacts/jessie/Flask_OpenID-1.2.5-py2-none-any.whl
A artifacts/jessie/Flask_OpenID-1.2.5-py3-none-any.whl
R artifacts/jessie/Flask_SQLAlchemy-2.1-py3-none-any.whl
R artifacts/jessie/Flask_Script-2.0.5-py3-none-any.whl
R artifacts/jessie/Flask_Testing-0.6.2-py3-none-any.whl
R artifacts/jessie/Mako-1.0.7-py3-none-any.whl
D artifacts/jessie/Markdown-2.6.8-py2-none-any.whl
A artifacts/jessie/Markdown-2.6.8-py3-none-any.whl
D artifacts/jessie/MarkupSafe-1.0-cp27-none-linux_x86_64.whl
A artifacts/jessie/MarkupSafe-1.0-cp34-cp34m-linux_x86_64.whl
R artifacts/jessie/PyHive-0.5.0-py3-none-any.whl
R artifacts/jessie/SQLAlchemy-1.1.9-cp34-cp34m-linux_x86_64.whl
R artifacts/jessie/SQLAlchemy_Utils-0.32.16-py3-none-any.whl
M artifacts/jessie/WTForms-2.1-py2.py3-none-any.whl
D artifacts/jessie/Werkzeug-0.12.2-py2.py3-none-any.whl
A artifacts/jessie/Werkzeug-0.14.1-py2.py3-none-any.whl
M artifacts/jessie/alembic-0.9.6-py2.py3-none-any.whl
D artifacts/jessie/anyjson-0.3.3-py2-none-any.whl
A artifacts/jessie/anyjson-0.3.3-py3-none-any.whl
D artifacts/jessie/asn1crypto-0.23.0-py2.py3-none-any.whl
A artifacts/jessie/asn1crypto-0.24.0-py2.py3-none-any.whl
D artifacts/jessie/backports.ssl_match_hostname-3.5.0.1-py2-none-any.whl
D artifacts/jessie/billiard-3.3.0.23-cp27-none-linux_x86_64.whl
A artifacts/jessie/billiard-3.3.0.23-py3-none-any.whl
D artifacts/jessie/cffi-1.11.2-cp27-none-linux_x86_64.whl
A artifacts/jessie/cffi-1.11.2-cp34-cp34m-linux_x86_64.whl
D artifacts/jessie/cryptography-1.9-cp27-none-linux_x86_64.whl
A artifacts/jessie/cryptography-1.9-cp34-cp34m-linux_x86_64.whl
A artifacts/jessie/defusedxml-0.5.0-py2.py3-none-any.whl
D artifacts/jessie/docutils-0.14-py2-none-any.whl
A artifacts/jessie/docutils-0.14-py3-none-any.whl
D artifacts/jessie/enum34-1.1.6-py2-none-any.whl
M artifacts/jessie/flower-0.9.1-py2.py3-none-any.whl
R artifacts/jessie/future-0.16.0-py3-none-any.whl
D artifacts/jessie/futures-3.2.0-py2-none-any.whl
R artifacts/jessie/humanize-0.5.1-py3-none-any.whl
D artifacts/jessie/ipaddress-1.0.18-py2-none-any.whl
R artifacts/jessie/itsdangerous-0.24-py3-none-any.whl
D artifacts/jessie/mysqlclient-1.3.12-cp27-none-linux_x86_64.whl
A artifacts/jessie/mysqlclient-1.3.12-cp34-cp34m-linux_x86_64.whl
D artifacts/jessie/numpy-1.13.3-cp27-none-linux_x86_64.whl
A artifacts/jessie/numpy-1.13.3-cp34-cp34m-linux_x86_64.whl
D artifacts/jessie/pandas-0.20.3-cp27-none-linux_x86_64.whl
A artifacts/jessie/pandas-0.20.3-cp34-cp34m-linux_x86_64.whl
R artifacts/jessie/parsedatetime-2.0-py3-none-any.whl
M artifacts/jessie/pycparser-2.18-py2.py3-none-any.whl
A artifacts/jessie/python3_openid-3.1.0-py3-none-any.whl
A artifacts/jessie/python_dateutil-2.6.1-py2.py3-none-any.whl
D artifacts/jessie/python_editor-1.0.3-py2-none-any.whl
A artifacts/jessie/python_editor-1.0.3-py3-none-any.whl
D artifacts/jessie/python_openid-2.2.5-py2-none-any.whl
D artifacts/jessie/sasl-0.2.1-cp27-none-linux_x86_64.whl
A artifacts/jessie/sasl-0.2.1-cp34-cp34m-linux_x86_64.whl
D artifacts/jessie/simplejson-3.10.0-cp27-none-linux_x86_64.whl
A artifacts/jessie/simplejson-3.10.0-cp34-cp34m-linux_x86_64.whl
A artifacts/jessie/six-1.11.0-py2.py3-none-any.whl
R artifacts/jessie/superset-0.20.6-py3-none-any.whl
D artifacts/jessie/thrift-0.10.0-cp27-none-linux_x86_64.whl
A artifacts/jessie/thrift-0.10.0-cp34-cp34m-linux_x86_64.whl
D artifacts/jessie/thrift_sasl-0.3.0-py2-none-any.whl
A artifacts/jessie/thrift_sasl-0.3.0-py3-none-any.whl
R artifacts/jessie/tornado-4.2-cp34-cp34m-linux_x86_64.whl
M build_wheels.sh
M create_virtualenv.sh
69 files changed, 33 insertions(+), 12 deletions(-)

Approvals:
  Ottomata: Verified; Looks good to me, approved
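A minimal sketch of the isolation this buys (paths and flags here are illustrative, not the deploy script itself): a venv created without `--system-site-packages` does not see system packages such as the distro numpy.

```shell
# Create an isolated venv; --without-pip just keeps the sketch self-contained.
# Inside it, sys.prefix differs from sys.base_prefix, and system
# site-packages are not on sys.path.
python3 -m venv --without-pip venv
./venv/bin/python -c 'import sys; print(sys.prefix != sys.base_prefix)'  # prints True
```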



diff --git a/artifacts/jessie/Flask_AppBuilder-1.9.4-py2-none-any.whl 
b/artifacts/jessie/Flask_AppBuilder-1.9.4-py3-none-any.whl
similarity index 97%
rename from artifacts/jessie/Flask_AppBuilder-1.9.4-py2-none-any.whl
rename to artifacts/jessie/Flask_AppBuilder-1.9.4-py3-none-any.whl
index 446446d..7f8898c 100644
--- a/artifacts/jessie/Flask_AppBuilder-1.9.4-py2-none-any.whl
+++ b/artifacts/jessie/Flask_AppBuilder-1.9.4-py3-none-any.whl
Binary files differ
diff --git 

[MediaWiki-commits] [Gerrit] analytics...deploy[master]: Update build_wheels.sh to python3; update build artifacts fo...

2018-01-05 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/402397 )

Change subject: Update build_wheels.sh to python3; update build artifacts for 
python3
..

Update build_wheels.sh to python3; update build artifacts for python3

Bug: T182688

Change-Id: Ifbcd0e8aa2959f78a9c1c2daa0cda623a3c18663
---
R artifacts/jessie/Flask_AppBuilder-1.9.4-py3-none-any.whl
R artifacts/jessie/Flask_Babel-0.11.1-py3-none-any.whl
R artifacts/jessie/Flask_Cache-0.13.1-py3-none-any.whl
R artifacts/jessie/Flask_Login-0.2.11-py3-none-any.whl
R artifacts/jessie/Flask_Migrate-2.0.3-py3-none-any.whl
D artifacts/jessie/Flask_OpenID-1.2.5-py2-none-any.whl
A artifacts/jessie/Flask_OpenID-1.2.5-py3-none-any.whl
R artifacts/jessie/Flask_SQLAlchemy-2.1-py3-none-any.whl
R artifacts/jessie/Flask_Script-2.0.5-py3-none-any.whl
R artifacts/jessie/Flask_Testing-0.6.2-py3-none-any.whl
R artifacts/jessie/Mako-1.0.7-py3-none-any.whl
D artifacts/jessie/Markdown-2.6.8-py2-none-any.whl
A artifacts/jessie/Markdown-2.6.8-py3-none-any.whl
D artifacts/jessie/MarkupSafe-1.0-cp27-none-linux_x86_64.whl
A artifacts/jessie/MarkupSafe-1.0-cp34-cp34m-linux_x86_64.whl
R artifacts/jessie/PyHive-0.5.0-py3-none-any.whl
R artifacts/jessie/SQLAlchemy-1.1.9-cp34-cp34m-linux_x86_64.whl
R artifacts/jessie/SQLAlchemy_Utils-0.32.16-py3-none-any.whl
M artifacts/jessie/WTForms-2.1-py2.py3-none-any.whl
D artifacts/jessie/Werkzeug-0.12.2-py2.py3-none-any.whl
A artifacts/jessie/Werkzeug-0.14.1-py2.py3-none-any.whl
M artifacts/jessie/alembic-0.9.6-py2.py3-none-any.whl
D artifacts/jessie/anyjson-0.3.3-py2-none-any.whl
A artifacts/jessie/anyjson-0.3.3-py3-none-any.whl
D artifacts/jessie/asn1crypto-0.23.0-py2.py3-none-any.whl
A artifacts/jessie/asn1crypto-0.24.0-py2.py3-none-any.whl
D artifacts/jessie/backports.ssl_match_hostname-3.5.0.1-py2-none-any.whl
D artifacts/jessie/billiard-3.3.0.23-cp27-none-linux_x86_64.whl
A artifacts/jessie/billiard-3.3.0.23-py3-none-any.whl
D artifacts/jessie/cffi-1.11.2-cp27-none-linux_x86_64.whl
A artifacts/jessie/cffi-1.11.2-cp34-cp34m-linux_x86_64.whl
D artifacts/jessie/cryptography-1.9-cp27-none-linux_x86_64.whl
A artifacts/jessie/cryptography-1.9-cp34-cp34m-linux_x86_64.whl
A artifacts/jessie/defusedxml-0.5.0-py2.py3-none-any.whl
D artifacts/jessie/docutils-0.14-py2-none-any.whl
A artifacts/jessie/docutils-0.14-py3-none-any.whl
D artifacts/jessie/enum34-1.1.6-py2-none-any.whl
M artifacts/jessie/flower-0.9.1-py2.py3-none-any.whl
R artifacts/jessie/future-0.16.0-py3-none-any.whl
D artifacts/jessie/futures-3.2.0-py2-none-any.whl
R artifacts/jessie/humanize-0.5.1-py3-none-any.whl
D artifacts/jessie/ipaddress-1.0.18-py2-none-any.whl
R artifacts/jessie/itsdangerous-0.24-py3-none-any.whl
D artifacts/jessie/mysqlclient-1.3.12-cp27-none-linux_x86_64.whl
A artifacts/jessie/mysqlclient-1.3.12-cp34-cp34m-linux_x86_64.whl
D artifacts/jessie/numpy-1.13.3-cp27-none-linux_x86_64.whl
A artifacts/jessie/numpy-1.13.3-cp34-cp34m-linux_x86_64.whl
D artifacts/jessie/pandas-0.20.3-cp27-none-linux_x86_64.whl
A artifacts/jessie/pandas-0.20.3-cp34-cp34m-linux_x86_64.whl
R artifacts/jessie/parsedatetime-2.0-py3-none-any.whl
M artifacts/jessie/pycparser-2.18-py2.py3-none-any.whl
A artifacts/jessie/python3_openid-3.1.0-py3-none-any.whl
A artifacts/jessie/python_dateutil-2.6.1-py2.py3-none-any.whl
D artifacts/jessie/python_editor-1.0.3-py2-none-any.whl
A artifacts/jessie/python_editor-1.0.3-py3-none-any.whl
D artifacts/jessie/python_openid-2.2.5-py2-none-any.whl
D artifacts/jessie/sasl-0.2.1-cp27-none-linux_x86_64.whl
A artifacts/jessie/sasl-0.2.1-cp34-cp34m-linux_x86_64.whl
D artifacts/jessie/simplejson-3.10.0-cp27-none-linux_x86_64.whl
A artifacts/jessie/simplejson-3.10.0-cp34-cp34m-linux_x86_64.whl
A artifacts/jessie/six-1.11.0-py2.py3-none-any.whl
R artifacts/jessie/superset-0.20.6-py3-none-any.whl
D artifacts/jessie/thrift-0.10.0-cp27-none-linux_x86_64.whl
A artifacts/jessie/thrift-0.10.0-cp34-cp34m-linux_x86_64.whl
D artifacts/jessie/thrift_sasl-0.3.0-py2-none-any.whl
A artifacts/jessie/thrift_sasl-0.3.0-py3-none-any.whl
R artifacts/jessie/tornado-4.2-cp34-cp34m-linux_x86_64.whl
M build_wheels.sh
M create_virtualenv.sh
69 files changed, 32 insertions(+), 10 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/analytics/superset/deploy 
refs/changes/97/402397/1

diff --git a/artifacts/jessie/Flask_AppBuilder-1.9.4-py2-none-any.whl 
b/artifacts/jessie/Flask_AppBuilder-1.9.4-py3-none-any.whl
similarity index 97%
rename from artifacts/jessie/Flask_AppBuilder-1.9.4-py2-none-any.whl
rename to artifacts/jessie/Flask_AppBuilder-1.9.4-py3-none-any.whl
index 446446d..7f8898c 100644
--- a/artifacts/jessie/Flask_AppBuilder-1.9.4-py2-none-any.whl
+++ b/artifacts/jessie/Flask_AppBuilder-1.9.4-py3-none-any.whl
Binary files differ
diff --git a/artifacts/jessie/Flask_Babel-0.11.1-py2-none-any.whl 
b/artifacts/jessie/Flask_Babel-0.11.1-py3-none-any.whl
similarity index 64%
rename from 

[MediaWiki-commits] [Gerrit] operations/puppet[production]: Create profile::hadoop::apt_pin to ensure zookeeper is the c...

2018-01-05 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/402370 )

Change subject: Create profile::hadoop::apt_pin to ensure zookeeper is the 
correct version
..

Create profile::hadoop::apt_pin to ensure zookeeper is the correct version

Change-Id: Ia5c1a15cc17cadc79272678491a6ed3c502053e2
---
A modules/profile/manifests/hadoop/apt_pin.pp
M modules/profile/manifests/hadoop/master.pp
M modules/profile/manifests/hadoop/master/standby.pp
M modules/profile/manifests/hadoop/worker.pp
4 files changed, 21 insertions(+), 3 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet 
refs/changes/70/402370/1
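For context, the `apt::pin { 'cloudera': ... }` resource in this patch boils down to an apt preferences entry like the sketch below (the file name and exact rendering are assumptions; the pin and priority come from the class). A Pin-Priority above 1000 makes apt prefer the pinned component even over already-installed versions.

```shell
# Sketch of the apt_preferences entry the apt::pin resource renders.
cat > cloudera.pref <<'EOF'
Package: *
Pin: release c=thirdparty/cloudera
Pin-Priority: 1001
EOF
grep -q 'Pin-Priority: 1001' cloudera.pref && echo pinned
```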

diff --git a/modules/profile/manifests/hadoop/apt_pin.pp 
b/modules/profile/manifests/hadoop/apt_pin.pp
new file mode 100644
index 000..15fa6f8
--- /dev/null
+++ b/modules/profile/manifests/hadoop/apt_pin.pp
@@ -0,0 +1,15 @@
+# == Class profile::hadoop::apt_pin
+# Pins thirdparty/cloudera packages in our apt repo
+# to a higher priority than others.  This mainly exists
+# because both Debian and CDH have versions of zookeeper
+# that conflict.  Where this class is included, the
+# CDH version of zookeeper (and any other conflicting packages)
+# will be preferred.
+#
+class profile::hadoop::apt_pin {
+apt::pin { 'cloudera':
+pin  => 'release c=thirdparty/cloudera',
+priority => '1001',
+before   => Class['cdh::hadoop'],
+}
+}
diff --git a/modules/profile/manifests/hadoop/master.pp 
b/modules/profile/manifests/hadoop/master.pp
index b846130..a5ea6dd 100644
--- a/modules/profile/manifests/hadoop/master.pp
+++ b/modules/profile/manifests/hadoop/master.pp
@@ -16,7 +16,8 @@
 $hadoop_user_groups   = 
hiera('profile::hadoop::master::hadoop_user_groups'),
 $statsd   = hiera('statsd'),
 ){
-
+# Hadoop masters need Zookeeper package from CDH, pin CDH over Debian.
+include ::profile::hadoop::apt_pin
 include ::profile::hadoop::common
 
 class { '::cdh::hadoop::master': }
diff --git a/modules/profile/manifests/hadoop/master/standby.pp 
b/modules/profile/manifests/hadoop/master/standby.pp
index ddbf1bb..a1583a3 100644
--- a/modules/profile/manifests/hadoop/master/standby.pp
+++ b/modules/profile/manifests/hadoop/master/standby.pp
@@ -13,7 +13,8 @@
 $hadoop_namenode_heapsize = 
hiera('profile::hadoop::standby::namenode_heapsize', 2048),
 $statsd   = hiera('statsd'),
 ) {
-
+# Hadoop masters need Zookeeper package from CDH, pin CDH over Debian.
+include ::profile::hadoop::apt_pin
 include ::profile::hadoop::common
 
 # Ensure that druid user exists on standby namenodes nodes.
diff --git a/modules/profile/manifests/hadoop/worker.pp 
b/modules/profile/manifests/hadoop/worker.pp
index 28b89db..c8161e1 100644
--- a/modules/profile/manifests/hadoop/worker.pp
+++ b/modules/profile/manifests/hadoop/worker.pp
@@ -12,7 +12,8 @@
 $ferm_srange= hiera('profile::hadoop::worker::ferm_srange', 
'$DOMAIN_NETWORKS'),
 $statsd = hiera('statsd'),
 ) {
-
+# Hadoop workers need Zookeeper package from CDH, pin CDH over Debian.
+include ::profile::hadoop::apt_pin
 include ::profile::hadoop::common
 
 # hive::client is nice to have for jobs launched

-- 
To view, visit https://gerrit.wikimedia.org/r/402370
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: Ia5c1a15cc17cadc79272678491a6ed3c502053e2
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 



[MediaWiki-commits] [Gerrit] operations...cdh[master]: Create cdh::zookeeper class and specify version

2018-01-05 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/402363 )

Change subject: Create cdh::zookeeper class and specify version
..

Create cdh::zookeeper class and specify version

CDH zookeeper version can conflict with Debian zookeeper version.
CDH should explicitly declare the version it needs.

Change-Id: I08433749ba8a9cb42b23735ac056e4f342bc276c
---
M manifests/hadoop/namenode.pp
M manifests/hadoop/nodemanager.pp
A manifests/zookeeper.pp
3 files changed, 20 insertions(+), 10 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet/cdh 
refs/changes/63/402363/1

diff --git a/manifests/hadoop/namenode.pp b/manifests/hadoop/namenode.pp
index 2a51125..3976263 100644
--- a/manifests/hadoop/namenode.pp
+++ b/manifests/hadoop/namenode.pp
@@ -17,11 +17,7 @@
 }
 
 if ($::cdh::hadoop::ha_enabled and $::cdh::hadoop::zookeeper_hosts) {
-if !defined(Package['zookeeper']) {
-package { 'zookeeper':
-ensure => 'installed'
-}
-}
+require ::cdh::zookeeper
 
 package { 'hadoop-hdfs-zkfc':
 ensure => 'installed',
diff --git a/manifests/hadoop/nodemanager.pp b/manifests/hadoop/nodemanager.pp
index fc2d396..c459e45 100644
--- a/manifests/hadoop/nodemanager.pp
+++ b/manifests/hadoop/nodemanager.pp
@@ -23,11 +23,7 @@
 # zookeeper package here explicitly.  This avoids
 # java.lang.NoClassDefFoundError: org/apache/zookeeper/KeeperException
 # errors.
-if !defined(Package['zookeeper']) {
-package { 'zookeeper':
-ensure => 'installed'
-}
-}
+require ::cdh::zookeeper
 
 # NodeManager (YARN TaskTracker)
 service { 'hadoop-yarn-nodemanager':
diff --git a/manifests/zookeeper.pp b/manifests/zookeeper.pp
new file mode 100644
index 000..8e46fcd
--- /dev/null
+++ b/manifests/zookeeper.pp
@@ -0,0 +1,18 @@
+# == Class cdh::zookeeper
+# Installs the CDH zookeeper library package.
+# This does not install a zookeeper server.  This class only
+# exists so we can be sure that the CDH version of zookeeper is installed,
+# and not the Debian version.
+#
+# == Parameters
+# [*ensure*]
+#   Ensure this version of zookeeper is installed.
+#   Default: '3.4.5+cdh5.10.0+104-1.cdh5.10.0.p0.71~jessie-cdh5.10.0'
+#
+class cdh::zookeeper(
+$ensure = '3.4.5+cdh5.10.0+104-1.cdh5.10.0.p0.71~jessie-cdh5.10.0'
+) {
+package { 'zookeeper':
+ensure => $ensure,
+}
+}

-- 
To view, visit https://gerrit.wikimedia.org/r/402363
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: I08433749ba8a9cb42b23735ac056e4f342bc276c
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet/cdh
Gerrit-Branch: master
Gerrit-Owner: Ottomata 



[MediaWiki-commits] [Gerrit] operations/puppet[production]: Use intermediate script for json refine jobs

2018-01-04 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/402072 )

Change subject: Use intermediate script for json refine jobs
..


Use intermediate script for json refine jobs

JsonRefine commands can be too long for a crontab entry if the table blacklist
or whitelist is very long.  This renders a script into /usr/local/bin
that will be used in the crontab.

Change-Id: I9dd99efa15a24185d69277c7fb1674e1a1b2594d
---
M modules/role/manifests/analytics_cluster/refinery/job/json_refine.pp
M modules/role/manifests/analytics_cluster/refinery/job/json_refine_job.pp
2 files changed, 15 insertions(+), 4 deletions(-)

Approvals:
  Ottomata: Looks good to me, approved
  jenkins-bot: Verified
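The workaround can be sketched like this (names and paths are hypothetical; the real job renders the full spark-submit command from the puppet template into /usr/local/bin): keep the long command in a short wrapper script and point cron at the wrapper.

```shell
# Render the long command into a wrapper so the crontab line stays short.
mkdir -p bin
cat > bin/refine-eventlogging_analytics <<'EOF'
#!/bin/bash
# The full spark-submit command (with its long table whitelist) lives
# here, safely beyond any crontab line-length concerns.
echo "spark-submit --class org.wikimedia.analytics.refinery.job.JsonRefine ..."
EOF
chmod +x bin/refine-eventlogging_analytics
bin/refine-eventlogging_analytics
```

The crontab entry then only needs to invoke the wrapper by name.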



diff --git 
a/modules/role/manifests/analytics_cluster/refinery/job/json_refine.pp 
b/modules/role/manifests/analytics_cluster/refinery/job/json_refine.pp
index f50189b..730daaf 100644
--- a/modules/role/manifests/analytics_cluster/refinery/job/json_refine.pp
+++ b/modules/role/manifests/analytics_cluster/refinery/job/json_refine.pp
@@ -8,8 +8,6 @@
 
 # Refine EventLogging Analytics (capsule based) data.
 role::analytics_cluster::refinery::job::json_refine_job { 
'eventlogging_analytics':
-# Temporarily disabled for T179625.
-ensure   => 'absent',
 input_base_path  => '/wmf/data/raw/eventlogging',
 input_regex  => 
'eventlogging_(.+)/hourly/(\\d+)/(\\d+)/(\\d+)/(\\d+)',
 input_capture=> 'table,year,month,day,hour',
diff --git 
a/modules/role/manifests/analytics_cluster/refinery/job/json_refine_job.pp 
b/modules/role/manifests/analytics_cluster/refinery/job/json_refine_job.pp
index d00b84c..db26c8d 100644
--- a/modules/role/manifests/analytics_cluster/refinery/job/json_refine_job.pp
+++ b/modules/role/manifests/analytics_cluster/refinery/job/json_refine_job.pp
@@ -58,15 +58,28 @@
 default => "--send-email-report --to-emails ${email_to}"
 }
 
-$command = "PYTHONPATH=${refinery_path}/python 
${refinery_path}/bin/is-yarn-app-running ${job_name} || /usr/bin/spark-submit 
--master yarn --deploy-mode cluster --driver-memory ${spark_driver_memory} 
--conf spark.dynamicAllocation.maxExecutors=${spark_max_executors} --files 
/etc/hive/conf/hive-site.xml --class 
org.wikimedia.analytics.refinery.job.JsonRefine --name ${job_name} 
${_refinery_job_jar} --parallelism ${parallelism} --since ${since} 
${whitelist_blacklist_opt} ${email_opts} --input-base-path ${input_base_path} 
--input-regex '${input_regex}' --input-capture '${input_capture}' 
--output-base-path ${output_base_path} --database ${output_database} >> 
${log_file} 2>&1"
+# The command here can end up being pretty long, especially if the table whitelist
+# or blacklist is long.  Crontabs have a line length limit, so we render this
+# command into a script and then install that as the cron job.
+$refine_command = "PYTHONPATH=${refinery_path}/python 
${refinery_path}/bin/is-yarn-app-running ${job_name} || /usr/bin/spark-submit 
--master yarn --deploy-mode cluster --driver-memory ${spark_driver_memory} 
--conf spark.dynamicAllocation.maxExecutors=${spark_max_executors} --files 
/etc/hive/conf/hive-site.xml --class 
org.wikimedia.analytics.refinery.job.JsonRefine --name ${job_name} 
${_refinery_job_jar} --parallelism ${parallelism} --since ${since} 
${whitelist_blacklist_opt} ${email_opts} --input-base-path ${input_base_path} 
--input-regex '${input_regex}' --input-capture '${input_capture}' 
--output-base-path ${output_base_path} --database ${output_database}"
+$refine_script = "/usr/local/bin/${job_name}"
+file { $refine_script:
+ensure  => $ensure,
+content => $refine_command,
+owner   => 'root',
+group   => 'root',
+mode=> '0555',
+}
 
 cron { $job_name:
-command  => $command,
+ensure   => $ensure,
+command  => "${refine_script} >> ${log_file} 2>&1",
 user => $user,
 hour => $hour,
 minute   => $minute,
 month=> $month,
 monthday => $monthday,
 weekday  => $weekday,
+require  => File[$refine_script],
 }
 }
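The pattern in this patch, rendering the long command into an executable script and pointing cron at the short script path, can be sketched outside Puppet. The following Python illustration is hypothetical (the job name, log path, and the long spark-submit command are stand-ins, not the refinery's actual values); it only demonstrates why the indirection keeps the crontab entry short:

```python
import os
import tempfile


def install_cron_script(job_name, command, bin_dir):
    """Render a long command into an executable script so that the
    crontab entry stays well under crontab's line-length limit."""
    script_path = os.path.join(bin_dir, job_name)
    with open(script_path, "w") as f:
        f.write(command + "\n")
    # Equivalent of Puppet's mode => '0555' (read/execute for all).
    os.chmod(script_path, 0o555)
    # The cron entry now references only the short script path plus logging.
    return "{0} >> /var/log/refinery/{1}.log 2>&1".format(script_path, job_name)


bin_dir = tempfile.mkdtemp()
# A deliberately long command, standing in for the full spark-submit invocation.
long_command = "/usr/bin/spark-submit --master yarn " + "--conf x=y " * 200
cron_entry = install_cron_script("json_refine_eventlogging_analytics",
                                 long_command, bin_dir)
```

The crontab line shrinks from thousands of characters to roughly the script path plus the logging redirection, which is the whole point of the change.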

-- 
To view, visit https://gerrit.wikimedia.org/r/402072
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: merged
Gerrit-Change-Id: I9dd99efa15a24185d69277c7fb1674e1a1b2594d
Gerrit-PatchSet: 3
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 
Gerrit-Reviewer: Giuseppe Lavagetto 
Gerrit-Reviewer: Ottomata 
Gerrit-Reviewer: jenkins-bot <>



[MediaWiki-commits] [Gerrit] operations/puppet[production]: Use intermediate script for json refine jobs

2018-01-04 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/402072 )

Change subject: Use intermediate script for json refine jobs
..

Use intermediate script for json refine jobs

JsonRefine commands can be too long for crontab if table blacklist
or whitelist is very long.  This renders a script into /usr/local/bin
that will be used in the crontab.

Change-Id: I9dd99efa15a24185d69277c7fb1674e1a1b2594d
---
M modules/role/manifests/analytics_cluster/refinery/job/json_refine_job.pp
1 file changed, 15 insertions(+), 2 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet 
refs/changes/72/402072/1

diff --git 
a/modules/role/manifests/analytics_cluster/refinery/job/json_refine_job.pp 
b/modules/role/manifests/analytics_cluster/refinery/job/json_refine_job.pp
index d00b84c..db26c8d 100644
--- a/modules/role/manifests/analytics_cluster/refinery/job/json_refine_job.pp
+++ b/modules/role/manifests/analytics_cluster/refinery/job/json_refine_job.pp
@@ -58,15 +58,28 @@
 default => "--send-email-report --to-emails ${email_to}"
 }
 
-$command = "PYTHONPATH=${refinery_path}/python 
${refinery_path}/bin/is-yarn-app-running ${job_name} || /usr/bin/spark-submit 
--master yarn --deploy-mode cluster --driver-memory ${spark_driver_memory} 
--conf spark.dynamicAllocation.maxExecutors=${spark_max_executors} --files 
/etc/hive/conf/hive-site.xml --class 
org.wikimedia.analytics.refinery.job.JsonRefine --name ${job_name} 
${_refinery_job_jar} --parallelism ${parallelism} --since ${since} 
${whitelist_blacklist_opt} ${email_opts} --input-base-path ${input_base_path} 
--input-regex '${input_regex}' --input-capture '${input_capture}' 
--output-base-path ${output_base_path} --database ${output_database} >> 
${log_file} 2>&1"
+# The command here can end up being pretty long, especially if the table whitelist
+# or blacklist is long.  Crontabs have a line length limit, so we render this
+# command into a script and then install that as the cron job.
+$refine_command = "PYTHONPATH=${refinery_path}/python 
${refinery_path}/bin/is-yarn-app-running ${job_name} || /usr/bin/spark-submit 
--master yarn --deploy-mode cluster --driver-memory ${spark_driver_memory} 
--conf spark.dynamicAllocation.maxExecutors=${spark_max_executors} --files 
/etc/hive/conf/hive-site.xml --class 
org.wikimedia.analytics.refinery.job.JsonRefine --name ${job_name} 
${_refinery_job_jar} --parallelism ${parallelism} --since ${since} 
${whitelist_blacklist_opt} ${email_opts} --input-base-path ${input_base_path} 
--input-regex '${input_regex}' --input-capture '${input_capture}' 
--output-base-path ${output_base_path} --database ${output_database}"
+$refine_script = "/usr/local/bin/${job_name}"
+file { $refine_script:
+ensure  => $ensure,
+content => $refine_command,
+owner   => 'root',
+group   => 'root',
+mode=> '0555',
+}
 
 cron { $job_name:
-command  => $command,
+ensure   => $ensure,
+command  => "${refine_script} >> ${log_file} 2>&1",
 user => $user,
 hour => $hour,
 minute   => $minute,
 month=> $month,
 monthday => $monthday,
 weekday  => $weekday,
+require  => File[$refine_script],
 }
 }

-- 
To view, visit https://gerrit.wikimedia.org/r/402072
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: I9dd99efa15a24185d69277c7fb1674e1a1b2594d
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 



[MediaWiki-commits] [Gerrit] operations/puppet[production]: Refine mediawiki job queue events into Hive event database

2018-01-04 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/402064 )

Change subject: Refine mediawiki job queue events into Hive event database
..


Refine mediawiki job queue events into Hive event database

Change-Id: I279aa9046d4a632183894d9d21893307962d4621
---
M modules/role/manifests/analytics_cluster/refinery/job/json_refine.pp
1 file changed, 42 insertions(+), 0 deletions(-)

Approvals:
  Ottomata: Verified; Looks good to me, approved



diff --git 
a/modules/role/manifests/analytics_cluster/refinery/job/json_refine.pp 
b/modules/role/manifests/analytics_cluster/refinery/job/json_refine.pp
index d1c7a3f..f50189b 100644
--- a/modules/role/manifests/analytics_cluster/refinery/job/json_refine.pp
+++ b/modules/role/manifests/analytics_cluster/refinery/job/json_refine.pp
@@ -30,4 +30,46 @@
 table_blacklist  => 
'^mediawiki_page_properties_change|mediawiki_recentchange$',
 minute   => 20,
 }
+
+# Refine Mediawiki job queue events (from EventBus).
+# This could be combined into the same EventBus refine job above, but it is nice to
+# have them separated, as the job queue schemas are legacy and can be problematic.
+
+# $problematic_jobs will not be refined.
+# These have inconsistent schemas that cause refinement to fail.
+$problematic_jobs = [
+'EchoNotificationJob',
+'EchoNotificationDeleteJob',
+'TranslationsUpdateJob',
+'MessageGroupStatesUpdaterJob',
+'InjectRCRecords',
+'cirrusSearchDeleteArchive',
+'enqueue',
+'htmlCacheUpdate',
+'LocalRenameUserJob',
+'RecordLintJob',
+'wikibase_addUsagesForPage',
+'refreshLinks',
+'cirrusSearchCheckerJob',
+'MassMessageSubmitJob',
+'refreshLinksPrioritized',
+'TranslatablePageMoveJob',
+'ORESFetchScoreJob',
+'PublishStashedFile',
+'CentralAuthCreateLocalAccountJob',
+'gwtoolsetUploadMediafileJob',
+]
+$table_blacklist = sprintf('.*(%s)$', join($problematic_jobs, '|'))
+
+role::analytics_cluster::refinery::job::json_refine_job { 
'eventlogging_eventbus_job_queue':
+# This is imported by camus_job { 'mediawiki_job': }
+input_base_path  => '/wmf/data/raw/mediawiki_job',
+# 'datacenter' is extracted from the input path into a Hive table partition
+input_regex  => 
'.*(eqiad|codfw)_(.+)/hourly/(\\d+)/(\\d+)/(\\d+)/(\\d+)',
+input_capture=> 'datacenter,table,year,month,day,hour',
+output_base_path => '/wmf/data/event',
+output_database  => 'event',
+table_blacklist  => $table_blacklist,
+minute   => 25,
+}
 }
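The sprintf/join construction above produces a single anchored alternation used as the table blacklist. A quick Python equivalent (job list abridged to three entries; the real patch enumerates about twenty) shows how the resulting pattern matches table names that end in a problematic job type:

```python
import re

# Abridged stand-in for the patch's $problematic_jobs list.
problematic_jobs = ['EchoNotificationJob', 'RecordLintJob', 'refreshLinks']

# Equivalent of: sprintf('.*(%s)$', join($problematic_jobs, '|'))
table_blacklist = '.*({0})$'.format('|'.join(problematic_jobs))
blacklist_re = re.compile(table_blacklist)

# Tables ending in a listed job name are blacklisted; others pass through.
blocked = bool(blacklist_re.match('mediawiki_job_RecordLintJob'))
allowed = not blacklist_re.match('mediawiki_job_cirrusSearchLinksUpdate')
```

The trailing `$` anchor is what restricts the blacklist to table-name suffixes, so only the named job tables are skipped by refinement.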

-- 
To view, visit https://gerrit.wikimedia.org/r/402064
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: merged
Gerrit-Change-Id: I279aa9046d4a632183894d9d21893307962d4621
Gerrit-PatchSet: 2
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 
Gerrit-Reviewer: Giuseppe Lavagetto 
Gerrit-Reviewer: Ottomata 
Gerrit-Reviewer: jenkins-bot <>



[MediaWiki-commits] [Gerrit] operations/puppet[production]: Refine mediawiki job queue events into Hive event database

2018-01-04 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/402064 )

Change subject: Refine mediawiki job queue events into Hive event database
..

Refine mediawiki job queue events into Hive event database

Change-Id: I279aa9046d4a632183894d9d21893307962d4621
---
M modules/role/manifests/analytics_cluster/refinery/job/json_refine.pp
1 file changed, 42 insertions(+), 0 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet 
refs/changes/64/402064/1

diff --git 
a/modules/role/manifests/analytics_cluster/refinery/job/json_refine.pp 
b/modules/role/manifests/analytics_cluster/refinery/job/json_refine.pp
index d1c7a3f..f50189b 100644
--- a/modules/role/manifests/analytics_cluster/refinery/job/json_refine.pp
+++ b/modules/role/manifests/analytics_cluster/refinery/job/json_refine.pp
@@ -30,4 +30,46 @@
 table_blacklist  => 
'^mediawiki_page_properties_change|mediawiki_recentchange$',
 minute   => 20,
 }
+
+# Refine Mediawiki job queue events (from EventBus).
+# This could be combined into the same EventBus refine job above, but it is nice to
+# have them separated, as the job queue schemas are legacy and can be problematic.
+
+# $problematic_jobs will not be refined.
+# These have inconsistent schemas that cause refinement to fail.
+$problematic_jobs = [
+'EchoNotificationJob',
+'EchoNotificationDeleteJob',
+'TranslationsUpdateJob',
+'MessageGroupStatesUpdaterJob',
+'InjectRCRecords',
+'cirrusSearchDeleteArchive',
+'enqueue',
+'htmlCacheUpdate',
+'LocalRenameUserJob',
+'RecordLintJob',
+'wikibase_addUsagesForPage',
+'refreshLinks',
+'cirrusSearchCheckerJob',
+'MassMessageSubmitJob',
+'refreshLinksPrioritized',
+'TranslatablePageMoveJob',
+'ORESFetchScoreJob',
+'PublishStashedFile',
+'CentralAuthCreateLocalAccountJob',
+'gwtoolsetUploadMediafileJob',
+]
+$table_blacklist = sprintf('.*(%s)$', join($problematic_jobs, '|'))
+
+role::analytics_cluster::refinery::job::json_refine_job { 
'eventlogging_eventbus_job_queue':
+# This is imported by camus_job { 'mediawiki_job': }
+input_base_path  => '/wmf/data/raw/mediawiki_job',
+# 'datacenter' is extracted from the input path into a Hive table partition
+input_regex  => 
'.*(eqiad|codfw)_(.+)/hourly/(\\d+)/(\\d+)/(\\d+)/(\\d+)',
+input_capture=> 'datacenter,table,year,month,day,hour',
+output_base_path => '/wmf/data/event',
+output_database  => 'event',
+table_blacklist  => $table_blacklist,
+minute   => 25,
+}
 }
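The input_regex/input_capture pair above maps path components onto Hive partition columns. A small Python check of that capture logic (the sample path is invented, and the doubled backslashes from the Puppet string become single `\d` in a Python raw string):

```python
import re

input_regex = r'.*(eqiad|codfw)_(.+)/hourly/(\d+)/(\d+)/(\d+)/(\d+)'
input_capture = 'datacenter,table,year,month,day,hour'

# Hypothetical raw-import path for one hour of one job topic.
path = '/wmf/data/raw/mediawiki_job/eqiad_htmlCacheUpdate/hourly/2018/01/04/23'

m = re.match(input_regex, path)
# Zip the capture-group names with the matched groups to get the partition spec.
partition = dict(zip(input_capture.split(','), m.groups()))
```

Each capture group becomes one partition key, which is how a directory layout turns into `datacenter=.../table=.../year=...` style partitions downstream.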

-- 
To view, visit https://gerrit.wikimedia.org/r/402064
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: I279aa9046d4a632183894d9d21893307962d4621
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 



[MediaWiki-commits] [Gerrit] operations/puppet[production]: cache_canary: use main Kafka cluster(s)

2018-01-04 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/402061 )

Change subject: cache_canary: use main Kafka cluster(s)
..


cache_canary: use main Kafka cluster(s)

The change introduced in 31874a8 for cache_text should be applied to
cache_canary too.

Change-Id: Iec83fb3acb34806409c40751fe11769824adbc25
---
M hieradata/role/common/cache/canary.yaml
1 file changed, 1 insertion(+), 0 deletions(-)

Approvals:
  Ottomata: Looks good to me, approved
  jenkins-bot: Verified



diff --git a/hieradata/role/common/cache/canary.yaml 
b/hieradata/role/common/cache/canary.yaml
index 1e844c0..40bb4c2 100644
--- a/hieradata/role/common/cache/canary.yaml
+++ b/hieradata/role/common/cache/canary.yaml
@@ -94,3 +94,4 @@
 # Profile::cache::ssl::unified
 profile::cache::ssl::unified::monitoring: true
 profile::cache::ssl::unified::letsencrypt: false
+profile::cache::kafka::statsv::kafka_cluster_name: main-eqiad

-- 
To view, visit https://gerrit.wikimedia.org/r/402061
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: merged
Gerrit-Change-Id: Iec83fb3acb34806409c40751fe11769824adbc25
Gerrit-PatchSet: 2
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ema 
Gerrit-Reviewer: Ottomata 
Gerrit-Reviewer: jenkins-bot <>



[MediaWiki-commits] [Gerrit] operations/puppet[production]: Force more exact protocol version for varnishkafka statsv

2018-01-03 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/401778 )

Change subject: Force more exact protocol version for varnishkafka statsv
..


Force more exact protocol version for varnishkafka statsv

Bug: T179093
Bug: T172681

Change-Id: I0c1b7ac86e70b85700ee888253f2bab38554808d
---
M modules/profile/manifests/cache/kafka/statsv.pp
1 file changed, 3 insertions(+), 1 deletion(-)

Approvals:
  Ottomata: Verified; Looks good to me, approved



diff --git a/modules/profile/manifests/cache/kafka/statsv.pp 
b/modules/profile/manifests/cache/kafka/statsv.pp
index ccacf42..bae8872 100644
--- a/modules/profile/manifests/cache/kafka/statsv.pp
+++ b/modules/profile/manifests/cache/kafka/statsv.pp
@@ -38,7 +38,9 @@
 varnish_opts=> { 'q' => 'ReqURL ~ "^/beacon/statsv\?"' 
},
 # -1 means all brokers in the ISR must ACK this request.
 topic_request_required_acks => '-1',
-force_protocol_version  => $kafka_config['api_version'],
+# Force more exact protocol version.
+# TODO: can we change this in common.yaml kafka_clusters hash?
+force_protocol_version  => '0.9.0.1',
 }
 
 # Make sure varnishes are configured and started for the first time

-- 
To view, visit https://gerrit.wikimedia.org/r/401778
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: merged
Gerrit-Change-Id: I0c1b7ac86e70b85700ee888253f2bab38554808d
Gerrit-PatchSet: 2
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 
Gerrit-Reviewer: Ottomata 
Gerrit-Reviewer: jenkins-bot <>



[MediaWiki-commits] [Gerrit] operations/puppet[production]: Force more exact protocol version for varnishkafka statsv

2018-01-03 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/401778 )

Change subject: Force more exact protocol version for varnishkafka statsv
..

Force more exact protocol version for varnishkafka statsv

Bug: T179093 T172681
Change-Id: I0c1b7ac86e70b85700ee888253f2bab38554808d
---
M modules/profile/manifests/cache/kafka/statsv.pp
1 file changed, 3 insertions(+), 1 deletion(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet 
refs/changes/78/401778/1

diff --git a/modules/profile/manifests/cache/kafka/statsv.pp 
b/modules/profile/manifests/cache/kafka/statsv.pp
index ccacf42..bae8872 100644
--- a/modules/profile/manifests/cache/kafka/statsv.pp
+++ b/modules/profile/manifests/cache/kafka/statsv.pp
@@ -38,7 +38,9 @@
 varnish_opts=> { 'q' => 'ReqURL ~ "^/beacon/statsv\?"' 
},
 # -1 means all brokers in the ISR must ACK this request.
 topic_request_required_acks => '-1',
-force_protocol_version  => $kafka_config['api_version'],
+# Force more exact protocol version.
+# TODO: can we change this in common.yaml kafka_clusters hash?
+force_protocol_version  => '0.9.0.1',
 }
 
 # Make sure varnishes are configured and started for the first time

-- 
To view, visit https://gerrit.wikimedia.org/r/401778
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: I0c1b7ac86e70b85700ee888253f2bab38554808d
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 



[MediaWiki-commits] [Gerrit] operations...prometheus-jmx-exporter[master]: debian: tweak gbp config

2018-01-03 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/401759 )

Change subject: debian: tweak gbp config
..


debian: tweak gbp config

Remove default and not needed flags

Change-Id: I65a59d9d0dfde5b773a678c888f3d870aaa452c4
---
M debian/gbp.conf
1 file changed, 0 insertions(+), 4 deletions(-)

Approvals:
  Ottomata: Looks good to me, approved
  jenkins-bot: Verified



diff --git a/debian/gbp.conf b/debian/gbp.conf
index 2795046..9cf0067 100644
--- a/debian/gbp.conf
+++ b/debian/gbp.conf
@@ -1,7 +1,3 @@
 [buildpackage]
-upstream-tree=tag
-upstream-branch=master
 debian-branch=debian
 upstream-tag=parent-%(version)s
-debian-tag=debian/%(version)s
-builder=GIT_PBUILDER_AUTOCONF=no git-pbuilder -sa

-- 
To view, visit https://gerrit.wikimedia.org/r/401759
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: merged
Gerrit-Change-Id: I65a59d9d0dfde5b773a678c888f3d870aaa452c4
Gerrit-PatchSet: 1
Gerrit-Project: operations/debs/prometheus-jmx-exporter
Gerrit-Branch: master
Gerrit-Owner: Filippo Giunchedi 
Gerrit-Reviewer: Ottomata 
Gerrit-Reviewer: jenkins-bot <>



[MediaWiki-commits] [Gerrit] operations...prometheus-jmx-exporter[master]: debian: force maven repo directory

2018-01-03 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/401758 )

Change subject: debian: force maven repo directory
..


debian: force maven repo directory

When running in cowbuilder $HOME is set to /nonexistent, and maven will fail to
create $HOME/.m2

Change-Id: Ia730eff34793f591b4d5985052b8d572dc09e1eb
---
M debian/rules
1 file changed, 1 insertion(+), 1 deletion(-)

Approvals:
  Ottomata: Looks good to me, approved
  jenkins-bot: Verified



diff --git a/debian/rules b/debian/rules
index 7e8aeb7..bcd19c2 100755
--- a/debian/rules
+++ b/debian/rules
@@ -11,7 +11,7 @@
 
 
 override_dh_auto_build:
-   mvn package -DskipTests
+   mvn package -DskipTests -Dmaven.repo.local=/tmp/.m2
dh_auto_build
 
 override_dh_install:

-- 
To view, visit https://gerrit.wikimedia.org/r/401758
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: merged
Gerrit-Change-Id: Ia730eff34793f591b4d5985052b8d572dc09e1eb
Gerrit-PatchSet: 1
Gerrit-Project: operations/debs/prometheus-jmx-exporter
Gerrit-Branch: master
Gerrit-Owner: Filippo Giunchedi 
Gerrit-Reviewer: Ottomata 
Gerrit-Reviewer: jenkins-bot <>



[MediaWiki-commits] [Gerrit] operations/puppet[production]: Move statsv varnishkafka and service to use main Kafka clust...

2018-01-03 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/391705 )

Change subject: Move statsv varnishkafka and service to use main Kafka 
cluster(s)
..


Move statsv varnishkafka and service to use main Kafka cluster(s)

This gives statsv active/passive multi-DC support.

After much IRC discussion, this allows for an active/passive statsv
backed by the main Kafka clusters in eqiad and codfw.  varnishkafkas
in all DCs produce statsv messages to the Kafka cluster name specified
by profile::cache::kafka::statsv::kafka_cluster_name, which in production
is set to 'main-eqiad'.

statsv consumer instances will run in eqiad and codfw and consume
from their local main Kafka clusters.  Since statsd is active/passive,
those statsv consumer instances will produce to the same active statsd
instance, independent of which datacenter they run in.  I.e.
if statsd is active in eqiad, both statsv in eqiad (consuming
from main-eqiad) and statsv in codfw (consuming from main-codfw)
will produce to statsd in eqiad.

However, since all statsv varnishkafkas produce to the same
Kafka cluster in an 'active' DC, only one statsv instance
will have any messages to consume at any given time.

If you plan to move the active statsd instance away from
main-eqiad for an extended (permanent?) period of time, you should
also change the value of profile::cache::kafka::statsv::kafka_cluster_name.

Or, if you need to do maintenance on statsv for an extended period of time,
you could route all varnishkafka produced statsv messages to e.g. main-codfw,
and shut down the eqiad statsv consumers, and still get statsv messages
in statsd.

Bug: T179093
Change-Id: I6c566c19fcdab004eec21384e6a5c136b3cf699c
---
M hieradata/role/common/cache/text.yaml
A modules/profile/manifests/cache/kafka/statsv.pp
M modules/profile/manifests/cache/text.pp
M modules/profile/manifests/webperf.pp
M modules/role/lib/puppet/parser/functions/kafka_cluster_name.rb
D modules/role/manifests/cache/kafka/statsv.pp
M modules/role/manifests/cache/text.pp
M modules/webperf/manifests/statsv.pp
M modules/webperf/templates/statsv.service.erb
9 files changed, 107 insertions(+), 84 deletions(-)

Approvals:
  Ottomata: Looks good to me, approved
  jenkins-bot: Verified



diff --git a/hieradata/role/common/cache/text.yaml 
b/hieradata/role/common/cache/text.yaml
index 23ecc41..40e5c5d 100644
--- a/hieradata/role/common/cache/text.yaml
+++ b/hieradata/role/common/cache/text.yaml
@@ -99,3 +99,11 @@
 # Profile::cache::ssl::unified
 profile::cache::ssl::unified::monitoring: true
 profile::cache::ssl::unified::letsencrypt: false
+
+# This should match an entry in the kafka_clusters hash (defined in common.yaml).
+# We use the fully qualified kafka cluster name (with datacenter), because we want
+# to route all statsv -> statsd traffic to the datacenter that hosts the master
+# statsd instance.  If the active statsd instance changes to codfw (for an extended period of time)
+# should probably change this to main-codfw.  If you don't things will probably be fine,
+# but statsv will have to send messages over UDP cross-DC to the active statsd instance.
+profile::cache::kafka::statsv::kafka_cluster_name: main-eqiad
diff --git a/modules/profile/manifests/cache/kafka/statsv.pp 
b/modules/profile/manifests/cache/kafka/statsv.pp
new file mode 100644
index 000..ccacf42
--- /dev/null
+++ b/modules/profile/manifests/cache/kafka/statsv.pp
@@ -0,0 +1,62 @@
+# === Class profile::cache::kafka::statsv
+#
+# Sets up a varnishkafka logging endpoint for collecting
+# application level metrics. We are calling this system
+# statsv, as it is similar to statsd, but uses varnish
+# as its logging endpoint.
+#
+# === Parameters
+#
+# [*cache_cluster*]
+#   Used in when naming varnishkafka metrics.
+#   Default:  hiera('cache::cluster')
+#
+# [*kafka_cluster_name*]
+#   The name of the kafka cluster to use from the kafka_clusters hiera variable.
+#   Since only one statsd instance is active at any given time, you should probably
+#   set this explicitly to a fully qualified kafka cluster name (with DC suffix) that
+#   is located in the same DC as the active statsd instance.
+#
+class profile::cache::kafka::statsv(
+$cache_cluster  = hiera('cache::cluster'),
+$kafka_cluster_name = hiera('profile::cache::kafka::statsv::kafka_cluster_name')
+)
+{
+$kafka_config  = kafka_config($kafka_cluster_name)
+$kafka_brokers = $kafka_config['brokers']['array']
+
+$format  = "%{fake_tag0@hostname?${::fqdn}}x %{%FT%T@dt}t %{X-Client-IP@ip}o %{@uri_path}U %{@uri_query}q %{User-Agent@user_agent}i"
+
+varnishkafka::instance { 'statsv':
+brokers => $kafka_brokers,
+format  => $format,
+format_type => 'json',
+topic   => 'statsv',
+varnish_name=> 'frontend',
+

[MediaWiki-commits] [Gerrit] analytics/statsv[master]: Have to subscribe if using multiple kafka topics.

2018-01-03 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/401750 )

Change subject: Have to subscribe if using multiple kafka topics.
..


Have to subscribe if using multiple kafka topics.

Bug: T179093
Change-Id: I33be897a9170836de6d55fa3cb7a163dc988fce9
---
M statsv.py
1 file changed, 1 insertion(+), 1 deletion(-)

Approvals:
  Ottomata: Verified; Looks good to me, approved



diff --git a/statsv.py b/statsv.py
index 627d79b..9d0b591 100644
--- a/statsv.py
+++ b/statsv.py
@@ -204,7 +204,6 @@
 
 # Create our Kafka Consumer instance.
 consumer = KafkaConsumer(
-kafka_topics,
 bootstrap_servers=kafka_bootstrap_servers,
 group_id=kafka_consumer_group,
 auto_offset_reset='latest',
@@ -213,6 +212,7 @@
 enable_auto_commit=False,
 consumer_timeout_ms=kafka_consumer_timeout_seconds * 1000
 )
+consumer.subscribe(kafka_topics)
 
 watchdog = Watchdog()
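The reason the topic list had to move from the constructor to subscribe(): kafka-python's KafkaConsumer takes its topics as varargs (`KafkaConsumer(*topics, **configs)`), so passing a Python list as the first positional argument makes the whole list a single bogus "topic". A broker-free sketch of the two call shapes, using a stand-in class rather than the real client:

```python
class FakeConsumer:
    """Stand-in mimicking kafka-python's KafkaConsumer(*topics, **configs)
    signature; it is not the real client, only an illustration."""

    def __init__(self, *topics, **configs):
        self.topics = list(topics)
        self.configs = configs

    def subscribe(self, topics):
        # subscribe() accepts a list, replacing any constructor topics.
        self.topics = list(topics)


kafka_topics = ['eqiad.statsv', 'codfw.statsv']

# Passing the list positionally nests it as one element of *topics:
wrong = FakeConsumer(kafka_topics, group_id='statsd')

# Subscribing after construction yields the intended flat topic list:
right = FakeConsumer(group_id='statsd')
right.subscribe(kafka_topics)
```

With the varargs constructor you would have to splat the list (`FakeConsumer(*kafka_topics)`); calling subscribe() with the list, as the patch does, is the clearer fix.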
 

-- 
To view, visit https://gerrit.wikimedia.org/r/401750
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: merged
Gerrit-Change-Id: I33be897a9170836de6d55fa3cb7a163dc988fce9
Gerrit-PatchSet: 1
Gerrit-Project: analytics/statsv
Gerrit-Branch: master
Gerrit-Owner: Ottomata 
Gerrit-Reviewer: Ottomata 



[MediaWiki-commits] [Gerrit] analytics/statsv[master]: Have to subscribe if using multiple kafka topics.

2018-01-03 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/401750 )

Change subject: Have to subscribe if using multiple kafka topics.
..

Have to subscribe if using multiple kafka topics.

Bug: T179093
Change-Id: I33be897a9170836de6d55fa3cb7a163dc988fce9
---
M statsv.py
1 file changed, 1 insertion(+), 1 deletion(-)


  git pull ssh://gerrit.wikimedia.org:29418/analytics/statsv 
refs/changes/50/401750/1

diff --git a/statsv.py b/statsv.py
index 627d79b..9d0b591 100644
--- a/statsv.py
+++ b/statsv.py
@@ -204,7 +204,6 @@
 
 # Create our Kafka Consumer instance.
 consumer = KafkaConsumer(
-kafka_topics,
 bootstrap_servers=kafka_bootstrap_servers,
 group_id=kafka_consumer_group,
 auto_offset_reset='latest',
@@ -213,6 +212,7 @@
 enable_auto_commit=False,
 consumer_timeout_ms=kafka_consumer_timeout_seconds * 1000
 )
+consumer.subscribe(kafka_topics)
 
 watchdog = Watchdog()
 

-- 
To view, visit https://gerrit.wikimedia.org/r/401750
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: I33be897a9170836de6d55fa3cb7a163dc988fce9
Gerrit-PatchSet: 1
Gerrit-Project: analytics/statsv
Gerrit-Branch: master
Gerrit-Owner: Ottomata 



[MediaWiki-commits] [Gerrit] analytics/statsv[master]: Support consumption from multiple topics

2018-01-03 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/391703 )

Change subject: Support consumption from multiple topics
..


Support consumption from multiple topics

This will allow for DC prefixed topics for statsv

Bug: T179093
Change-Id: I1ee995a5b22a84e5c84f356494c22df3a4b0e03e
---
M statsv.py
1 file changed, 5 insertions(+), 4 deletions(-)

Approvals:
  Krinkle: Looks good to me, approved
  Ottomata: Verified



diff --git a/statsv.py b/statsv.py
index afe7e9a..627d79b 100644
--- a/statsv.py
+++ b/statsv.py
@@ -39,8 +39,8 @@
 description='statsv - consumes from varnishkafka Kafka topic and writes 
metrics to statsd'
 )
 ap.add_argument(
-'--topic',
-help='Kafka topic from which to consume.  Default: statsv',
+'--topics',
+help='Comma separated list of Kafka topics from which to consume.  
Default: statsv',
 default='statsv'
 )
 ap.add_argument(
@@ -110,7 +110,8 @@
 worker_count = args.workers
 
 kafka_bootstrap_servers = tuple(args.brokers.split(','))
-kafka_topic = args.topic
+kafka_topics = args.topics.split(',')
+
 kafka_consumer_group = args.consumer_group
 kafka_consumer_timeout_seconds = args.consumer_timeout_seconds
 
@@ -203,7 +204,7 @@
 
 # Create our Kafka Consumer instance.
 consumer = KafkaConsumer(
-kafka_topic,
+kafka_topics,
 bootstrap_servers=kafka_bootstrap_servers,
 group_id=kafka_consumer_group,
 auto_offset_reset='latest',
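The comma-separated `--topics` flag introduced in this patch can be exercised in isolation (argparse only, no Kafka dependency); this sketch mirrors the patched argument definition:

```python
import argparse


def parse_topics(argv):
    """Parse the --topics flag the same way the patched statsv.py does."""
    ap = argparse.ArgumentParser()
    ap.add_argument(
        '--topics',
        help='Comma separated list of Kafka topics from which to consume.  '
             'Default: statsv',
        default='statsv',
    )
    args = ap.parse_args(argv)
    # Splitting on commas yields a list suitable for consumer.subscribe().
    return args.topics.split(',')


topics_default = parse_topics([])
topics_multi = parse_topics(['--topics', 'eqiad.statsv,codfw.statsv'])
```

The single-topic default still works unchanged, while DC-prefixed topic pairs become a two-element list for the consumer.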

-- 
To view, visit https://gerrit.wikimedia.org/r/391703
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: merged
Gerrit-Change-Id: I1ee995a5b22a84e5c84f356494c22df3a4b0e03e
Gerrit-PatchSet: 1
Gerrit-Project: analytics/statsv
Gerrit-Branch: master
Gerrit-Owner: Ottomata 
Gerrit-Reviewer: Krinkle 
Gerrit-Reviewer: Ottomata 



[MediaWiki-commits] [Gerrit] operations/puppet[production]: Set cipher.suites and ssl.enabled.protocols for jumbo and va...

2018-01-03 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/399700 )

Change subject: Set cipher.suites and ssl.enabled.protocols for jumbo and 
varnishkafka (canary)
..


Set cipher.suites and ssl.enabled.protocols for jumbo and varnishkafka (canary)

Also remove hardcoded Kafka brokers from 
profile::cache::kafka::webrequest::jumbo
now that kafka_config.rb supports 'ssl_array' entry

Bug: T167304
Change-Id: I39e62f7d13a9b379f45cfb9c6bad8b7a6ebd1e88
---
M modules/confluent/manifests/kafka/broker.pp
M modules/confluent/templates/kafka/server.properties.erb
M modules/profile/manifests/cache/kafka/webrequest/jumbo.pp
M modules/profile/manifests/kafka/broker.pp
M modules/varnishkafka
5 files changed, 29 insertions(+), 13 deletions(-)

Approvals:
  Ottomata: Looks good to me, approved
  jenkins-bot: Verified



diff --git a/modules/confluent/manifests/kafka/broker.pp 
b/modules/confluent/manifests/kafka/broker.pp
index 0cfb32a..10037da 100644
--- a/modules/confluent/manifests/kafka/broker.pp
+++ b/modules/confluent/manifests/kafka/broker.pp
@@ -37,9 +37,17 @@
 # [*ssl_truststore_password*]
 #   The password for the trust store file.  Default: undef
 #
-# [*ssl_client_auth*]
+# [*ssl_client_auth]
 #   Configures kafka broker to request client authentication.  Must be one of
 #   'none', 'requested', or 'required'.  Default: undef
+#
+# [*ssl_enabled_protocols*]
+#   Comma separated string of enabled ssl protocols that will be accepted from clients
+#  e.g. TLSv1.2,TLSv1.1,TLSv1.  Default: undef
+#
+# [*ssl_cipher_suites*]
+#   Comma separated string of cipher suites that will be accepted from clients.
+#   Default: undef
 #
 # [*log_dirs*]
 #   Array of directories in which the broker will store its received message
@@ -247,6 +255,8 @@
 $ssl_truststore_location = undef,
 $ssl_truststore_password = undef,
 $ssl_client_auth = undef,
+$ssl_enabled_protocols   = undef,
+$ssl_cipher_suites   = undef,
 
 $log_dirs= ['/var/spool/kafka'],
 
diff --git a/modules/confluent/templates/kafka/server.properties.erb 
b/modules/confluent/templates/kafka/server.properties.erb
index 624e9fb..e8feb80 100644
--- a/modules/confluent/templates/kafka/server.properties.erb
+++ b/modules/confluent/templates/kafka/server.properties.erb
@@ -78,6 +78,12 @@
 <% if @ssl_truststore_password -%>
 ssl.truststore.password=<%= @ssl_truststore_password %>
 <% end -%>
+<% if @ssl_enabled_protocols -%>
+ssl.enabled.protocols=<%= @ssl_enabled_protocols %>
+<% end -%>
+<% if @ssl_cipher_suites -%>
+ssl.cipher.suites=<%= @ssl_cipher_suites %>
+<% end -%>
 
 <% if @ssl_client_auth -%>
 ssl.client.auth=<%= @ssl_client_auth %>
diff --git a/modules/profile/manifests/cache/kafka/webrequest/jumbo.pp 
b/modules/profile/manifests/cache/kafka/webrequest/jumbo.pp
index f7f5218..c97f74b 100644
--- a/modules/profile/manifests/cache/kafka/webrequest/jumbo.pp
+++ b/modules/profile/manifests/cache/kafka/webrequest/jumbo.pp
@@ -23,17 +23,8 @@
 ) {
 $config = kafka_config('jumbo-eqiad')
 
-# FIXME: Temporary workaround to force varnishkafka to use the TLS port of
-# Kafka Jumbo. This will probably be handled in the future via kafka_config.rb
-#$kafka_brokers = $config['brokers']['array']
-$kafka_brokers = [
-'kafka-jumbo1001.eqiad.wmnet:9093',
-'kafka-jumbo1002.eqiad.wmnet:9093',
-'kafka-jumbo1003.eqiad.wmnet:9093',
-'kafka-jumbo1004.eqiad.wmnet:9093',
-'kafka-jumbo1005.eqiad.wmnet:9093',
-'kafka-jumbo1006.eqiad.wmnet:9093',
-]
+# Array of kafka brokers in jumbo-eqiad with SSL port 9093
+$kafka_brokers = $config['brokers']['ssl_array']
 
 $topic = "webrequest_${cache_cluster}_test"
 # These used to be parameters, but I don't really see why given we never change
@@ -73,6 +64,7 @@
 
 $ssl_certificate_secrets_path = 'certificates/varnishkafka/varnishkafka.crt.pem'
 $ssl_certificate_location = "${ssl_location}/varnishkafka.crt.pem"
+$ssl_cipher_suites = 'ECDHE-ECDSA-AES256-GCM-SHA384'
 
 file { $ssl_location:
 ensure => 'directory',
@@ -145,6 +137,7 @@
 ssl_key_password => $ssl_key_password,
 ssl_key_location => $ssl_key_location,
 ssl_certificate_location => $ssl_certificate_location,
+ssl_cipher_suites=> $ssl_cipher_suites,
 require  => [
 File[$ssl_key_location],
 File[$ssl_certificate_location]
diff --git a/modules/profile/manifests/kafka/broker.pp 
b/modules/profile/manifests/kafka/broker.pp
index 9fe7bcb..33109ab 100644
--- a/modules/profile/manifests/kafka/broker.pp
+++ b/modules/profile/manifests/kafka/broker.pp
@@ -183,6 +183,9 @@
 $ssl_truststore_secrets_path= 

[MediaWiki-commits] [Gerrit] operations/puppet[production]: Bump specified kafka version to 1.0.0-1

2018-01-03 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/401737 )

Change subject: Bump specified kafka version to 1.0.0-1
..


Bump specified kafka version to 1.0.0-1

It is already upgraded, but puppet was ensuring the wrong version

Change-Id: I066511d71f655dcddd37fb27703dd63be7a9a7f4
---
M modules/profile/manifests/kafka/broker.pp
1 file changed, 1 insertion(+), 1 deletion(-)

Approvals:
  Ottomata: Looks good to me, approved
  jenkins-bot: Verified



diff --git a/modules/profile/manifests/kafka/broker.pp 
b/modules/profile/manifests/kafka/broker.pp
index 019c994..9fe7bcb 100644
--- a/modules/profile/manifests/kafka/broker.pp
+++ b/modules/profile/manifests/kafka/broker.pp
@@ -229,7 +229,7 @@
 # TODO: These should be removed once they are
 # the default in ::confluent::kafka module
 scala_version => '2.11',
-kafka_version => '0.11.0.1-1',
+kafka_version => '1.0.0-1',
 java_home => '/usr/lib/jvm/java-8-openjdk-amd64',
 }
 

-- 
To view, visit https://gerrit.wikimedia.org/r/401737

Gerrit-MessageType: merged
Gerrit-Change-Id: I066511d71f655dcddd37fb27703dd63be7a9a7f4
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 
Gerrit-Reviewer: Ottomata 
Gerrit-Reviewer: jenkins-bot <>

___
MediaWiki-commits mailing list
MediaWiki-commits@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/mediawiki-commits


[MediaWiki-commits] [Gerrit] operations/puppet[production]: Bump specified kafka version to 1.0.0-1

2018-01-03 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/401737 )

Change subject: Bump specified kafka version to 1.0.0-1
..

Bump specified kafka version to 1.0.0-1

It is already upgraded, but puppet was ensuring the wrong version

Change-Id: I066511d71f655dcddd37fb27703dd63be7a9a7f4
---
M modules/profile/manifests/kafka/broker.pp
1 file changed, 1 insertion(+), 1 deletion(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet 
refs/changes/37/401737/1

diff --git a/modules/profile/manifests/kafka/broker.pp 
b/modules/profile/manifests/kafka/broker.pp
index 019c994..9fe7bcb 100644
--- a/modules/profile/manifests/kafka/broker.pp
+++ b/modules/profile/manifests/kafka/broker.pp
@@ -229,7 +229,7 @@
 # TODO: These should be removed once they are
 # the default in ::confluent::kafka module
 scala_version => '2.11',
-kafka_version => '0.11.0.1-1',
+kafka_version => '1.0.0-1',
 java_home => '/usr/lib/jvm/java-8-openjdk-amd64',
 }
 

-- 
To view, visit https://gerrit.wikimedia.org/r/401737

Gerrit-MessageType: newchange
Gerrit-Change-Id: I066511d71f655dcddd37fb27703dd63be7a9a7f4
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 

___
MediaWiki-commits mailing list
MediaWiki-commits@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/mediawiki-commits


[MediaWiki-commits] [Gerrit] operations/puppet[production]: Add ssl_array and ssl_string entries to kafka_config

2018-01-03 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/398863 )

Change subject: Add ssl_array and ssl_string entries to kafka_config
..


Add ssl_array and ssl_string entries to kafka_config

This makes it easier for kafka clients to choose if they communicate
with Kafka over SSL.

Change-Id: I3ed69794f7153760e1c54c23c79a2ec014e75a48
---
M modules/role/lib/puppet/parser/functions/kafka_config.rb
1 file changed, 16 insertions(+), 6 deletions(-)

Approvals:
  Ottomata: Verified; Looks good to me, approved
  Elukey: Looks good to me, but someone else must approve



diff --git a/modules/role/lib/puppet/parser/functions/kafka_config.rb 
b/modules/role/lib/puppet/parser/functions/kafka_config.rb
index 11bc44e..ffbe1d9 100644
--- a/modules/role/lib/puppet/parser/functions/kafka_config.rb
+++ b/modules/role/lib/puppet/parser/functions/kafka_config.rb
@@ -67,17 +67,27 @@
 # These are the zookeeper hosts for this kafka cluster.
 zk_hosts = zk_clusters[zk_cluster_name]['hosts'].keys.sort
 
+default_port = 9092
+default_ssl_port = 9093
 jmx_port = ''
+
 config = {
   'name'  => cluster_name,
   'brokers'   => {
-'hash' => brokers,
-'array'=> brokers.keys.sort,
-# list of comma-separated host:port broker pairs
-'string'   => brokers.map { |host, conf| "#{host}:#{conf['port'] || 9092}" }.sort.join(','),
+'hash'   => brokers,
+# array of broker hostnames without port.  TODO: change this to use host:port?
+'array'  => brokers.keys.sort,
+# string list of comma-separated host:port broker
+'string' => brokers.map { |host, conf| "#{host}:#{conf['port'] || default_port}" }.sort.join(','),
+
+# array host:ssl_port brokers
+'ssl_array'  => brokers.map { |host, conf| "#{host}:#{conf['ssl_port'] || default_ssl_port}" }.sort,
+# string list of comma-separated host:ssl_port brokers
+'ssl_string' => brokers.map { |host, conf| "#{host}:#{conf['ssl_port'] || default_ssl_port}" }.sort.join(','),
+
 # list of comma-separated host_ broker pairs used as graphite wildcards
-'graphite' => "{#{brokers.keys.map { |b| "#{b.tr '.', '_'}_#{jmx_port}" }.sort.join(',')}}",
-'size' => brokers.keys.size
+'graphite'   => "{#{brokers.keys.map { |b| "#{b.tr '.', '_'}_#{jmx_port}" }.sort.join(',')}}",
+'size'   => brokers.keys.size
   },
   'jmx_port'  => jmx_port,
   'zookeeper' => {

-- 
To view, visit https://gerrit.wikimedia.org/r/398863

Gerrit-MessageType: merged
Gerrit-Change-Id: I3ed69794f7153760e1c54c23c79a2ec014e75a48
Gerrit-PatchSet: 3
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 
Gerrit-Reviewer: Elukey 
Gerrit-Reviewer: Giuseppe Lavagetto 
Gerrit-Reviewer: Ottomata 
Gerrit-Reviewer: jenkins-bot <>
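The new `ssl_array` / `ssl_string` entries above are just sorted `host:port` renderings of the same broker hash, with `ssl_port` falling back to the 9093 default. A rough Python translation of the Ruby, with a hypothetical broker hash (the real values come from the puppet cluster config, not from this sketch):

```python
# Hypothetical broker hash; one broker overrides the default SSL port.
brokers = {
    'kafka-jumbo1002.eqiad.wmnet': {'ssl_port': 9193},
    'kafka-jumbo1001.eqiad.wmnet': {},
}
DEFAULT_SSL_PORT = 9093

def broker_endpoints(brokers, port_key, default_port):
    """Sorted 'host:port' strings, one per broker, with a per-key port fallback."""
    return sorted('%s:%d' % (host, conf.get(port_key, default_port))
                  for host, conf in brokers.items())

ssl_array = broker_endpoints(brokers, 'ssl_port', DEFAULT_SSL_PORT)
ssl_string = ','.join(ssl_array)
```

Clients that want TLS pick `ssl_array`/`ssl_string`; plaintext clients keep using the existing `array`/`string` entries.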



[MediaWiki-commits] [Gerrit] operations...varnishkafka[master]: Parameterize kafka.ssl.cipher.suites

2018-01-03 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/399689 )

Change subject: Parameterize kafka.ssl.cipher.suites
..


Parameterize kafka.ssl.cipher.suites

Bug: T177225
Change-Id: Ie4dafe2a0323428b66042a126cfa0bdbaa01bec3
---
M manifests/instance.pp
M templates/varnishkafka.conf.erb
2 files changed, 9 insertions(+), 1 deletion(-)

Approvals:
  Ottomata: Looks good to me, approved
  jenkins-bot: Verified



diff --git a/manifests/instance.pp b/manifests/instance.pp
index 8faca33..b8c847a 100644
--- a/manifests/instance.pp
+++ b/manifests/instance.pp
@@ -93,6 +93,10 @@
 # $ssl_certificate_location - Full path of the SSL client certificate.
 # Default: undef
 #
+# $ssl_cipher_suites- Comma separated string of cipher suites that are permitted to
+# be used for SSL communication with brokers.  This must match
+# at least one of the cipher suites allowed by the brokers.
+#
 define varnishkafka::instance(
 $brokers= ['localhost:9092'],
 $topic  = 'varnish',
@@ -139,6 +143,7 @@
 $ssl_key_password   = undef,
 $ssl_key_location   = undef,
 $ssl_certificate_location   = undef,
+$ssl_cipher_suites  = undef,
 ) {
 require ::varnishkafka
 
diff --git a/templates/varnishkafka.conf.erb b/templates/varnishkafka.conf.erb
index df797ca..927ef5e 100644
--- a/templates/varnishkafka.conf.erb
+++ b/templates/varnishkafka.conf.erb
@@ -279,4 +279,7 @@
 kafka.ssl.key.password=<%= @ssl_key_password %>
 kafka.ssl.key.location=<%= @ssl_key_location %>
 kafka.ssl.certificate.location=<%= @ssl_certificate_location %>
-<% end -%>
\ No newline at end of file
+<% if @ssl_cipher_suites -%>
+kafka.ssl.cipher.suites=<%= @ssl_cipher_suites %>
+<% end -%>
+<% end -%>

-- 
To view, visit https://gerrit.wikimedia.org/r/399689

Gerrit-MessageType: merged
Gerrit-Change-Id: Ie4dafe2a0323428b66042a126cfa0bdbaa01bec3
Gerrit-PatchSet: 3
Gerrit-Project: operations/puppet/varnishkafka
Gerrit-Branch: master
Gerrit-Owner: Ottomata 
Gerrit-Reviewer: Elukey 
Gerrit-Reviewer: Ottomata 
Gerrit-Reviewer: jenkins-bot <>
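Since `ECDHE-ECDSA-AES256-GCM-SHA384` (the suite pinned for varnishkafka in the related jumbo change) is an OpenSSL-style cipher name, it can be sanity-checked against a local TLS stack before being pinned in config. A sketch, assuming a reasonably modern OpenSSL build that still ships this suite:

```python
import ssl

# set_ciphers() raises ssl.SSLError if the cipher string matches nothing,
# so this doubles as a cheap validity check for the pinned suite.
PINNED = 'ECDHE-ECDSA-AES256-GCM-SHA384'
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.set_ciphers(PINNED)
enabled = {c['name'] for c in ctx.get_ciphers()}
```

Note that TLS 1.3 suites may also appear in `enabled`: `set_ciphers` only restricts the pre-1.3 cipher negotiation.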



[MediaWiki-commits] [Gerrit] operations/puppet[production]: Add klog alias to otto's bash aliases for tailing kafka logs

2018-01-03 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/401732 )

Change subject: Add klog alias to otto's bash aliases for tailing kafka logs
..


Add klog alias to otto's bash aliases for tailing kafka logs

Change-Id: I5d84d7383b496d0a5296b4a59c4a793242038d39
---
M modules/admin/files/home/otto/.bash_aliases
1 file changed, 2 insertions(+), 1 deletion(-)

Approvals:
  Ottomata: Looks good to me, approved
  jenkins-bot: Verified



diff --git a/modules/admin/files/home/otto/.bash_aliases 
b/modules/admin/files/home/otto/.bash_aliases
index 418b077..efdc61c 100644
--- a/modules/admin/files/home/otto/.bash_aliases
+++ b/modules/admin/files/home/otto/.bash_aliases
@@ -4,4 +4,5 @@
 alias cdr='cd /srv/deployment/analytics/refinery'
 alias hproxy="export http_proxy=http://webproxy.eqiad.wmnet:8080; export HTTPS_PROXY=http://webproxy.eqiad.wmnet:8080"
 alias slog='sudo tail -n 200 -f /var/log/syslog'
-alias pvl='pv -l > /dev/null'
\ No newline at end of file
+alias pvl='pv -l > /dev/null'
+alias klog='sudo tail -f /var/log/kafka/server.log /var/log/kafka/kafka-authorizer.log'

-- 
To view, visit https://gerrit.wikimedia.org/r/401732

Gerrit-MessageType: merged
Gerrit-Change-Id: I5d84d7383b496d0a5296b4a59c4a793242038d39
Gerrit-PatchSet: 2
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 
Gerrit-Reviewer: Alex Monk 
Gerrit-Reviewer: Elukey 
Gerrit-Reviewer: Muehlenhoff 
Gerrit-Reviewer: Ottomata 
Gerrit-Reviewer: jenkins-bot <>



[MediaWiki-commits] [Gerrit] operations/puppet[production]: Add klog alias to otto's bash aliases for tailing kafka logs

2018-01-03 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/401732 )

Change subject: Add klog alias to otto's bash aliases for tailing kafka logs
..

Add klog alias to otto's bash aliases for tailing kafka logs

Change-Id: I5d84d7383b496d0a5296b4a59c4a793242038d39
---
M modules/admin/files/home/otto/.bash_aliases
1 file changed, 2 insertions(+), 1 deletion(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet 
refs/changes/32/401732/1

diff --git a/modules/admin/files/home/otto/.bash_aliases 
b/modules/admin/files/home/otto/.bash_aliases
index 418b077..3fd414b 100644
--- a/modules/admin/files/home/otto/.bash_aliases
+++ b/modules/admin/files/home/otto/.bash_aliases
@@ -4,4 +4,5 @@
 alias cdr='cd /srv/deployment/analytics/refinery'
 alias hproxy="export http_proxy=http://webproxy.eqiad.wmnet:8080; export HTTPS_PROXY=http://webproxy.eqiad.wmnet:8080"
 alias slog='sudo tail -n 200 -f /var/log/syslog'
-alias pvl='pv -l > /dev/null'
\ No newline at end of file
+alias pvl='pv -l > /dev/null'
+alias klog='sudo tail -f /var/log/kafka/server.log /var/log/kafka/kafka-authorizor.log'

-- 
To view, visit https://gerrit.wikimedia.org/r/401732

Gerrit-MessageType: newchange
Gerrit-Change-Id: I5d84d7383b496d0a5296b4a59c4a793242038d39
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 



[MediaWiki-commits] [Gerrit] operations/puppet[production]: Don't expand wildcards in kafka acls command

2018-01-02 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/401621 )

Change subject: Don't expand wildcards in kafka acls command
..


Don't expand wildcards in kafka acls command

Bug: T167304
Change-Id: I81b9396f73bbd2f67a108d05481e21c28d40f9bf
---
M modules/confluent/files/kafka/kafka.sh
1 file changed, 4 insertions(+), 2 deletions(-)

Approvals:
  Ottomata: Looks good to me, approved
  jenkins-bot: Verified



diff --git a/modules/confluent/files/kafka/kafka.sh 
b/modules/confluent/files/kafka/kafka.sh
index ccb91fe..e2c1c8b 100755
--- a/modules/confluent/files/kafka/kafka.sh
+++ b/modules/confluent/files/kafka/kafka.sh
@@ -112,5 +112,7 @@
 echo "${zookeeper_connect_commands}" | /bin/grep -q "${command}" && EXTRA_OPTS="${EXTRA_OPTS}${ZOOKEEPER_CONNECT_OPT} "
 
 # Print out the command we are about to exec, and then run it
-echo "${command} ${EXTRA_OPTS}$@"
-${command} ${EXTRA_OPTS}$@
+# set -f to not expand wildcards in command, e.g. --topic '*'
+set -f
+echo ${command} ${EXTRA_OPTS}"$@"
+${command} ${EXTRA_OPTS}"$@"

-- 
To view, visit https://gerrit.wikimedia.org/r/401621

Gerrit-MessageType: merged
Gerrit-Change-Id: I81b9396f73bbd2f67a108d05481e21c28d40f9bf
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 
Gerrit-Reviewer: Ottomata 
Gerrit-Reviewer: jenkins-bot <>
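The reason for `set -f` in kafka.sh is that, without it, the shell glob-expands a bare `*` argument (such as `--topic '*'` re-expanded after variable interpolation) against the current directory before the Kafka tool ever sees it. A small demonstration, driven from Python via a bash child process:

```python
import subprocess

def bash(snippet):
    """Run a bash snippet and return its stripped stdout."""
    return subprocess.run(['bash', '-c', snippet],
                          capture_output=True, text=True).stdout.strip()

# With globbing disabled (set -f), the literal '*' reaches the command intact,
# which is what `kafka acls --topic '*'` needs.
noglob = bash('set -f; echo --topic *')
```

Without `set -f`, the same `echo` would print whatever filenames happen to match `*` in the working directory instead of the literal wildcard.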



[MediaWiki-commits] [Gerrit] operations/puppet[production]: Don't expand wildcards in kafka acls command

2018-01-02 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/401621 )

Change subject: Don't expand wildcards in kafka acls command
..

Don't expand wildcards in kafka acls command

Bug: T167304
Change-Id: I81b9396f73bbd2f67a108d05481e21c28d40f9bf
---
M modules/confluent/files/kafka/kafka.sh
1 file changed, 4 insertions(+), 2 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet 
refs/changes/21/401621/1

diff --git a/modules/confluent/files/kafka/kafka.sh 
b/modules/confluent/files/kafka/kafka.sh
index ccb91fe..e2c1c8b 100755
--- a/modules/confluent/files/kafka/kafka.sh
+++ b/modules/confluent/files/kafka/kafka.sh
@@ -112,5 +112,7 @@
 echo "${zookeeper_connect_commands}" | /bin/grep -q "${command}" && EXTRA_OPTS="${EXTRA_OPTS}${ZOOKEEPER_CONNECT_OPT} "
 
 # Print out the command we are about to exec, and then run it
-echo "${command} ${EXTRA_OPTS}$@"
-${command} ${EXTRA_OPTS}$@
+# set -f to not expand wildcards in command, e.g. --topic '*'
+set -f
+echo ${command} ${EXTRA_OPTS}"$@"
+${command} ${EXTRA_OPTS}"$@"

-- 
To view, visit https://gerrit.wikimedia.org/r/401621

Gerrit-MessageType: newchange
Gerrit-Change-Id: I81b9396f73bbd2f67a108d05481e21c28d40f9bf
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 



[MediaWiki-commits] [Gerrit] operations/puppet[production]: Revert back to correct confluent VerifyRelease key in updates

2018-01-02 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/401606 )

Change subject: Revert back to correct confluent VerifyRelease key in updates
..


Revert back to correct confluent VerifyRelease key in updates

This was not my problem

Change-Id: Ieef72e7ed753b160cf2fc832c25aa1a088db246f
---
M modules/aptrepo/files/updates
1 file changed, 1 insertion(+), 1 deletion(-)

Approvals:
  Ottomata: Looks good to me, approved
  jenkins-bot: Verified



diff --git a/modules/aptrepo/files/updates b/modules/aptrepo/files/updates
index 0c3fe29..e5381fe 100644
--- a/modules/aptrepo/files/updates
+++ b/modules/aptrepo/files/updates
@@ -157,7 +157,7 @@
 UDebComponents:
 Suite: stable
 Architectures: amd64
-VerifyRelease: 28A1C275F9F8725B
+VerifyRelease: 670540C841468433
 ListShellHook: grep-dctrl -e -P '^confluent-kafka-2\.11' || [ $? -eq 1 ]
 
 Name: docker

-- 
To view, visit https://gerrit.wikimedia.org/r/401606

Gerrit-MessageType: merged
Gerrit-Change-Id: Ieef72e7ed753b160cf2fc832c25aa1a088db246f
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 
Gerrit-Reviewer: Ottomata 
Gerrit-Reviewer: jenkins-bot <>



[MediaWiki-commits] [Gerrit] operations/puppet[production]: Revert back to correct confluent VerifyRelease key in updates

2018-01-02 Thread Ottomata (Code Review)
Ottomata has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/401606 )

Change subject: Revert back to correct confluent VerifyRelease key in updates
..

Revert back to correct confluent VerifyRelease key in updates

This was not my problem

Change-Id: Ieef72e7ed753b160cf2fc832c25aa1a088db246f
---
M modules/aptrepo/files/updates
1 file changed, 1 insertion(+), 1 deletion(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet 
refs/changes/06/401606/1

diff --git a/modules/aptrepo/files/updates b/modules/aptrepo/files/updates
index 0c3fe29..e5381fe 100644
--- a/modules/aptrepo/files/updates
+++ b/modules/aptrepo/files/updates
@@ -157,7 +157,7 @@
 UDebComponents:
 Suite: stable
 Architectures: amd64
-VerifyRelease: 28A1C275F9F8725B
+VerifyRelease: 670540C841468433
 ListShellHook: grep-dctrl -e -P '^confluent-kafka-2\.11' || [ $? -eq 1 ]
 
 Name: docker

-- 
To view, visit https://gerrit.wikimedia.org/r/401606

Gerrit-MessageType: newchange
Gerrit-Change-Id: Ieef72e7ed753b160cf2fc832c25aa1a088db246f
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 



[MediaWiki-commits] [Gerrit] operations/puppet[production]: Use key from apt-key list --with-colons for confluent 4.0 re...

2018-01-02 Thread Ottomata (Code Review)
Ottomata has submitted this change and it was merged. ( 
https://gerrit.wikimedia.org/r/401598 )

Change subject: Use key from apt-key list --with-colons for confluent 4.0 
reprepro updates
..


Use key from apt-key list --with-colons for confluent 4.0 reprepro updates

Got this from apt-key list --with-colons on a machine where I added confluent's 
key

Change-Id: Ida3ebcda31a267c3147b72f82e0437f7141ad74f
---
M modules/aptrepo/files/updates
1 file changed, 1 insertion(+), 1 deletion(-)

Approvals:
  Ottomata: Looks good to me, approved
  jenkins-bot: Verified



diff --git a/modules/aptrepo/files/updates b/modules/aptrepo/files/updates
index e5381fe..0c3fe29 100644
--- a/modules/aptrepo/files/updates
+++ b/modules/aptrepo/files/updates
@@ -157,7 +157,7 @@
 UDebComponents:
 Suite: stable
 Architectures: amd64
-VerifyRelease: 670540C841468433
+VerifyRelease: 28A1C275F9F8725B
 ListShellHook: grep-dctrl -e -P '^confluent-kafka-2\.11' || [ $? -eq 1 ]
 
 Name: docker

-- 
To view, visit https://gerrit.wikimedia.org/r/401598
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: merged
Gerrit-Change-Id: Ida3ebcda31a267c3147b72f82e0437f7141ad74f
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ottomata 
Gerrit-Reviewer: Ottomata 
Gerrit-Reviewer: jenkins-bot <>
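`apt-key list --with-colons` prints gpg's machine-readable record format, in which the long key ID (the kind of value used for `VerifyRelease` above) is the fifth colon-separated field of a `pub:` or `sub:` record. A sketch of extracting it; the sample record below is illustrative, not the actual Confluent key data:

```python
# Parse key IDs out of gpg --with-colons style output.
# Per gpg's DETAILS documentation, the record type is field 1 and the
# 16-hex-digit key ID is field 5 (index 4).
SAMPLE = (
    'pub:-:4096:1:670540C841468433:1466801437::::::scESC::::::23::0:\n'
    'uid:-::::1466801437::DEADBEEF::Example Packaging Key::::::::::0:\n'
)

def key_ids(colons_output):
    ids = []
    for line in colons_output.splitlines():
        fields = line.split(':')
        if fields[0] in ('pub', 'sub'):
            ids.append(fields[4])
    return ids
```

Reading the ID from the colon-delimited output avoids guessing at the human-readable `apt-key list` layout, which varies between gpg versions.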


