[jira] [Reopened] (SPARK-33564) Prometheus metrics for Master and Worker isn't working

2020-12-04 Thread Paulo Roberto de Oliveira Castro (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-33564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Roberto de Oliveira Castro reopened SPARK-33564:
--

Just to understand: does the configuration need to be set up before 
start-master.sh? If so, how do I change metrics configs between applications? 
Is it possible to run one application using PrometheusServlet and another 
application using a different sink on the same cluster?

Also, is there documentation on this subject? Nowhere is it mentioned that the 
conf should be set before starting the cluster.

Final question: how do I achieve this with YARN? Do I have to set the metrics 
config before launching YARN?
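Regarding the per-application question: Spark's monitoring documentation describes passing metrics configuration as Spark properties with the `spark.metrics.conf.` prefix, which applies only to that application and so needs no cluster restart. A sketch only (note the Master/Worker daemons would still read their sinks from the file loaded at startup, so this covers application-side metrics):

```shell
# Per-application metrics config via Spark properties instead of a
# metrics.properties file; the spark.metrics.conf.* prefix mirrors the
# file's keys. Quoted so the shell does not glob the "*".
bin/spark-shell --master spark://`hostname`:7077 \
  --conf "spark.metrics.conf.*.sink.prometheusServlet.class=org.apache.spark.metrics.sink.PrometheusServlet" \
  --conf "spark.metrics.conf.*.sink.prometheusServlet.path=/metrics/prometheus"
```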

> Prometheus metrics for Master and Worker isn't working 
> ---
>
> Key: SPARK-33564
> URL: https://issues.apache.org/jira/browse/SPARK-33564
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core, Spark Shell
>Affects Versions: 3.0.0, 3.0.1
>Reporter: Paulo Roberto de Oliveira Castro
>Priority: Major
>  Labels: Metrics, metrics, prometheus
>
> Following the [PR|https://github.com/apache/spark/pull/25769] that introduced 
> the Prometheus sink, I downloaded the {{spark-3.0.1-bin-hadoop2.7.tgz}}  
> (also tested with 3.0.0), uncompressed the tgz, and created a file called 
> {{metrics.properties}} with this content:
> {quote}{{*.sink.prometheusServlet.class=org.apache.spark.metrics.sink.PrometheusServlet}}
> {{*.sink.prometheusServlet.path=/metrics/prometheus}}
> {{master.sink.prometheusServlet.path=/metrics/master/prometheus}}
> {{applications.sink.prometheusServlet.path=/metrics/applications/prometheus}}
> {quote}
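The prefix before the first dot in these keys selects the metrics instance the setting applies to: `*` supplies defaults for every instance, and instance-specific entries (`master`, `applications`, and so on) override them. A minimal illustrative resolver of that precedence, not Spark's actual MetricsConfig implementation:

```python
# Illustrative sketch (not Spark's MetricsConfig code) of how the instance
# prefix is resolved: "*" entries are defaults, instance entries override.
def effective_sink_config(properties: dict, instance: str) -> dict:
    merged = {}
    for key, value in properties.items():   # pass 1: wildcard defaults
        prefix, _, rest = key.partition(".")
        if prefix == "*":
            merged[rest] = value
    for key, value in properties.items():   # pass 2: instance-specific overrides
        prefix, _, rest = key.partition(".")
        if prefix == instance:
            merged[rest] = value
    return merged

# The properties from the report's metrics.properties file:
props = {
    "*.sink.prometheusServlet.class": "org.apache.spark.metrics.sink.PrometheusServlet",
    "*.sink.prometheusServlet.path": "/metrics/prometheus",
    "master.sink.prometheusServlet.path": "/metrics/master/prometheus",
    "applications.sink.prometheusServlet.path": "/metrics/applications/prometheus",
}
```

With these properties, `master` inherits the servlet class from `*` but gets its own path, while an instance with no specific entry (e.g. `worker`) falls back to `/metrics/prometheus`.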
> Then I ran: 
> {quote}{{$ sbin/start-master.sh}}
>  {{$ sbin/start-slave.sh spark://`hostname`:7077}}
>  {{$ bin/spark-shell --master spark://`hostname`:7077 
> --files=./metrics.properties --conf spark.metrics.conf=./metrics.properties}}
> {quote}
> The Spark shell opens without problems:
> {quote}{{20/11/25 17:36:07 WARN NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable}}
> {{Using Spark's default log4j profile: 
> org/apache/spark/log4j-defaults.properties}}
> {{Setting default log level to "WARN".}}
> {{To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use 
> setLogLevel(newLevel).}}
> {{Spark context Web UI available at 
> [http://192.168.0.6:4040|http://192.168.0.6:4040/]}}
> {{Spark context available as 'sc' (master = 
> spark://MacBook-Pro-de-Paulo-2.local:7077, app id = 
> app-20201125173618-0002).}}
> {{Spark session available as 'spark'.}}
> {{Welcome to}}
> {{      ____              __}}
> {{     / __/__  ___ _____/ /__}}
> {{    _\ \/ _ \/ _ `/ __/  '_/}}
> {{   /___/ .__/\_,_/_/ /_/\_\   version 3.0.0}}
> {{      /_/}}
> {{Using Scala version 2.12.10 (OpenJDK 64-Bit Server VM, Java 1.8.0_212)}}
> {{Type in expressions to have them evaluated.}}
> {{Type :help for more information. }}
> {{scala>}}
> {quote}
> And when I try to fetch Prometheus metrics for the driver, everything works 
> fine:
> {quote}{{$ curl -s http://localhost:4040/metrics/prometheus/ | head -n 5}}
> metrics_app_20201125173618_0002_driver_BlockManager_disk_diskSpaceUsed_MB_Number\{type="gauges"}
>  0
> metrics_app_20201125173618_0002_driver_BlockManager_disk_diskSpaceUsed_MB_Value\{type="gauges"}
>  0
> metrics_app_20201125173618_0002_driver_BlockManager_memory_maxMem_MB_Number\{type="gauges"}
>  732
> metrics_app_20201125173618_0002_driver_BlockManager_memory_maxMem_MB_Value\{type="gauges"}
>  732
> metrics_app_20201125173618_0002_driver_BlockManager_memory_maxOffHeapMem_MB_Number\{type="gauges"}
>  0
> {quote}
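Each metric above is really a single `name{labels} value` line in the Prometheus text exposition format; the value appearing on the next line is a mail-archive wrapping artifact. A small sketch of parsing such lines (the regex and helper name are mine, and it covers only the simple label shapes shown here):

```python
import re

# Minimal parser for Prometheus text-exposition metric lines such as
#   metrics_..._maxMem_MB_Value{type="gauges"} 732
# Not a full parser: it assumes no commas or escapes inside label values.
LINE_RE = re.compile(
    r'^(?P<name>[A-Za-z_:][A-Za-z0-9_:]*)'   # metric name
    r'(?:\{(?P<labels>[^}]*)\})?'            # optional {label="value",...}
    r'\s+(?P<value>\S+)$'                    # sample value
)

def parse_metric_line(line):
    m = LINE_RE.match(line.strip())
    if not m:
        return None
    labels = {}
    if m.group("labels"):
        for pair in m.group("labels").split(","):
            k, _, v = pair.partition("=")
            labels[k.strip()] = v.strip().strip('"')
    return m.group("name"), labels, float(m.group("value"))
```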
> *The problem appears when I try accessing master metrics*, and I get the 
> following response:
> {quote}{{$ curl -s http://localhost:8080/metrics/master/prometheus}}
> (the response is the HTML of the master web UI, "Spark Master at 
> spark://MacBook-Pro-de-Paulo-2.local:7077", version 3.0.0, with its 
> stylesheets and scripts)
> ...
> {quote}
> Instead of the metrics, I'm getting an HTML page. The same happens for all of 
> these:
> {quote}{{$ curl -s http://localhost:8080/metrics/applications/prometheus/}}
> {{$ curl -s http://localhost:8081/metrics/prometheus/}}
> {quote}
> *I expected metrics in Prometheus format.* All related JSON endpoints seem to 
> be working fine.
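A quick client-side check for this failure mode (metrics text vs. the web UI's HTML) can be sketched as follows; the heuristic is illustrative, not something Spark provides:

```python
def looks_like_prometheus(body: str) -> bool:
    """Heuristic: Prometheus exposition text is line-oriented
    'name{labels} value' data (plus '#' HELP/TYPE comments), while the
    failure in this report returns the master web UI's HTML document."""
    stripped = body.lstrip()
    if stripped[:1] == "<":          # "<html>", "<!DOCTYPE html>", ...
        return False
    for line in stripped.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue                 # skip blanks and comment lines
        first = line.split()[0]
        # metric names begin with a letter or underscore
        return first[0].isalpha() or first[0] == "_"
    return False                     # empty body: not metrics either
```

In this report's case, the body returned by `curl` against port 8080 would fail the check, while the driver's port 4040 output would pass it.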

[jira] [Issue Comment Deleted] (SPARK-33564) Prometheus metrics for Master and Worker isn't working

2020-12-04 Thread Paulo Roberto de Oliveira Castro (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-33564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Roberto de Oliveira Castro updated SPARK-33564:
-
Comment: was deleted

(was: Just to understand, the configuration needs to be set up before 
start-master.sh? If that is the case, how do I change metrics configs between 
applications? Is it possible to run one application using PrometheusServlet and 
another application to use a different sink on this same cluster?

Also, is there documentation about the subject? Because nowhere it is mentioned 
that the conf should be set before starting the cluster.

Final question: how to achieve this using YARN? Do I have to have the metrics 
config set before launching YARN?)


[jira] [Commented] (SPARK-33564) Prometheus metrics for Master and Worker isn't working

2020-12-04 Thread Paulo Roberto de Oliveira Castro (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-33564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17244182#comment-17244182
 ] 

Paulo Roberto de Oliveira Castro commented on SPARK-33564:
--

(The comment text is identical to the one quoted in the reopen message above.)


[jira] [Updated] (SPARK-33564) Prometheus metrics for Master and Worker isn't working

2020-11-26 Thread Paulo Roberto de Oliveira Castro (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-33564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Roberto de Oliveira Castro updated SPARK-33564:
-

[jira] [Updated] (SPARK-33564) Prometheus metrics for Master and Worker isn't working

2020-11-25 Thread Paulo Roberto de Oliveira Castro (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-33564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Roberto de Oliveira Castro updated SPARK-33564:
-

[jira] [Updated] (SPARK-33564) Prometheus metrics for Master and Worker isn't working

2020-11-25 Thread Paulo Roberto de Oliveira Castro (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-33564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Roberto de Oliveira Castro updated SPARK-33564:
-

[jira] [Updated] (SPARK-33564) Prometheus metrics for Master and Worker isn't working

2020-11-25 Thread Paulo Roberto de Oliveira Castro (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-33564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Roberto de Oliveira Castro updated SPARK-33564:
-
Description: 
Following the [PR|https://github.com/apache/spark/pull/25769] that introduced 
the Prometheus sink, I downloaded the {{spark-3.0.1-bin-hadoop2.7.tgz}}  (also 
tested with 3.0.0), uncompressed the tgz and created a file called 
{{metrics.properties}} adding this content:
{quote}*.sink.prometheusServlet.class=org.apache.spark.metrics.sink.PrometheusServlet
{{*.sink.prometheusServlet.path=/metrics/prometheus}}
{{ master.sink.prometheusServlet.path=/metrics/master/prometheus}}
{{ applications.sink.prometheusServlet.path=/metrics/applications/prometheus}}
{quote}
Then I ran: 
{quote}$ sbin/start-master.sh
{{ {{$ sbin/start-slave.sh spark://`hostname`:7077
{{ {{$ bin/spark-shell --master spark://`hostname`:7077 
--files=./metrics.properties --conf spark.metrics.conf=./metrics.properties
{quote}
The Spark shell opens without problems:
{quote}20/11/25 17:36:07 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://192.168.0.6:4040
Spark context available as 'sc' (master = spark://MacBook-Pro-de-Paulo-2.local:7077, app id = app-20201125173618-0002).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 3.0.0
      /_/

Using Scala version 2.12.10 (OpenJDK 64-Bit Server VM, Java 1.8.0_212)
Type in expressions to have them evaluated.
Type :help for more information.

scala>
{quote}
When I fetch the Prometheus metrics for the driver, everything works fine:
{quote}$ curl -s http://localhost:4040/metrics/prometheus/ | head -n 5
metrics_app_20201125173618_0002_driver_BlockManager_disk_diskSpaceUsed_MB_Number\{type="gauges"} 0
metrics_app_20201125173618_0002_driver_BlockManager_disk_diskSpaceUsed_MB_Value\{type="gauges"} 0
metrics_app_20201125173618_0002_driver_BlockManager_memory_maxMem_MB_Number\{type="gauges"} 732
metrics_app_20201125173618_0002_driver_BlockManager_memory_maxMem_MB_Value\{type="gauges"} 732
metrics_app_20201125173618_0002_driver_BlockManager_memory_maxOffHeapMem_MB_Number\{type="gauges"} 0
{quote}
*The problem appears when I try accessing the master metrics*, and I get the 
following response:
{quote}{{$ curl -s http://localhost:8080/metrics/master/prometheus}}
{quote}
The response is the HTML of the Master web UI page ("Spark Master at spark://MacBook-Pro-de-Paulo-2.local:7077", version 3.0.0), not Prometheus metrics.
The same happens for these endpoints:
{quote}{{$ curl -s http://localhost:8080/metrics/applications/prometheus/}}
{{$ curl -s http://localhost:8081/metrics/prometheus/}}
{quote}
Instead, *I expected output in Prometheus format*. All related JSON endpoints 
seem to be working fine.
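For comparison, the JSON endpoints referred to above can be checked like this (paths assumed from the default {{MetricsServlet}} configuration):
{quote}{{$ curl -s http://localhost:8080/metrics/master/json/}}
{{$ curl -s http://localhost:8080/metrics/applications/json/}}
{{$ curl -s http://localhost:4040/metrics/json/}}
{quote}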




[jira] [Created] (SPARK-33564) Prometheus metrics for Master and Worker isn't working

2020-11-25 Thread Paulo Roberto de Oliveira Castro (Jira)
Paulo Roberto de Oliveira Castro created SPARK-33564:


 Summary: Prometheus metrics for Master and Worker isn't working 
 Key: SPARK-33564
 URL: https://issues.apache.org/jira/browse/SPARK-33564
 Project: Spark
  Issue Type: Bug
  Components: Spark Core, Spark Shell
Affects Versions: 3.0.1, 3.0.0
Reporter: Paulo Roberto de Oliveira Castro


Following the [PR|https://github.com/apache/spark/pull/25769] that introduced 
the Prometheus sink, I downloaded the {{spark-3.0.1-bin-hadoop2.7.tgz}}  (also 
tested with 3.0.0), uncompressed the tgz and created a file called 
{{metrics.properties __ }}adding this content:
{{}} 

{{*.sink.prometheusServlet.class=org.apache.spark.metrics.sink.PrometheusServlet}}
{{*.sink.prometheusServlet.path=/metrics/prometheus
master.sink.prometheusServlet.path=/metrics/master/prometheus
applications.sink.prometheusServlet.path=/metrics/applications/prometheus}}

Then I ran:

 

{{$ sbin/start-master.sh}}
{{$ sbin/start-slave.sh spark://`hostname`:7077}}
{{$ bin/spark-shell --master spark://`hostname`:7077 
--files=./metrics.properties --conf spark.metrics.conf=./metrics.properties}}

{{The Spark shell opens without problems:}}

{{}}
{quote}20/11/25 17:36:07 WARN NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable

{{}}

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties

{{}}

Setting default log level to "WARN".

{{}}

To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use 
setLogLevel(newLevel).

{{}}

Spark context Web UI available at http://192.168.0.6:4040

{{}}

Spark context available as 'sc' (master = 
spark://MacBook-Pro-de-Paulo-2.local:7077, app id = app-20201125173618-0002).

{{}}

Spark session available as 'spark'.

{{}}

Welcome to

{{}}

                    __

{{}}

     / __/__  ___ _/ /__

{{}}

    _\ \/ _ \/ _ `/ __/  '_/

{{}}

   /___/ .__/\_,_/_/ /_/\_\   version 3.0.0

{{}}

      /_/

{{}}

         

{{}}

Using Scala version 2.12.10 (OpenJDK 64-Bit Server VM, Java 1.8.0_212)

{{}}

Type in expressions to have them evaluated.

{{}}

Type :help for more information.

{{}}

 

{{}}

scala>
{quote}
{{And when I try to fetch prometheus metrics for driver, everything works 
fine:}}
{quote}$ curl -s http://localhost:4040/metrics/prometheus/ | head -n 5

metrics_app_20201125173618_0002_driver_BlockManager_disk_diskSpaceUsed_MB_Number\{type="gauges"}
 0

metrics_app_20201125173618_0002_driver_BlockManager_disk_diskSpaceUsed_MB_Value\{type="gauges"}
 0

metrics_app_20201125173618_0002_driver_BlockManager_memory_maxMem_MB_Number\{type="gauges"}
 732

metrics_app_20201125173618_0002_driver_BlockManager_memory_maxMem_MB_Value\{type="gauges"}
 732

metrics_app_20201125173618_0002_driver_BlockManager_memory_maxOffHeapMem_MB_Number\{type="gauges"}
 0
{quote}

*The problem appears when I try accessing master metrics*, and I get the 
following problem:


{quote}$ curl -s http://localhost:8080/metrics/master/prometheus




      

        setUIRoot('')

        

        

        Spark Master at spark://MacBook-Pro-de-Paulo-2.local:7077

      

      

        

          

            

              

                

                  

                  3.0.0

                

                Spark Master at spark://MacBook-Pro-de-Paulo-2.local:7077

              

            

          

          

          

            

              URL: 
spark://MacBook-Pro-de-Paulo-2.local:7077
...
{quote}
The same happens for all of those here:

{quote}{{$ curl -s [http://localhost:8080/metrics/applications/prometheus/]}}
{{$ curl -s [http://localhost:8081/metrics/prometheus/]}}
{quote}
Instead, *I expected metrics in prometheus metrics*. All related JSON endpoints 
seem to be working fine.

{{}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org