[jira] [Updated] (SPARK-33564) Prometheus metrics for Master and Worker isn't working

2020-11-26 Thread Paulo Roberto de Oliveira Castro (Jira)


 [ https://issues.apache.org/jira/browse/SPARK-33564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Paulo Roberto de Oliveira Castro updated SPARK-33564:
-
Description: 
Following the [PR|https://github.com/apache/spark/pull/25769] that introduced 
the Prometheus sink, I downloaded {{spark-3.0.1-bin-hadoop2.7.tgz}} (also 
tested with 3.0.0), uncompressed the tgz, and created a file called 
{{metrics.properties}} with this content:
{quote}{{*.sink.prometheusServlet.class=org.apache.spark.metrics.sink.PrometheusServlet}}
 {{*.sink.prometheusServlet.path=/metrics/prometheus}}
 {{master.sink.prometheusServlet.path=/metrics/master/prometheus}}
 {{applications.sink.prometheusServlet.path=/metrics/applications/prometheus}}
{quote}
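(Side note: the standalone Master and Worker can only use a metrics config their own JVMs can load; a minimal sketch for ruling that out, assuming Spark's documented default of reading {{$SPARK_HOME/conf/metrics.properties}}, would be:)
{quote}{{# Sketch: make the same sink config visible to the standalone daemons,}}
 {{# relying on the default lookup of $SPARK_HOME/conf/metrics.properties.}}
 {{$ cp ./metrics.properties conf/metrics.properties}}
 {{$ sbin/stop-master.sh && sbin/start-master.sh}}
{quote}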
Then I ran: 
{quote}{{$ sbin/start-master.sh}}
 {{$ sbin/start-slave.sh spark://`hostname`:7077}}
 {{$ bin/spark-shell --master spark://`hostname`:7077 
--files=./metrics.properties --conf spark.metrics.conf=./metrics.properties}}
{quote}
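(To check whether the daemons even attempted to load the sink, one could grep their logs; a sketch, assuming the default {{$SPARK_HOME/logs}} file naming, and noting that a failed sink instantiation would surface there while success may log nothing:)
{quote}{{# Sketch: look for sink-related errors in the Master daemon log;}}
 {{# an empty result does not prove the sink loaded, only that nothing failed loudly.}}
 {{$ grep -i prometheus logs/*Master*.out}}
{quote}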
The Spark shell opens without problems:
{quote}{{20/11/25 17:36:07 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable}}
 {{Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties}}
 {{Setting default log level to "WARN".}}
 {{To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).}}
 {{Spark context Web UI available at http://192.168.0.6:4040}}
 {{Spark context available as 'sc' (master = spark://MacBook-Pro-de-Paulo-2.local:7077, app id = app-20201125173618-0002).}}
 {{Spark session available as 'spark'.}}
 {{Welcome to}}
 {{      ____              __}}
 {{     / __/__  ___ _____/ /__}}
 {{    _\ \/ _ \/ _ `/ __/  '_/}}
 {{   /___/ .__/\_,_/_/ /_/\_\   version 3.0.0}}
 {{      /_/}}
 {{Using Scala version 2.12.10 (OpenJDK 64-Bit Server VM, Java 1.8.0_212)}}
 {{Type in expressions to have them evaluated.}}
 {{Type :help for more information.}}
 {{scala>}}
{quote}
And when I try to fetch Prometheus metrics for the driver, everything works fine:
{quote}$ curl -s http://localhost:4040/metrics/prometheus/ | head -n 5
 metrics_app_20201125173618_0002_driver_BlockManager_disk_diskSpaceUsed_MB_Number\{type="gauges"} 0
 metrics_app_20201125173618_0002_driver_BlockManager_disk_diskSpaceUsed_MB_Value\{type="gauges"} 0
 metrics_app_20201125173618_0002_driver_BlockManager_memory_maxMem_MB_Number\{type="gauges"} 732
 metrics_app_20201125173618_0002_driver_BlockManager_memory_maxMem_MB_Value\{type="gauges"} 732
 metrics_app_20201125173618_0002_driver_BlockManager_memory_maxOffHeapMem_MB_Number\{type="gauges"} 0
{quote}
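(For context, this working driver endpoint can already be scraped; a hypothetical Prometheus scrape config, where the job name and file name are illustrative only and not from this report, might look like:)
{quote}{{# Hypothetical scrape config for the working driver endpoint;}}
 {{# 'spark-driver' and the file name are illustrative assumptions.}}
 {{$ cat > prometheus-spark.yml <<'EOF'}}
 {{scrape_configs:}}
 {{  - job_name: 'spark-driver'}}
 {{    metrics_path: '/metrics/prometheus'}}
 {{    static_configs:}}
 {{      - targets: ['localhost:4040']}}
 {{EOF}}
{quote}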
*The problem appears when I try to access the master metrics*; instead I get 
the following response:
{quote}{{$ curl -s http://localhost:8080/metrics/master/prometheus}}
 (HTML response, tags stripped in this paste: the Spark Master web UI page, titled "Spark Master at spark://MacBook-Pro-de-Paulo-2.local:7077", version 3.0.0, showing "URL: spark://MacBook-Pro-de-Paulo-2.local:7077" ...)
{quote}
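(A quick way to confirm the request is falling through to the web UI rather than the metrics servlet, as a diagnostic sketch, is to compare the content types of the two endpoints:)
{quote}{{# Diagnostic sketch: the driver endpoint should report text/plain,}}
 {{# while the master endpoint here answers with the UI's text/html.}}
 {{$ curl -sI http://localhost:4040/metrics/prometheus/ | grep -i content-type}}
 {{$ curl -sI http://localhost:8080/metrics/master/prometheus | grep -i content-type}}
{quote}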
Instead of the metrics, I'm getting an HTML page. The same happens for these 
endpoints:
{quote}{{$ curl -s http://localhost:8080/metrics/applications/prometheus/}}
 {{$ curl -s http://localhost:8081/metrics/prometheus/}}
{quote}
*I expected metrics in Prometheus format.* All related JSON endpoints seem to 
be working fine.
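(For comparison, these JSON endpoints, using the MetricsServlet paths from the default {{metrics.properties}} template, are the ones that respond correctly:)
{quote}{{$ curl -s http://localhost:4040/metrics/json/}}
 {{$ curl -s http://localhost:8081/metrics/json/}}
 {{$ curl -s http://localhost:8080/metrics/master/json/}}
 {{$ curl -s http://localhost:8080/metrics/applications/json/}}
{quote}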
