The `py-spark` framework looks to be driver-based, i.e. it uses the
`MesosSchedulerDriver` underneath. You would need to use the `/teardown`
endpoint, which takes the `frameworkId` as a query parameter, to tear it
down. For more details, see:
That's not the endpoint you want (that one is for frameworks to use). You
want the `/teardown` endpoint (that one is for operators).
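If it helps, the operator call can be sketched as a plain HTTP POST. This is a minimal sketch, assuming a master listening on the default port 5050; the master address and framework ID below are placeholders, not values from this thread:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Placeholder master address and framework ID -- substitute your own.
master = "http://mesos-master.example.com:5050"
framework_id = "20160101-000000-0000000000-0000-0001"

# The operator /master/teardown endpoint expects an HTTP POST whose body
# carries the framework ID, form-encoded as frameworkId=<id>.
req = Request(
    master + "/master/teardown",
    data=urlencode({"frameworkId": framework_id}).encode(),
    method="POST",
)

# urllib.request.urlopen(req) would actually issue the teardown; it is
# omitted here because it needs a reachable master (and, if authentication
# is enabled, operator credentials).
```

The same request is commonly issued from the shell as `curl -X POST http://<master>:5050/master/teardown -d 'frameworkId=<id>'`.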
We're getting the following error message returned when attempting to
tear down a framework on our cluster:
june@cluster:~$ mesos frameworks
ID      NAME    HOST    ACTIVE  TASKS   CPU     MEM     DISK