By default, Marathon exposes its web UI on port 8080. This can be configured via command-line flags.
Marathon UI introduces the following concepts to illustrate the possible statuses an app can be in at any point in time:
These statuses are displayed in the UI’s application list view to provide an at-a-glance overview of the global state to the user.
Following is an explanation of how each status is determined based on certain conditions as exposed by the REST API.
The application is reported as being successfully running.
Whenever a change to the application has been requested by the user, Marathon performs the required actions, which have not yet completed.
True when the v2/apps endpoint returns a JSON response with app.deployments.length > 0.
An application whose target instance count is 0 and whose running task count is 0. True when the returned JSON satisfies: app.instances === 0 && app.tasksRunning === 0
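As a sketch, the two conditions above can be combined to derive an app's status. The App interface here is a hypothetical, simplified subset of the real /v2/apps response; only the fields named in the conditions are modeled.

```typescript
// Hypothetical, simplified subset of a /v2/apps entry.
interface App {
  instances: number;      // target instance count
  tasksRunning: number;   // currently running tasks
  deployments: unknown[]; // active deployments affecting this app
}

// Derive the app status using the conditions described above:
// an active deployment takes precedence over the suspended check.
function appStatus(app: App): "deploying" | "suspended" | "running" {
  if (app.deployments.length > 0) return "deploying";
  if (app.instances === 0 && app.tasksRunning === 0) return "suspended";
  return "running";
}

console.log(appStatus({ instances: 0, tasksRunning: 0, deployments: [] }));
// → suspended
```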
Additionally, by inspecting the v2/queue endpoint, the following states can also be determined:
Marathon is waiting for offers from Mesos. True whenever an app has queueEntry.delay.overdue === true.
An app is considered delayed whenever too many of its tasks failed in a short amount of time. Marathon will pause this deployment and retry later. True if the queue endpoint returns the following JSON condition: queueEntry.delay.overdue === false
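Since both queue-based states hinge on a single flag, they can be sketched as one check. The QueueEntry shape here is a hypothetical simplification of the real /v2/queue response.

```typescript
// Hypothetical, simplified subset of a /v2/queue entry.
interface QueueEntry {
  delay: { overdue: boolean };
}

// overdue === true: the launch delay has elapsed and Marathon is
// waiting for suitable Mesos offers.
// overdue === false: task launches are still being throttled after
// recent failures, i.e. the app is delayed.
function queueStatus(entry: QueueEntry): "waiting" | "delayed" {
  return entry.delay.overdue ? "waiting" : "delayed";
}
```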
It is possible to specify health checks to be run against an application’s tasks. For instructions on how to set up and use health checks, please refer to the health checks page.
An application’s task lifecycle always falls into one of the following conditions.
A task is considered healthy when all of the supplied health checks are passing.
A task is considered unhealthy whenever one or more health checks are reported as failing.
A task is staged when a launch request has been submitted to the cluster but Mesos has not yet reported the task as running. Fetching the resources specified in the app definition’s “fetch” field, or pulling Docker images, happens before Mesos reports the task as running.
Additionally, the UI introduces the following concepts.
The health of the task is unknown because no health checks were defined.
Whenever there are more running tasks than the number of instances requested. This happens, for example, when a rolling restart is performed with a deployment policy that enforces a draining approach (i.e. new tasks are started before the running ones are destroyed), or when scaling down.
Whenever a task never manages to reach the staged status, e.g. when no matching offer has been received, or when Marathon throttles the launching of new tasks to avoid overwhelming Mesos.
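The task-level conditions above can be sketched as a single classifier. The Task interface and its field names (startedAt, stagedAt, healthCheckResults) are hypothetical simplifications of the real task representation, and the label for a task that never reached the staged status is an assumed placeholder name.

```typescript
// Hypothetical, simplified task representation.
interface Task {
  startedAt?: string; // set once Mesos reports the task as running
  stagedAt?: string;  // set when the launch request was submitted
  healthCheckResults?: { alive: boolean }[];
}

function taskStatus(
  task: Task
): "healthy" | "unhealthy" | "unknown" | "staged" | "not yet staged" {
  if (task.startedAt) {
    const results = task.healthCheckResults;
    // No health checks defined: health is unknown.
    if (!results || results.length === 0) return "unknown";
    // Healthy only when all supplied health checks pass.
    return results.every(r => r.alive) ? "healthy" : "unhealthy";
  }
  // Submitted to the cluster but not reported as running.
  if (task.stagedAt) return "staged";
  // Never reached the staged status (placeholder label).
  return "not yet staged";
}
```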