Out-of-band configuration modifications are not supported. The scheduler's core responsibility is to deploy and maintain the service with the specified configuration, and to do this it assumes ownership of task configuration. If an end user modifies individual tasks through out-of-band configuration operations, the scheduler will override those modifications at a later time. For example:
- If a task crashes, it will be restarted with the configuration known to the scheduler, not one modified out-of-band.
- If a configuration update is initiated, all out-of-band modifications will be overwritten during the rolling update.
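The distinction matters in practice. As a minimal sketch, assuming a dcos-commons-style service installed under the name `cassandra` with a task named `node-0-server` (both illustrative, as are the file and setting): the first command's change is silently lost, while the second survives restarts and updates.

```bash
# Not supported: editing task state directly. Any change made this way
# is overwritten the next time the scheduler relaunches or updates the
# task. (Task name and file are illustrative.)
dcos task exec node-0-server bash -c 'echo "num_tokens: 512" >> cassandra.yaml'

# Supported: change the configuration through the scheduler, which
# rolls the new settings out to every task.
dcos cassandra update start --options=options.json
```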
To prevent accidental data loss, the service does not support reducing the number of pods.
To prevent accidental data loss from reallocation, the service does not support changing volume requirements after initial deployment.
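As a hedged illustration of where these values live, the sketch below uses field names (`nodes.count`, `nodes.disk`, `nodes.disk_type`) typical of DC/OS SDK packages; the exact schema varies by package.

```bash
# Illustrative options.json for a DC/OS SDK package; exact field names
# vary by package. "count" may be increased but never decreased, and
# the volume settings ("disk", "disk_type") are fixed once deployed.
cat > options.json <<'EOF'
{
  "nodes": {
    "count": 3,
    "disk": 10240,
    "disk_type": "ROOT"
  }
}
EOF
dcos package install cassandra --options=options.json
```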
If your cluster does not have enough resources to deploy the service as requested, the initial deployment will not complete until either those resources become available or you reinstall the service with corrected resource requirements. Similarly, scale-outs following the initial deployment will not complete if the cluster does not have sufficient available resources.
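To see whether a deployment is stalled on resources, you can inspect the deploy plan. This assumes a dcos-commons-style service named `cassandra`, which exposes a `plan status` subcommand:

```bash
# A deployment stalled on missing resources shows up as a plan that
# never reaches COMPLETE (service name is illustrative):
dcos cassandra plan status deploy
# Steps stuck in PENDING or PREPARED usually mean the cluster cannot
# satisfy the requested cpus/mem/disk for those pods.
```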
When the service is deployed on a virtual network, it may not be switched to host networking without a full re-installation; the same applies to switching from host to virtual networking.
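The network mode is therefore effectively part of the install-time options. The sketch below shows option keys commonly used by DC/OS SDK packages (`virtual_network_enabled`, `virtual_network_name`); the exact names are an assumption and may differ for your package.

```bash
# Chosen once at install time; switching later requires a full
# uninstall and re-installation. Typical SDK option keys:
#   "service": {
#     "virtual_network_enabled": true,
#     "virtual_network_name": "dcos"
#   }
dcos package install cassandra --options=options.json
```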
Task Environment Variables
Each service task has some number of environment variables, which are used to configure the task. These environment variables are set by the service scheduler. While it is possible to use these environment variables in ad-hoc scripts (e.g., via `dcos task exec`), the name of a given environment variable may change between versions of a service and should not be considered a public API of the service.
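For example, you can dump a task's environment for debugging, keeping in mind that none of the names are stable. The task name `node-0-server` below is illustrative:

```bash
# Dump a task's environment for debugging (task name is illustrative).
# None of these variable names should be treated as a stable API.
dcos task exec node-0-server env | sort
```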
If the service is deployed with a Zone constraint, the constraint may not be removed after initial installation. Likewise, if the service was deployed without a Zone constraint, one may not be added after initial installation.
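As an illustration, a Zone constraint in an SDK-style options file typically uses Marathon operator syntax with the `@zone` key; the option name and its availability are assumptions that vary by package.

```bash
# An @zone constraint in an SDK-style options file; the option key and
# its availability vary by package. Once set (or omitted) at install
# time, it cannot be changed.
echo '{"service": {"placement_constraint": "[[\"@zone\", \"GROUP_BY\", \"3\"]]"}}' > options.json
dcos package install cassandra --options=options.json
```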
- Running multiple instances on a single host is not supported in production.
- Stopping or restarting a node from OpsCenter is not supported. Use `dcos pod restart` to restart nodes from the DC/OS CLI (see the sketch after this list).
- A single OpsCenter cannot manage multiple clusters, but can manage multiple DCs in the same cluster.
- A node will restart if its associated Agent process crashes.
- Point-in-time restore functionality through the OpsCenter UI is not supported.
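Here is the restart flow referenced above, sketched for a service hypothetically named `cassandra` with a pod named `node-0`:

```bash
# List pod names, then restart one through the scheduler instead of
# OpsCenter (service and pod names are illustrative):
dcos cassandra pod list
dcos cassandra pod restart node-0
```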
Automatic failed node recovery
A manual `nodetool removenode` call is currently required when replacing nodes; this step is planned to be automated in a future release.
Nodes are not automatically replaced by the service in the event that a system goes down. You may either replace nodes manually or build your own ruleset and automation to perform this operation automatically, as sketched below.
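A manual replacement might look like the following sketch; the service name `cassandra`, the pod and task names, and the location of `nodetool` inside the task are all assumptions.

```bash
# 1. Ask the scheduler to relaunch the failed pod on a new agent.
dcos cassandra pod replace node-1

# 2. From a surviving node, drop the dead node's old host ID from the
#    ring. Find the ID via `nodetool status`; the nodetool binary's
#    location inside the task sandbox may vary.
dcos task exec node-0-server nodetool removenode <host-id>
```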
Rack awareness within the DC/OS Service is not currently supported, but is planned for a future release of DC/OS.