drafted, but still missing instructions on how to obtain/run system-tools

This commit is contained in:
Mark Bishop
2018-09-20 15:29:35 -07:00
parent 70792f049e
commit b8f2851816
4 changed files with 45 additions and 90 deletions
@@ -1,8 +1,11 @@
---
title: Removing Rancher
weight: 5000
draft: true
---
When you deploy Rancher and use it to provision clusters, Rancher installs its components on the nodes you use. This section provides instructions for cleaning Rancher components from nodes that you no longer want to use with Rancher.
There are two contexts in which you'd remove Rancher from a Kubernetes cluster node.
- [Removing Rancher from Your Rancher Cluster Nodes]({{< baseurl >}}/rancher/v2.x/en/admin-settings/removing-rancher/rancher-cluster-nodes/)
@@ -4,9 +4,11 @@ weight: 375
aliases:
- /rancher/v2.x/en/installation/removing-rancher/cleaning-cluster-nodes/
- /rancher/v2.x/en/installation/removing-rancher/
- /rancher/v2.x/en/faq/cleaning-cluster-nodes
- /rancher/v2.x/en/faq/cleaning-cluster-nodes/
---
When you deploy Rancher to the Kubernetes nodes that host your [Rancher installation]({{< baseurl >}}/rancher/v2.x/en/installation/), resources (containers/virtual network interfaces) and configuration items (certificates/configuration files) are created.
When removing nodes from your installation cluster (provided that they are in `Active` state), those resources are automatically cleaned up, and the only action needed is to restart the node. When a node has become unreachable and the automatic cleanup process cannot be used, we describe the steps that need to be executed before the node can be added to a cluster again.
## Removing a Node from a Cluster Using the Rancher UI
@@ -142,7 +144,7 @@ ip address show
ifconfig -a
```
**To remove an interface:**
```
ip link delete interface_name
```
@@ -7,23 +7,14 @@ draft: true
When you no longer have use for Rancher in a cluster that you've [provisioned using Rancher]({{< baseurl >}}rancher/v2.x/en/cluster-provisioning/#cluster-creation-in-rancher), and you want to remove Rancher from its nodes, follow one of the sets of instructions below based on your [cluster type]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#cluster-creation-options). The method you'll use to remove Rancher changes based on the type of cluster.
## Hosted Kubernetes Providers
To remove Rancher from clusters hosted by a Kubernetes provider, simply delete the clusters from Rancher. The cluster will remove Rancher components through the Norman API (Rancher's API framework).
<!-- MB 9/19: I know this is probably BS, but I need to confirm with a dev on how to remove Rancher from a hosted cluster -->
## Nodes Launched by RKE / Nodes Hosted by a Provider
For cluster nodes provisioned using [RKE]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) or a [hosted Kubernetes provider]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#hosted-kubernetes-cluster), you can remove Rancher by downloading and running the Rancher [system-tools](https://github.com/rancher/system-tools/releases):
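As a sketch of obtaining the binary on a Linux amd64 host, downloading a release might look like the following. The version tag and asset name below are placeholders; check the releases page for the actual values for your OS and architecture.

```shell
#!/bin/bash
# Placeholder values: check https://github.com/rancher/system-tools/releases
# for the latest tag and the exact asset name for your platform.
VERSION="${VERSION:-v0.1.0}"
ASSET="${ASSET:-system-tools_linux-amd64}"
URL="https://github.com/rancher/system-tools/releases/download/${VERSION}/${ASSET}"

echo "Fetching ${URL}"
# Uncomment to download the binary and mark it executable:
# curl -LO "${URL}" && chmod +x "${ASSET}"
```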
### Using System-Tools
System-tools is a utility that cleans up Rancher resources. In this use case, it will help you remove the Rancher management plane from your cluster nodes.
#### Usage
@@ -33,6 +24,10 @@ System-tool is a utility that cleans up rancher projects. In this use case, it w
system-tools remove [command options] [arguments...]
```
<br/>
When you run this command, the components listed in [What Gets Removed?](#what-gets-removed) are deleted.
##### Options
| Option | Description |
@@ -41,30 +36,27 @@ system-tools remove [command options] [arguments...]
| `--namespace <NAMESPACE>, -n <NAMESPACE>` | Rancher 2.x deployment namespace. If no namespace is defined, the option defaults to `cattle-system`. |
| `--force` | Skips the interactive removal confirmation and removes the Rancher deployment without prompting. |
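Putting these options together, a removal invocation might look like the sketch below. The kubeconfig path is an assumption for your environment, and the final command is left commented out so you can review it before executing an irreversible removal.

```shell
#!/bin/bash
# Assumed paths; adjust for your environment.
KUBECONFIG_PATH="${KUBECONFIG_PATH:-$HOME/.kube/config}"
NAMESPACE="${NAMESPACE:-cattle-system}"   # Rancher's default deployment namespace

# Build the removal command; --force skips the interactive confirmation.
cmd=(./system-tools remove --kubeconfig "${KUBECONFIG_PATH}" --namespace "${NAMESPACE}" --force)
echo "About to run: ${cmd[*]}"
# Uncomment to execute the removal:
# "${cmd[@]}"
```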
## Imported Cluster Nodes
For imported clusters, the process for removing Rancher from its nodes is a little different. You have the option of simply deleting the cluster in the Rancher UI, or you can run a script that removes Rancher components from the nodes. Both options delete the same components.
{{% tabs %}}
{{% tab "By UI / API" %}}
>**Warning:** This process will remove data from your nodes. Make sure you have created a backup of files you want to keep before executing the command, as data will be lost.
After you initiate the removal of an [imported cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#import-existing-cluster) using the Rancher UI (or API), the following events occur.
1. Rancher creates a `serviceAccount` that it uses to remove the cluster. This account is assigned the [clusterRole](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole) and [clusterRoleBinding](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) permissions, which are required to remove the cluster.
1. Using the `serviceAccount`, Rancher schedules and runs a [job](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/) that cleans the Rancher and Kubernetes components off of the node. This job also references the `serviceAccount` and its roles as dependencies, so the job deletes them before its completion.
>**Using 2.0.7 or Earlier?**
>
>These versions of Rancher do not automatically delete the `serviceAccount`, `clusterRole`, and `clusterRoleBinding` resources after the job runs. You'll have to delete them yourself.
1. Rancher is removed from the cluster nodes. However, the cluster persists, running the native version of Kubernetes.
**Result:** All components listed for imported clusters in [What Gets Removed?](#what-gets-removed) are deleted.
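On Rancher v2.0.7 and earlier, that manual cleanup might look like the following sketch. The `cattle-cleanup-*` resource names are hypothetical; list the actual leftover resources first to confirm what the job left behind, and the commands run in a dry-run mode by default.

```shell
#!/bin/bash
# Hypothetical resource names (cattle-cleanup-*): verify the real names with
# `kubectl get serviceaccounts,clusterroles,clusterrolebindings -A` first.
DRY_RUN="${DRY_RUN:-true}"

# Echo the command in dry-run mode instead of executing it.
run() {
  if [ "${DRY_RUN}" = "true" ]; then
    echo "Would run: $*"
  else
    "$@"
  fi
}

run kubectl delete clusterrolebinding cattle-cleanup-binding
run kubectl delete clusterrole cattle-cleanup-role
run kubectl --namespace cattle-system delete serviceaccount cattle-cleanup-sa
```

Set `DRY_RUN=false` only after confirming the resource names against your cluster.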
{{% /tab %}}
{{% tab "By Script" %}}
Rather than cleaning imported cluster nodes using the Rancher UI, you can run a script instead.
>**Prerequisite:**
>
@@ -94,11 +86,30 @@ Rather than cleaning
./user-cluster.sh rancher/agent:latest
```
**Result:** The script runs. All components listed for imported clusters in [What Gets Removed?](#what-gets-removed) are deleted.
{{% /tab %}}
{{% /tabs %}}
## What Gets Removed?
When cleaning cluster nodes, the following components are deleted based on the type of cluster node you're removing.
| Removed Component | [IaaS Nodes][1] | [Custom Nodes][2] | [Hosted Cluster][3] | [Imported Nodes][4] |
| ------------------------------------------------------------------------------ | --------------- | ----------------- | ------------------- | ------------------- |
| The Rancher deployment namespace (`cattle-system` by default) | ✓ | ✓ | ✓ | ✓ |
| `serviceAccount`, `clusterRoles`, and `clusterRoleBindings` labeled by Rancher | ✓ | ✓ | ✓ | ✓ |
| Labels, Annotations, and Finalizers | ✓ | ✓ | ✓ | ✓ |
| Rancher Deployment | ✓ | ✓ | ✓ | |
| Machines, clusters, projects, and user custom resource definitions (CRDs) | ✓ | ✓ | ✓ | |
| All resources created under the `management.cattle.io` API group | ✓ | ✓ | ✓ | |
| All CRDs created by Rancher v2.0.x | ✓ | ✓ | ✓ | |
[1]: {{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/
[2]: {{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/
[3]: {{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/
[4]: {{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/imported-clusters/
>**Using 2.0.7 or Earlier?**
>
>These versions of Rancher do not automatically delete the `serviceAccount`, `clusterRole`, and `clusterRoleBinding` resources after the job runs. You'll have to delete them yourself.
@@ -1,61 +0,0 @@
#!/bin/bash
# set -x
set -e
# Location of the yaml to use to deploy the cleanup job
yaml_url=https://raw.githubusercontent.com/rancher/rancher/master/cleanup/user-cluster.yml
# 120 is equal to a minute as the sleep is half a second
timeout=120
# Agent image to use in the yaml file
agent_image="$1"
show_usage() {
echo -e "Usage: $0 [AGENT_IMAGE] [FLAGS]"
echo "AGENT_IMAGE is a required argument"
echo ""
echo "Flags:"
echo -e "\t-dry-run Display the resources that would be updated without making changes"
}
if [ $# -lt 1 ]
then
show_usage
exit 1
fi
if [[ $1 == "-h" || $1 == "--help" ]]
then
show_usage
exit 0
fi
# Pull the yaml and replace the agent_image holder with the passed in image
yaml=$(curl --insecure -sfL $yaml_url | sed -e 's=agent_image='"$agent_image"'=')
if [ "$2" = "-dry-run" ]
then
# Uncomment the env var for dry-run mode
yaml=$(sed -e 's/# //' <<< "$yaml")
fi
echo "$yaml" | kubectl --kubeconfig ~/development/kube_config_cluster.yml apply -f -
# Get the pod ID to tail the logs
pod_id=$(kubectl --kubeconfig ~/development/kube_config_cluster.yml get pod -l job-name=cattle-cleanup-job -o jsonpath="{.items[0].metadata.name}")
declare -i count=0
until kubectl --kubeconfig ~/development/kube_config_cluster.yml logs $pod_id -f
do
if [ $count -gt $timeout ]
then
echo "Timeout reached, check the job by running kubectl get jobs"
exit 1
fi
sleep 0.5
count+=1
done
# Cleanup after it completes successfully
echo "$yaml" | kubectl --kubeconfig ~/development/kube_config_cluster.yml delete -f -