Remove unneeded intermediate folders

This commit is contained in:
Billy Tat
2022-08-17 10:23:03 -07:00
parent 506e174643
commit 07355d1446
1146 changed files with 0 additions and 0 deletions
---
title: FAQ
weight: 25
aliases:
- /rancher/v2.0-v2.4/en/about/
---
This FAQ is a work in progress designed to answer the questions our users most frequently ask about Rancher v2.x.
See the [Technical FAQ]({{<baseurl>}}/rancher/v2.0-v2.4/en/faq/technical/) for frequently asked technical questions.
<br/>
**Does Rancher v2.x support Docker Swarm and Mesos as environment types?**
When creating an environment in Rancher v2.x, Swarm and Mesos are no longer standard options you can select. However, both Swarm and Mesos continue to be available as Catalog applications you can deploy. It was a tough decision to make, but in the end it came down to adoption. For example, out of more than 15,000 clusters, only about 200 are running Swarm.
<br/>
**Is it possible to manage Azure Kubernetes Services with Rancher v2.x?**
Yes.
<br/>
**Does Rancher support Windows?**
As of Rancher 2.3.0, we support Windows Server 1809 containers. For details on how to set up a cluster with Windows worker nodes, refer to the section on [configuring custom clusters for Windows.]({{<baseurl>}}/rancher/v2.0-v2.4/en/cluster-provisioning/rke-clusters/windows-clusters/)
<br/>
**Does Rancher support Istio?**
As of Rancher 2.3.0, we support [Istio.]({{<baseurl>}}/rancher/v2.0-v2.4/en/cluster-admin/tools/istio/)
Furthermore, Istio is implemented in our micro-PaaS "Rio", which works on Rancher 2.x along with any CNCF-compliant Kubernetes cluster. You can read more about it [here](https://rio.io/).
<br/>
**Will Rancher v2.x support Hashicorp's Vault for storing secrets?**
Secrets management is on our roadmap but we haven't assigned it to a specific release yet.
<br/>
**Does Rancher v2.x support RKT containers as well?**
At this time, we only support Docker.
<br/>
**Does Rancher v2.x support Calico, Contiv, Contrail, Flannel, Weave net, etc., for embedded and imported Kubernetes?**
Out-of-the-box, Rancher provides the following CNI network providers for Kubernetes clusters: Canal, Flannel, Calico and Weave (Weave is available as of v2.2.0). Always refer to the [Rancher Support Matrix](https://rancher.com/support-maintenance-terms/) for details about what is officially supported.
<br/>
**Are you planning on supporting Traefik for existing setups?**
We don't currently plan on providing embedded Traefik support, but we're still exploring load-balancing approaches.
<br/>
**Can I import OpenShift Kubernetes clusters into v2.x?**
Our goal is to run any upstream Kubernetes clusters. Therefore, Rancher v2.x should work with OpenShift, but we haven't tested it yet.
<br/>
**Are you going to integrate Longhorn?**
Yes. Longhorn was on a bit of a hiatus while we were working on v2.0. We plan to re-engage on the project.
---
title: Installing and Configuring kubectl
weight: 100
---
`kubectl` is a CLI utility for running commands against Kubernetes clusters. It's required for many maintenance and administrative tasks in Rancher 2.x.
### Installation
See [kubectl Installation](https://kubernetes.io/docs/tasks/tools/install-kubectl/) for installation on your operating system.
### Configuration
When you create a Kubernetes cluster with RKE, RKE creates a `kube_config_rancher-cluster.yml` in the local directory that contains credentials to connect to your new cluster with tools like `kubectl` or `helm`.
You can copy this file to `$HOME/.kube/config`, or, if you are working with multiple Kubernetes clusters, set the `KUBECONFIG` environment variable to the path of `kube_config_rancher-cluster.yml`.
```
export KUBECONFIG=$(pwd)/kube_config_rancher-cluster.yml
```
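If you work with several clusters, `KUBECONFIG` also accepts a colon-separated list of paths. This is a sketch; both paths are placeholders for wherever your kubeconfig files actually live:

```
# kubectl merges all files listed in KUBECONFIG; for conflicting entries,
# the file listed first wins
export KUBECONFIG=$(pwd)/kube_config_rancher-cluster.yml:$HOME/.kube/config
echo "$KUBECONFIG"
```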
Test your connectivity with `kubectl` and see if you can get the list of nodes back.
```
kubectl get nodes
NAME STATUS ROLES AGE VERSION
165.227.114.63 Ready controlplane,etcd,worker 11m v1.10.1
165.227.116.167 Ready controlplane,etcd,worker 11m v1.10.1
165.227.127.226 Ready controlplane,etcd,worker 11m v1.10.1
```
---
title: Container Network Interface (CNI) Providers
description: Learn about Container Network Interface (CNI), the CNI providers Rancher provides, the features they offer, and how to choose a provider for you
weight: 2300
---
## What is CNI?
CNI (Container Network Interface), a [Cloud Native Computing Foundation project](https://cncf.io/), consists of a specification and libraries for writing plugins to configure network interfaces in Linux containers, along with a number of plugins. CNI concerns itself only with network connectivity of containers and removing allocated resources when the container is deleted.
Kubernetes uses CNI as an interface between network providers and Kubernetes pod networking.
![CNI Logo]({{<baseurl>}}/img/rancher/cni-logo.png)
For more information visit [CNI GitHub project](https://github.com/containernetworking/cni).
### What Network Models are Used in CNI?
CNI network providers implement their network fabric using either an encapsulated network model such as Virtual Extensible Lan ([VXLAN](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#vxlan)) or an unencapsulated network model such as Border Gateway Protocol ([BGP](https://en.wikipedia.org/wiki/Border_Gateway_Protocol)).
#### What is an Encapsulated Network?
This network model provides a logical Layer 2 (L2) network, encapsulated over the existing Layer 3 (L3) network topology that spans the Kubernetes cluster nodes. With this model you have an isolated L2 network for containers without needing route distribution, at the cost of minimal processing overhead and increased IP packet size caused by the IP header added by overlay encapsulation. Encapsulation information is distributed over UDP ports between Kubernetes workers, exchanging network control plane information about how MAC addresses can be reached. Common encapsulations used in this kind of network model are VXLAN, Internet Protocol Security (IPSec), and IP-in-IP.
In simple terms, this network model generates a kind of network bridge extended between Kubernetes workers, where pods are connected.
This network model is used when an extended L2 bridge is preferred. It is sensitive to the L3 network latency between the Kubernetes workers. If datacenters are in distinct geolocations, make sure there is low latency between them to avoid potential network segmentation.
CNI network providers using this network model include Flannel, Canal, and Weave.
![Encapsulated Network]({{<baseurl>}}/img/rancher/encapsulated-network.png)
#### What is an Unencapsulated Network?
This network model provides an L3 network to route packets between containers. This model doesn't generate an isolated L2 network or incur encapsulation overhead. These benefits come at the cost of Kubernetes workers having to manage any route distribution that's needed. Instead of using IP headers for encapsulation, this network model uses a network protocol between Kubernetes workers, such as [BGP](https://en.wikipedia.org/wiki/Border_Gateway_Protocol), to distribute the routing information needed to reach pods.
In simple terms, this network model generates a kind of network router extended between Kubernetes workers, which provides information about how to reach pods.
This network model is used when a routed L3 network is preferred. This mode dynamically updates routes at the OS level for Kubernetes workers. It's less sensitive to latency.
CNI network providers using this network model include Calico and Romana.
![Unencapsulated Network]({{<baseurl>}}/img/rancher/unencapsulated-network.png)
### What CNI Providers are Provided by Rancher?
Out-of-the-box, Rancher provides the following CNI network providers for Kubernetes clusters: Canal, Flannel, Calico and Weave (Weave is available as of v2.2.0). You can choose your CNI network provider when you create new Kubernetes clusters from Rancher.
#### Canal
![Canal Logo]({{<baseurl>}}/img/rancher/canal-logo.png)
Canal is a CNI network provider that gives you the best of Flannel and Calico. It allows users to easily deploy Calico and Flannel networking together as a unified networking solution, combining Calico's network policy enforcement with the rich superset of Calico (unencapsulated) and/or Flannel (encapsulated) network connectivity options.
In Rancher, Canal is the default CNI network provider combined with Flannel and VXLAN encapsulation.
Kubernetes workers should open UDP port `8472` (VXLAN) and TCP port `9099` (healthcheck). For details, refer to [the port requirements for user clusters.]({{<baseurl>}}/rancher/v2.0-v2.4/en/cluster-provisioning/node-requirements/)
{{< img "/img/rancher/canal-diagram.png" "Canal Diagram">}}
For more information, see the [Canal GitHub Page.](https://github.com/projectcalico/canal)
#### Flannel
![Flannel Logo]({{<baseurl>}}/img/rancher/flannel-logo.png)
Flannel is a simple and easy way to configure L3 network fabric designed for Kubernetes. Flannel runs a single binary agent named flanneld on each host, which is responsible for allocating a subnet lease to each host out of a larger, preconfigured address space. Flannel uses either the Kubernetes API or etcd directly to store the network configuration, the allocated subnets, and any auxiliary data (such as the host's public IP). Packets are forwarded using one of several backend mechanisms, with the default encapsulation being [VXLAN](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#vxlan).
Encapsulated traffic is unencrypted by default. Therefore, Flannel provides an experimental backend for encryption, [IPSec](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#ipsec), which uses [strongSwan](https://www.strongswan.org/) to establish encrypted IPSec tunnels between Kubernetes workers.
Kubernetes workers should open UDP port `8472` (VXLAN) and TCP port `9099` (healthcheck). See [the port requirements for user clusters]({{<baseurl>}}/rancher/v2.0-v2.4/en/cluster-provisioning/node-requirements/#networking-requirements) for more details.
![Flannel Diagram]({{<baseurl>}}/img/rancher/flannel-diagram.png)
For more information, see the [Flannel GitHub Page](https://github.com/coreos/flannel).
#### Calico
![Calico Logo]({{<baseurl>}}/img/rancher/calico-logo.png)
Calico enables networking and network policy in Kubernetes clusters across the cloud. Calico uses a pure, unencapsulated IP network fabric and policy engine to provide networking for your Kubernetes workloads. Workloads can communicate across both cloud and on-premises infrastructure using BGP.
Calico also provides a stateless IP-in-IP encapsulation mode that can be used, if necessary. Calico also offers policy isolation, allowing you to secure and govern your Kubernetes workloads using advanced ingress and egress policies.
Kubernetes workers should open TCP port `179` (BGP). See [the port requirements for user clusters]({{<baseurl>}}/rancher/v2.0-v2.4/en/cluster-provisioning/node-requirements/#networking-requirements) for more details.
![Calico Diagram]({{<baseurl>}}/img/rancher/calico-diagram.svg)
For more information, see the following pages:
- [Project Calico Official Site](https://www.projectcalico.org/)
- [Project Calico GitHub Page](https://github.com/projectcalico/calico)
#### Weave
![Weave Logo]({{<baseurl>}}/img/rancher/weave-logo.png)
_Available as of v2.2.0_
Weave enables networking and network policy in Kubernetes clusters across the cloud. Additionally, it supports encrypting traffic between peers.
Kubernetes workers should open TCP port `6783` (control port), UDP port `6783` and UDP port `6784` (data ports). See the [port requirements for user clusters]({{<baseurl>}}/rancher/v2.0-v2.4/en/cluster-provisioning/node-requirements/#networking-requirements) for more details.
For more information, see the following pages:
- [Weave Net Official Site](https://www.weave.works/)
### CNI Features by Provider
The following table summarizes the different features available for each CNI network provider provided by Rancher.
| Provider | Network Model | Route Distribution | Network Policies | Mesh | External Datastore | Encryption | Ingress/Egress Policies |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| Canal | Encapsulated (VXLAN) | No | Yes | No | K8S API | No | Yes |
| Flannel | Encapsulated (VXLAN) | No | No | No | K8S API | No | No |
| Calico | Encapsulated (VXLAN,IPIP) OR Unencapsulated | Yes | Yes | Yes | Etcd and K8S API | No | Yes |
| Weave | Encapsulated | Yes | Yes | Yes | No | Yes | Yes |
- Network Model: Encapsulated or unencapsulated. For more information, see [What Network Models are Used in CNI?](#what-network-models-are-used-in-cni)
- Route Distribution: An exterior gateway protocol designed to exchange routing and reachability information on the Internet. BGP can assist with pod-to-pod networking between clusters. This feature is a must for unencapsulated CNI network providers, and it is typically implemented with BGP. If you plan to build clusters split across network segments, route distribution is a nice-to-have feature.
- Network Policies: Kubernetes offers functionality to enforce rules about which services can communicate with each other using network policies. This feature is stable as of Kubernetes v1.7 and is ready to use with certain networking plugins.
- Mesh: This feature allows service-to-service networking communication between distinct Kubernetes clusters.
- External Datastore: CNI network providers with this feature need an external datastore for their data.
- Encryption: This feature allows encrypted and secure network control and data planes.
- Ingress/Egress Policies: This feature allows you to manage routing control for both Kubernetes and non-Kubernetes communications.
#### CNI Community Popularity
The following table summarizes different GitHub metrics to give you an idea of each project's popularity and activity. This data was collected in January 2020.
| Provider | Project | Stars | Forks | Contributors |
| ---- | ---- | ---- | ---- | ---- |
| Canal | https://github.com/projectcalico/canal | 614 | 89 | 19 |
| Flannel | https://github.com/coreos/flannel | 4977 | 1.4k | 140 |
| Calico | https://github.com/projectcalico/calico | 1534 | 429 | 135 |
| Weave | https://github.com/weaveworks/weave/ | 5737 | 559 | 73 |
<br/>
### Which CNI Provider Should I Use?
It depends on your project needs. There are many different providers, each with various features and options. There isn't one provider that meets everyone's needs.
As of Rancher v2.0.7, Canal is the default CNI network provider. We recommend it for most use cases. It provides encapsulated networking for containers with Flannel, while adding Calico network policies that can provide project/namespace isolation in terms of networking.
### How can I configure a CNI network provider?
Please see [Cluster Options]({{<baseurl>}}/rancher/v2.0-v2.4/en/cluster-provisioning/rke-clusters/options/) on how to configure a network provider for your cluster. For more advanced configuration options, please see how to configure your cluster using a [Config File]({{<baseurl>}}/rancher/v2.0-v2.4/en/cluster-provisioning/rke-clusters/options/#cluster-config-file) and the options for [Network Plug-ins]({{<baseurl>}}/rke/latest/en/config-options/add-ons/network-plugins/).
---
title: Networking
weight: 8005
---
Networking FAQs
- [CNI Providers]({{<baseurl>}}/rancher/v2.0-v2.4/en/faq/networking/cni-providers/)
---
title: Rancher is No Longer Needed
weight: 8010
aliases:
- /rancher/v2.0-v2.4/en/installation/removing-rancher/cleaning-cluster-nodes/
- /rancher/v2.0-v2.4/en/installation/removing-rancher/
- /rancher/v2.0-v2.4/en/admin-settings/removing-rancher/
- /rancher/v2.0-v2.4/en/admin-settings/removing-rancher/rancher-cluster-nodes/
---
This page is intended to answer questions about what happens if you don't want Rancher anymore, if you don't want a cluster to be managed by Rancher anymore, or if the Rancher server is deleted.
- [If the Rancher server is deleted, what happens to the workloads in my downstream clusters?](#if-the-rancher-server-is-deleted-what-happens-to-the-workloads-in-my-downstream-clusters)
- [If the Rancher server is deleted, how do I access my downstream clusters?](#if-the-rancher-server-is-deleted-how-do-i-access-my-downstream-clusters)
- [What if I don't want Rancher anymore?](#what-if-i-don-t-want-rancher-anymore)
- [What if I don't want my imported cluster managed by Rancher?](#what-if-i-don-t-want-my-imported-cluster-managed-by-rancher)
- [What if I don't want my RKE cluster or hosted Kubernetes cluster managed by Rancher?](#what-if-i-don-t-want-my-rke-cluster-or-hosted-kubernetes-cluster-managed-by-rancher)
### If the Rancher server is deleted, what happens to the workloads in my downstream clusters?
If Rancher is ever deleted or unrecoverable, all workloads in the downstream Kubernetes clusters managed by Rancher will continue to function as normal.
### If the Rancher server is deleted, how do I access my downstream clusters?
The capability to access a downstream cluster without Rancher depends on the type of cluster and the way that the cluster was created. To summarize:
- **Imported clusters:** The cluster will be unaffected and you can access the cluster using the same methods that you did before the cluster was imported into Rancher.
- **Hosted Kubernetes clusters:** If you created the cluster in a cloud-hosted Kubernetes provider such as EKS, GKE, or AKS, you can continue to manage the cluster using your provider's cloud credentials.
- **RKE clusters:** To access an [RKE cluster,]({{<baseurl>}}/rancher/v2.0-v2.4/en/cluster-provisioning/rke-clusters/) the cluster must have the [authorized cluster endpoint]({{<baseurl>}}/rancher/v2.0-v2.4/en/overview/architecture/#4-authorized-cluster-endpoint) enabled, and you must have already downloaded the cluster's kubeconfig file from the Rancher UI. (The authorized cluster endpoint is enabled by default for RKE clusters.) With this endpoint, you can access your cluster with kubectl directly instead of communicating through the Rancher server's [authentication proxy.]({{<baseurl>}}/rancher/v2.0-v2.4/en/overview/architecture/#1-the-authentication-proxy) For instructions on how to configure kubectl to use the authorized cluster endpoint, refer to the section about directly accessing clusters with [kubectl and the kubeconfig file.]({{<baseurl>}}/rancher/v2.0-v2.4/en/cluster-admin/cluster-access/kubectl/#authenticating-directly-with-a-downstream-cluster) These clusters will use a snapshot of the authentication as it was configured when Rancher was removed.
### What if I don't want Rancher anymore?
If you [installed Rancher on a Kubernetes cluster,]({{<baseurl>}}/rancher/v2.0-v2.4/en/installation/install-rancher-on-k8s/) remove Rancher by using the [System Tools]({{<baseurl>}}/rancher/v2.0-v2.4/en/system-tools/) with the `remove` subcommand.
If you installed Rancher with Docker, you can uninstall Rancher by removing the single Docker container that it runs in.
Imported clusters will not be affected by Rancher being removed. For other types of clusters, refer to the section on [accessing downstream clusters when Rancher is removed.](#if-the-rancher-server-is-deleted-how-do-i-access-my-downstream-clusters)
### What if I don't want my imported cluster managed by Rancher?
If an imported cluster is deleted from the Rancher UI, the cluster is detached from Rancher, leaving it intact and accessible by the same methods that were used to access it before it was imported into Rancher.
To detach the cluster,
1. From the **Global** view in Rancher, go to the **Clusters** tab.
2. Go to the imported cluster that should be detached from Rancher and click **&#8942; > Delete.**
3. Click **Delete.**
**Result:** The imported cluster is detached from Rancher and functions normally outside of Rancher.
### What if I don't want my RKE cluster or hosted Kubernetes cluster managed by Rancher?
At this time, there is no functionality to detach these clusters from Rancher. In this context, "detach" is defined as the ability to remove Rancher components from the cluster and manage access to the cluster independently of Rancher.
The capability to manage these clusters without Rancher is being tracked in this [issue.](https://github.com/rancher/rancher/issues/25234)
For information about how to access clusters if the Rancher server is deleted, refer to [this section.](#if-the-rancher-server-is-deleted-how-do-i-access-my-downstream-clusters)
---
title: Security
weight: 8007
---
**Is there a Hardening Guide?**
The Hardening Guide is now located in the main [Security]({{<baseurl>}}/rancher/v2.0-v2.4/en/security/) section.
<br/>
**What are the results of Rancher's Kubernetes cluster when it is CIS benchmarked?**
We have run the CIS Kubernetes benchmark against a hardened Rancher Kubernetes cluster. The results of that assessment can be found in the main [Security]({{<baseurl>}}/rancher/v2.0-v2.4/en/security/) section.
---
title: Technical
weight: 8006
---
### How can I reset the administrator password?
Docker Install:
```
$ docker exec -ti <container_id> reset-password
New password for default administrator (user-xxxxx):
<new_password>
```
Kubernetes install (Helm):
```
$ KUBECONFIG=./kube_config_rancher-cluster.yml
$ kubectl --kubeconfig $KUBECONFIG -n cattle-system exec $(kubectl --kubeconfig $KUBECONFIG -n cattle-system get pods -l app=rancher | grep '1/1' | head -1 | awk '{ print $1 }') -- reset-password
New password for default administrator (user-xxxxx):
<new_password>
```
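The pod-selection pipeline above can be illustrated stand-alone. The canned `kubectl get pods` output below is made up, but it shows how `grep '1/1' | head -1 | awk '{ print $1 }'` picks the name of the first ready pod:

```
# Fake `kubectl get pods` output (pod names are placeholders)
printf 'NAME          READY  STATUS   RESTARTS  AGE\nrancher-6b774 1/1    Running  0         5d\nrancher-s8tkq 0/1    Pending  0         5d\n' \
  | grep '1/1' | head -1 | awk '{ print $1 }'
# prints rancher-6b774
```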
> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8**
>
>If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on]({{<baseurl>}}/rancher/v2.0-v2.4/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
Kubernetes install (RKE add-on):
```
$ KUBECONFIG=./kube_config_rancher-cluster.yml
$ kubectl --kubeconfig $KUBECONFIG exec -n cattle-system $(kubectl --kubeconfig $KUBECONFIG get pods -n cattle-system -o json | jq -r '.items[] | select(.spec.containers[].name=="cattle-server") | .metadata.name') -- reset-password
New password for default administrator (user-xxxxx):
<new_password>
```
### I deleted/deactivated the last admin, how can I fix it?
Docker Install:
```
$ docker exec -ti <container_id> ensure-default-admin
New default administrator (user-xxxxx)
New password for default administrator (user-xxxxx):
<new_password>
```
Kubernetes install (Helm):
```
$ KUBECONFIG=./kube_config_rancher-cluster.yml
$ kubectl --kubeconfig $KUBECONFIG -n cattle-system exec $(kubectl --kubeconfig $KUBECONFIG -n cattle-system get pods -l app=rancher | grep '1/1' | head -1 | awk '{ print $1 }') -- ensure-default-admin
New password for default administrator (user-xxxxx):
<new_password>
```
> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8**
>
>If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on]({{<baseurl>}}/rancher/v2.0-v2.4/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
Kubernetes install (RKE add-on):
```
$ KUBECONFIG=./kube_config_rancher-cluster.yml
$ kubectl --kubeconfig $KUBECONFIG exec -n cattle-system $(kubectl --kubeconfig $KUBECONFIG get pods -n cattle-system -o json | jq -r '.items[] | select(.spec.containers[].name=="cattle-server") | .metadata.name') -- ensure-default-admin
New password for default admin user (user-xxxxx):
<new_password>
```
### How can I enable debug logging?
See [Troubleshooting: Logging]({{<baseurl>}}/rancher/v2.0-v2.4/en/troubleshooting/logging/)
### My ClusterIP does not respond to ping
A ClusterIP is a virtual IP, which will not respond to ping. The best way to test whether the ClusterIP is configured correctly is to use `curl` against the IP and port and see if it responds.
### Where can I manage Node Templates?
Node Templates can be accessed by opening your account menu (top right) and selecting `Node Templates`.
### Why is my Layer-4 Load Balancer in `Pending` state?
The Layer-4 Load Balancer is created as `type: LoadBalancer`. In Kubernetes, this needs a cloud provider or controller that can satisfy these requests, otherwise these will be in `Pending` state forever. More information can be found on [Cloud Providers]({{<baseurl>}}/rancher/v2.0-v2.4/en/cluster-provisioning/rke-clusters/options/cloud-providers/) or [Create External Load Balancer](https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/)
### Where is the state of Rancher stored?
- Docker Install: in the embedded etcd of the `rancher/rancher` container, located at `/var/lib/rancher`.
- Kubernetes install: in the etcd of the RKE cluster created to run Rancher.
### How are the supported Docker versions determined?
We follow the validated Docker versions for upstream Kubernetes releases. The validated versions can be found under [External Dependencies](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#external-dependencies) in the Kubernetes release CHANGELOG.md.
### How can I access nodes created by Rancher?
SSH keys to access the nodes created by Rancher can be downloaded via the **Nodes** view. Choose the node which you want to access and click on the vertical &#8942; button at the end of the row, and choose **Download Keys** as shown in the picture below.
![Download Keys]({{<baseurl>}}/img/rancher/downloadsshkeys.png)
Unzip the downloaded zip file and use the `id_rsa` file to connect to your host. Be sure to use the correct username (`rancher` or `docker` for RancherOS, `ubuntu` for Ubuntu, `ec2-user` for Amazon Linux).
```
$ ssh -i id_rsa user@ip_of_node
```
### How can I automate task X in Rancher?
The UI consists of static files and works based on responses from the API. That means every action/task that you can execute in the UI can be automated via the API. There are two ways to do this:
* Visit `https://your_rancher_ip/v3` and browse the API options.
* Capture the API calls made while using the UI ([Chrome Developer Tools](https://developers.google.com/web/tools/chrome-devtools/#network) is most commonly used for this, but you can use any tool you like)
### The IP address of a node changed, how can I recover?
A node is required to have a static IP configured (or a reserved IP via DHCP). If the IP of a node has changed, you will have to remove it from the cluster and re-add it. After it is removed, Rancher will update the cluster to the correct state. When the cluster is no longer in the `Provisioning` state, the node has been removed from the cluster.
Because the IP address of the node changed, Rancher lost its connection to the node and will be unable to clean the node properly. See [Cleaning cluster nodes]({{<baseurl>}}/rancher/v2.0-v2.4/en/faq/cleaning-cluster-nodes/) to clean the node.
When the node has been removed from the cluster and cleaned, you can re-add it to the cluster.
### How can I add additional arguments/binds/environment variables to Kubernetes components in a Rancher Launched Kubernetes cluster?
You can add additional arguments/binds/environment variables via the [Config File]({{<baseurl>}}/rancher/v2.0-v2.4/en/cluster-provisioning/rke-clusters/options/#cluster-config-file) option in Cluster Options. For more information, see the [Extra Args, Extra Binds, and Extra Environment Variables]({{<baseurl>}}/rke/latest/en/config-options/services/services-extras/) in the RKE documentation or browse the [Example Cluster.ymls]({{<baseurl>}}/rke/latest/en/example-yamls/).
### How do I check if my certificate chain is valid?
Use the `openssl verify` command to validate your certificate chain:
>**Note:** Configure `SSL_CERT_DIR` and `SSL_CERT_FILE` to a dummy location to make sure the OS installed certificates are not used when verifying manually.
```
SSL_CERT_DIR=/dummy SSL_CERT_FILE=/dummy openssl verify -CAfile ca.pem rancher.yourdomain.com.pem
rancher.yourdomain.com.pem: OK
```
If you receive the error `unable to get local issuer certificate`, the chain is incomplete. This usually means that there is an intermediate CA certificate that issued your server certificate. If you already have this certificate, you can use it in the verification of the certificate as shown below:
```
SSL_CERT_DIR=/dummy SSL_CERT_FILE=/dummy openssl verify -CAfile ca.pem -untrusted intermediate.pem rancher.yourdomain.com.pem
rancher.yourdomain.com.pem: OK
```
If you have successfully verified your certificate chain, you should include needed intermediate CA certificates in the server certificate to complete the certificate chain for any connection made to Rancher (for example, by the Rancher agent). The order of the certificates in the server certificate file should be first the server certificate itself (contents of `rancher.yourdomain.com.pem`), followed by intermediate CA certificate(s) (contents of `intermediate.pem`).
```
-----BEGIN CERTIFICATE-----
%YOUR_CERTIFICATE%
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
%YOUR_INTERMEDIATE_CERTIFICATE%
-----END CERTIFICATE-----
```
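You can rehearse this whole flow end-to-end with throwaway certificates before touching your real ones. This is a sketch: every filename and subject below is a placeholder, and it assumes the `openssl` CLI is installed:

```
# Create a throwaway root CA, intermediate CA, and server certificate
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.pem \
  -subj "/CN=Demo Root CA" -days 1
openssl req -newkey rsa:2048 -nodes -keyout int.key -out int.csr \
  -subj "/CN=Demo Intermediate CA"
printf 'basicConstraints=CA:TRUE\n' > int.ext
openssl x509 -req -in int.csr -CA ca.pem -CAkey ca.key -CAcreateserial \
  -out intermediate.pem -days 1 -extfile int.ext
openssl req -newkey rsa:2048 -nodes -keyout srv.key -out srv.csr \
  -subj "/CN=rancher.yourdomain.com"
openssl x509 -req -in srv.csr -CA intermediate.pem -CAkey int.key \
  -CAcreateserial -out rancher.yourdomain.com.pem -days 1
# Verify the chain (should report OK), then build the combined file:
# server certificate first, intermediate second
SSL_CERT_DIR=/dummy SSL_CERT_FILE=/dummy openssl verify -CAfile ca.pem \
  -untrusted intermediate.pem rancher.yourdomain.com.pem
cat rancher.yourdomain.com.pem intermediate.pem > fullchain.pem
```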
If you still get errors during verification, you can retrieve the subject and the issuer of the server certificate using the following command:
```
openssl x509 -noout -subject -issuer -in rancher.yourdomain.com.pem
subject= /C=GB/ST=England/O=Alice Ltd/CN=rancher.yourdomain.com
issuer= /C=GB/ST=England/O=Alice Ltd/CN=Alice Intermediate CA
```
### How do I check `Common Name` and `Subject Alternative Names` in my server certificate?
Although technically only an entry in `Subject Alternative Names` is required, having the hostname both in `Common Name` and as an entry in `Subject Alternative Names` gives you maximum compatibility with older browsers/applications.
Check `Common Name`:
```
openssl x509 -noout -subject -in cert.pem
subject= /CN=rancher.my.org
```
Check `Subject Alternative Names`:
```
openssl x509 -noout -in cert.pem -text | grep DNS
DNS:rancher.my.org
```
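To see both checks succeed on a certificate you control, you can mint a throwaway self-signed certificate that carries the hostname in both places. This is a sketch: the hostname is a placeholder, and the `-addext` flag requires OpenSSL 1.1.1 or newer:

```
# Self-signed test certificate with the hostname in CN and as a SAN entry
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
  -subj "/CN=rancher.my.org" -days 1 \
  -addext "subjectAltName=DNS:rancher.my.org"
openssl x509 -noout -subject -in cert.pem
openssl x509 -noout -in cert.pem -text | grep DNS
```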
### Why does it take 5+ minutes for a pod to be rescheduled when a node has failed?
This is due to a combination of the following default Kubernetes settings:
* kubelet
* `node-status-update-frequency`: Specifies how often kubelet posts node status to master (default 10s)
* kube-controller-manager
* `node-monitor-period`: The period for syncing NodeStatus in NodeController (default 5s)
* `node-monitor-grace-period`: Amount of time which we allow running Node to be unresponsive before marking it unhealthy (default 40s)
* `pod-eviction-timeout`: The grace period for deleting pods on failed nodes (default 5m0s)
See [Kubernetes: kubelet](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/) and [Kubernetes: kube-controller-manager](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/) for more information on these settings.
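As a back-of-the-envelope check, the defaults above account for the delay: a node must be unresponsive for `node-monitor-grace-period` before it is marked unhealthy, and its pods then wait `pod-eviction-timeout` before being deleted and rescheduled:

```
# 40s grace period + 300s eviction timeout = 340s, a bit over 5 minutes
echo $((40 + 300))
```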
In Kubernetes v1.13, the `TaintBasedEvictions` feature is enabled by default. See [Kubernetes: Taint based Evictions](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/#taint-based-evictions) for more information.
* kube-apiserver (Kubernetes v1.13 and up)
* `default-not-ready-toleration-seconds`: Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.
* `default-unreachable-toleration-seconds`: Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.
### Can I use keyboard shortcuts in the UI?
Yes, most parts of the UI can be reached using keyboard shortcuts. For an overview of the available shortcuts, press `?` anywhere in the UI.
---
title: Telemetry
weight: 8008
---
### What is Telemetry?
Telemetry collects aggregate information about the size of Rancher installations, the versions of components used, and which features are used. This information is used by Rancher Labs to help make the product better and is not shared with third parties.
### What information is collected?
No specific identifying information like usernames, passwords, or the names or addresses of user resources will ever be collected.
The primary things collected include:
- Aggregate counts (smallest, average, largest, total) of nodes per-cluster and their size (e.g. CPU cores & RAM).
- Aggregate counts of logical resources like Clusters, Projects, Namespaces, and Pods.
- Counts of what driver was used to deploy clusters and nodes (e.g. GKE vs EC2 vs Imported vs Custom).
- Versions of Kubernetes components, Operating Systems and Docker that are deployed on nodes.
- Whether some optional components are enabled or not (e.g. which auth providers are used).
- The image name & version of Rancher that is running.
- A unique randomly-generated identifier for this installation.
### Can I see the information that is being sent?
If Telemetry is enabled, you can go to `https://<your rancher server>/v1-telemetry` in your installation to see the current data.
If Telemetry is not enabled, the process that collects the data is not running, so there is nothing being collected to look at.
### How do I turn it on or off?
After initial setup, an administrator can go to the `Settings` page in the `Global` section of the UI and click Edit to change the `telemetry-opt` setting to either `in` or `out`.
---
title: Questions about Upgrading to Rancher v2.x
weight: 1
aliases:
- /rancher/v2.x/en/faq/upgrades-to-2x/
---
This page contains frequently asked questions about the changes between Rancher v1.x and v2.x, and how to upgrade from Rancher v1.x to v2.x.
# Kubernetes
**What does it mean when you say Rancher v2.x is built on Kubernetes?**
Rancher v2.x is a complete container management platform built 100% on Kubernetes, leveraging its custom resource and controller framework. All features are implemented as CustomResourceDefinitions (CRDs), which extend the existing Kubernetes API and can leverage native features such as RBAC.
<br/>
**Do you plan to implement upstream Kubernetes, or continue to work on your own fork?**
We're still going to provide our distribution when you select the default option of having us create your Kubernetes cluster, but it will be very close to upstream.
<br/>
**Does this release mean that we need to re-train our support staff in Kubernetes?**
Yes. Rancher will offer the native Kubernetes functionality via `kubectl`, but will also offer our own UI dashboard to let you deploy Kubernetes workloads without having to understand the full complexity of Kubernetes. However, to fully leverage Kubernetes, we do recommend understanding Kubernetes. We plan to improve our UX with subsequent releases to make Kubernetes easier to use.
<br/>
**Will Rancher Compose create a Kubernetes pod? Do we have to learn both now? We usually work with compose files directly on the filesystem, not through the UI.**
No. Unfortunately, the differences were significant enough that we cannot support Rancher Compose anymore in v2.x. We will provide both a tool and guides to help with this migration.
<br/>
**If we use native Kubernetes YAML files for creating resources, should we expect that to work, or do we need to use Rancher/Docker Compose files to deploy infrastructure?**
Absolutely. Native Kubernetes YAML files work as expected.
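For instance, a standard Kubernetes manifest can be applied directly with `kubectl apply -f` against a Rancher-managed cluster. A minimal sketch (the names and image are illustrative):

```
# example-nginx.yaml (illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-nginx
  template:
    metadata:
      labels:
        app: example-nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17
```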
# Cattle
**How does Rancher v2.x affect Cattle?**
Cattle is not supported in v2.x, as Rancher has been re-architected to be based on Kubernetes. You can, however, expect that the majority of the Cattle features you use will exist and function similarly on Kubernetes. We will develop migration tools in Rancher v2.1 to help you transform your existing Rancher Compose files into Kubernetes YAML files.
<br/>
**Can I migrate existing Cattle workloads into Kubernetes?**
Yes. In the upcoming Rancher v2.1 release we will provide a tool to help translate existing Cattle workloads in Compose format to Kubernetes YAML format. You will then be able to deploy those workloads on the v2.x platform.
# Feature Changes
**Can we still add our own infrastructure services, which had a separate view/filter in 1.6.x?**
Yes. You can manage Kubernetes storage, networking, and its vast ecosystem of add-ons.
<br/>
**Are there changes to default roles available now or going forward? Will the Kubernetes alignment impact plans for roles/RBAC?**
The default roles will be expanded to accommodate the new Rancher 2.x features, and will also take advantage of the Kubernetes RBAC (Role-Based Access Control) capabilities to give you more flexibility.
<br/>
**Will there be any functions like network policies to separate a front-end container from a back-end container through some kind of firewall in v2.x?**
Yes. You can do so by leveraging Kubernetes' network policies.
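As a sketch, a Kubernetes `NetworkPolicy` that only allows front-end pods to reach back-end pods could look like the following (the labels and port are illustrative, and enforcement requires a CNI plugin that supports network policies, such as Canal or Calico):

```
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  # Applies to back-end pods; ingress from anything else is denied.
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
```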
<br/>
**What about the CLI? Will that work the same way with the same features?**
Yes. Definitely.
# Environments & Clusters
**Can I still create templates for environments and clusters?**
Starting with v2.0, the concept of an environment has been replaced by the Kubernetes cluster, as only the Kubernetes orchestration engine is supported going forward.
RKE cluster templates are on our roadmap for v2.x. Please refer to our release notes and documentation for the features that we currently support.
<br/>
**Can you still add an existing host to an environment? (i.e. not provisioned directly from Rancher)**
Yes. We still provide you with the same way of executing our Rancher agents directly on hosts.
# Upgrading/Migrating
**How would the migration from v1.x to v2.x work?**
Due to the technical difficulty of transforming a Docker container into a pod running in Kubernetes, upgrading requires users to "replay" those workloads from v1.x into new v2.x environments. We plan to ship a tool in v2.1 to translate existing Rancher Compose files into Kubernetes YAML files. You will then be able to deploy those workloads on the v2.x platform.
<br/>
**Is it possible to upgrade from Rancher v1.x to v2.x without any disruption to Cattle and Kubernetes clusters?**
At this time, we are still exploring this scenario and taking feedback. We anticipate that you will need to launch a new Rancher instance and then relaunch your workloads on v2.x. Once you've moved to v2.x, upgrades will be done in place, as they are in v1.6.
# Support
**Are you planning some long-term support releases for Rancher v1.6?**
That is definitely the focus of the v1.6 stream. We're continuing to improve that release, fix bugs, and maintain it. New releases of the v1.6 stream are announced in the [Rancher forums.](https://forums.rancher.com/c/announcements) The Rancher wiki contains the [v1.6 release notes.](https://github.com/rancher/rancher/wiki/Rancher-1.6)