Mirror of https://github.com/rancher/rancher-docs.git, synced 2026-05-14 00:53:22 +00:00
Update docs related to Resources menu change (#1830)

* Update docs about workloads
* Update docs related to Resources menu change
* Edit docs related to resource menu update
* Edit HPA page
@@ -101,7 +101,7 @@ Workload metrics display the hardware utilization for a Kubernetes workload. You

 1. From the **Global** view, navigate to the project that you want to view workload metrics.

-1. Select **Workloads > Workloads** in the navigation bar.
+1. From the main navigation bar, choose **Resources > Workloads.** In versions prior to v2.3.0, choose **Workloads** on the main navigation bar.

 1. Select a specific workload and click on its name.
@@ -15,9 +15,9 @@ Rancher's dashboards are available at multiple locations:

 - **Cluster Dashboard**: From the **Global** view, navigate to the cluster.
 - **Node Metrics**: From the **Global** view, navigate to the cluster. Select **Nodes**. Find the individual node and click on its name. Click **Node Metrics.**
-- **Workload Metrics**: From the **Global** view, navigate to the project. Select **Workloads > Workloads**. Find the individual workload and click on its name. Click **Workload Metrics.**
+- **Workload Metrics**: From the **Global** view, navigate to the project. From the main navigation bar, choose **Resources > Workloads.** (In versions prior to v2.3.0, choose **Workloads** on the main navigation bar.) Find the individual workload and click on its name. Click **Workload Metrics.**
 - **Pod Metrics**: From the **Global** view, navigate to the project. Select **Workloads > Workloads**. Find the individual workload and click on its name. Find the individual pod and click on its name. Click **Pod Metrics.**
-- **Container Metrics**: From the **Global** view, navigate to the project. Select **Workloads > Workloads**. Find the individual workload and click on its name. Find the individual pod and click on its name. Find the individual container and click on its name. Click **Container Metrics.**
+- **Container Metrics**: From the **Global** view, navigate to the project. From the main navigation bar, choose **Resources > Workloads.** (In versions prior to v2.3.0, choose **Workloads** on the main navigation bar.) Find the individual workload and click on its name. Find the individual pod and click on its name. Find the individual container and click on its name. Click **Container Metrics.**

 Prometheus metrics are displayed and are denoted with the Grafana icon. If you click on the icon, the metrics will open a new tab in Grafana.
@@ -10,14 +10,14 @@ _Persistent Volume Claims_ (or PVCs) are objects that request storage resources

 - Rancher lets you create as many PVCs within a project as you'd like.
 - You can mount PVCs to a deployment as you create it, or later after it's running.
-- Each Rancher project contains a list of PVCs that you've created, available from the **Volumes** tab. You can reuse these PVCs when creating deployments in the future.
+- Each Rancher project contains a list of PVCs that you've created, available from **Resources > Workloads > Volumes.** (In versions prior to v2.3.0, the PVCs are in the **Volumes** tab.) You can reuse these PVCs when creating deployments in the future.

 >**Prerequisite:**
 > You must have a pre-provisioned [persistent volume]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/volumes-and-storage/#adding-a-persistent-volume) available for use, or you must have a [storage class created]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/volumes-and-storage/#adding-storage-classes) that dynamically creates a volume upon request from the workload.

 1. From the **Global** view, open the project containing a workload that you want to add a PVC to.

-1. From the main menu, make sure that **Workloads** is selected. Then select the **Volumes** tab. Click **Add Volume**.
+1. From the main navigation bar, choose **Resources > Workloads.** (In versions prior to v2.3.0, choose **Workloads** on the main navigation bar.) Then select the **Volumes** tab. Click **Add Volume**.

 1. Enter a **Name** for the volume claim.
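A PVC created through the **Add Volume** form corresponds to a standard Kubernetes `PersistentVolumeClaim` object. As a rough sketch (the claim name, namespace, and storage class name here are hypothetical placeholders, not values from these docs):

```yaml
# Hypothetical example: requests 10Gi of ReadWriteOnce storage from a
# storage class named "standard". Names are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 10Gi
```

Applying a manifest like this with `kubectl apply -f pvc.yaml` produces the same kind of object that the UI form creates.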
@@ -67,7 +67,7 @@ kubectl -n cattle-system logs -f rancher-84d886bdbb-s4s69 rancher-audit-log

 

-1. From the **Workloads** tab, find the `cattle-system` namespace. Open the `rancher` workload by clicking its link.
+1. From the main navigation bar, choose **Resources > Workloads.** (In versions prior to v2.3.0, choose **Workloads** on the main navigation bar.) Find the `cattle-system` namespace. Open the `rancher` workload by clicking its link.

 
@@ -18,8 +18,8 @@ The way that you manage HPAs is different based on your version of the Kubernete

 HPAs are also managed differently based on your version of Rancher:

-- **For Rancher Prior to v2.3.0-alpha5:** To manage and configure HPAs, you need to use `kubectl`. For instructions on how to create, manage, and scale HPAs, refer to [Managing HPAs with kubectl]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl).
-- **For Rancher v2.3.0-alpha5+**: You can create, manage, and delete HPAs using the Rancher UI. From the Rancher UI you can configure the HPA to scale based on CPU and memory utilization. For more information, refer to [Managing HPAs with the Rancher UI]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui). To scale the HPA based on custom metrics, you still need to use `kubectl`. For more information, refer to [Configuring HPA to Scale Using Custom Metrics with Prometheus]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/#configuring-hpa-to-scale-using-custom-metrics-with-prometheus).
+- **For Rancher v2.3.0+**: You can create, manage, and delete HPAs using the Rancher UI. From the Rancher UI you can configure the HPA to scale based on CPU and memory utilization. For more information, refer to [Managing HPAs with the Rancher UI]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui). To scale the HPA based on custom metrics, you still need to use `kubectl`. For more information, refer to [Configuring HPA to Scale Using Custom Metrics with Prometheus]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/#configuring-hpa-to-scale-using-custom-metrics-with-prometheus).
+- **For Rancher Prior to v2.3.0:** To manage and configure HPAs, you need to use `kubectl`. For instructions on how to create, manage, and scale HPAs, refer to [Managing HPAs with kubectl]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl).

 You might have additional HPA installation steps if you are using an older version of Rancher:
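For the kubectl-managed path, an HPA is an ordinary Kubernetes object. A minimal sketch (the target Deployment name and the replica bounds are hypothetical; `autoscaling/v1` supports only a CPU target, which matches the CPU-based scaling described above):

```yaml
# Hypothetical example: scales a Deployment named "web" between 1 and 5
# replicas, targeting 80% average CPU utilization.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80
```

The imperative equivalent is `kubectl autoscale deployment web --min=1 --max=5 --cpu-percent=80`.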
@@ -28,7 +28,7 @@ You might have additional HPA installation steps if you are using an older versi

 ## Testing HPAs with a Service Deployment

-In Rancher v2.3.x+, you can see your HPA's current number of replicas by going to your project's **HPA** tab. For more information, refer to [Get HPA Metrics and Status]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui/).
+In Rancher v2.3.x+, you can see your HPA's current number of replicas by going to your project and clicking **Resources > HPA.** For more information, refer to [Get HPA Metrics and Status]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui/).

 You can also use `kubectl` to get the status of HPAs that you test with your load testing tool. For more information, refer to [Testing HPAs with kubectl]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/testing-hpa/).
@@ -3,7 +3,7 @@ title: Managing HPAs with the Rancher UI

 weight: 3028
 ---

-_Available as of v2.3.0-alpha5_
+_Available as of v2.3.0_

 The Rancher UI supports creating, managing, and deleting HPAs. You can configure CPU or memory usage as the metric that the HPA uses to scale.
@@ -13,7 +13,7 @@ If you want to create HPAs that scale based on other metrics than CPU and memory

 1. From the **Global** view, open the project that you want to deploy a HPA to.

-1. Select **Workloads** in the navigation bar and then select the **HPA** tab.
+1. Click **Resources > HPA.**

 1. Click **Add HPA.**
@@ -29,13 +29,13 @@ If you want to create HPAs that scale based on other metrics than CPU and memory

 1. Click **Create** to create the HPA.

-> **Result:** The HPA is deployed to the chosen namespace. You can view the HPA's status from the project's Workloads > HPA view.
+> **Result:** The HPA is deployed to the chosen namespace. You can view the HPA's status from the project's Resources > HPA view.

 ## Get HPA Metrics and Status

 1. From the **Global** view, open the project with the HPAs you want to look at.

-1. Select **Workloads** in the navigation bar and then select the **HPA** tab. The **HPA** tab shows the number of current replicas.
+1. Click **Resources > HPA.** The **HPA** tab shows the number of current replicas.

 1. For more detailed metrics and status of a specific HPA, click the name of the HPA. This leads to the HPA detail page.
@@ -44,7 +44,7 @@ If you want to create HPAs that scale based on other metrics than CPU and memory

 1. From the **Global** view, open the project that you want to delete an HPA from.

-1. Select **Workloads** in the navigation bar and then select the **HPA** tab.
+1. Click **Resources > HPA.**

 1. Find the HPA which you would like to delete.
@@ -9,7 +9,7 @@ Ingress can be added for workloads to provide load balancing, SSL termination an

 1. From the **Global** view, open the project that you want to add ingress to.

-1. Select the **Load Balancing** tab. Then click **Add Ingress**.
+1. Click **Resources** in the main navigation bar. Click the **Load Balancing** tab. (In versions prior to v2.3.0, just click the **Load Balancing** tab.) Then click **Add Ingress**.

 1. Enter a **Name** for the ingress.
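The **Add Ingress** form generates a Kubernetes Ingress object behind the scenes. A minimal hostname-based sketch (the host, backend service name, and port are hypothetical, and the `extensions/v1beta1` API version reflects the Kubernetes releases current around Rancher v2.3):

```yaml
# Hypothetical example: routes traffic for myapp.example.com to a
# Service named "web" on port 80.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  namespace: default
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: web
              servicePort: 80
```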
@@ -41,7 +41,7 @@ After the version control provider is authorized, you are automatically re-direc

 1. From the **Global** view, navigate to the project that you want to configure pipelines.

-1. Select **Workloads** in the navigation bar and then select the **Pipelines** tab.
+1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**

 1. Click on **Configure Repositories**.
@@ -59,7 +59,7 @@ Now that repositories are added to your project, you can start configuring the p

 1. From the **Global** view, navigate to the project that you want to configure pipelines.

-1. Select **Workloads** in the navigation bar and then select the **Pipelines** tab.
+1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**

 1. Find the repository that you want to set up a pipeline for. Pipelines can be configured either through the UI or using a YAML file in the repository, i.e. `.rancher-pipeline.yml` or `.rancher-pipeline.yaml`. Throughout the next couple of steps, we'll provide the options of how to do pipeline configuration through the UI or the YAML file.
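A `.rancher-pipeline.yml` file describes the pipeline's stages and steps in the repository itself. As a rough sketch of the shape only (the stage name, image, and script below are hypothetical; consult the pipeline configuration reference for the authoritative schema):

```yaml
# Hypothetical .rancher-pipeline.yml sketch: a single build stage with
# one script step executed inside a golang container.
stages:
  - name: Build
    steps:
      - runScriptConfig:
          image: golang:1.12
          shellScript: go build ./...
```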
@@ -231,7 +231,7 @@ timeout: 30

 ## Running your Pipelines

-Run your pipeline for the first time. From the **Pipeline** tab, find your pipeline and select the vertical **Ellipsis (...) > Run**.
+Run your pipeline for the first time. From the project view in Rancher, go to **Resources > Pipelines.** (In versions prior to v2.3.0, go to the **Pipelines** tab.) Find your pipeline and select the vertical **Ellipsis (...) > Run**.

 During this initial run, your pipeline is tested, and the following [pipeline components]({{< baseurl >}}/rancher/v2.x/en/project-admin/tools/pipelines/#how-pipelines-work) are deployed to your project as workloads in a new namespace dedicated to the pipeline:
@@ -257,7 +257,7 @@ Available Events:

 1. From the **Global** view, navigate to the project that you want to modify the event trigger for the pipeline.

-1. Select **Workloads** in the navigation bar and then select the **Pipelines** tab.
+1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**

 1. Find the repository that you want to modify the event triggers for. Select the vertical **Ellipsis (...) > Setting**.
@@ -553,7 +553,7 @@ Wildcard character (`*`) expansion is supported in `branch` conditions.

 1. From the **Global** view, navigate to the project that you want to configure a pipeline trigger rule.

-1. Select **Workloads** in the navigation bar and then select the **Pipelines** tab.
+1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**

 1. From the repository for which you want to manage trigger rules, select the vertical **Ellipsis (...) > Edit Config**.
@@ -571,7 +571,7 @@ Wildcard character (`*`) expansion is supported in `branch` conditions.

 {{% tab "Stage Trigger" %}}
 1. From the **Global** view, navigate to the project that you want to configure a stage trigger rule.

-1. Select **Workloads** in the navigation bar and then select the **Pipelines** tab.
+1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**

 1. From the repository for which you want to manage trigger rules, select the vertical **Ellipsis (...) > Edit Config**.
@@ -596,7 +596,7 @@ Wildcard character (`*`) expansion is supported in `branch` conditions.

 {{% tab "Step Trigger" %}}
 1. From the **Global** view, navigate to the project that you want to configure a step trigger rule.

-1. Select **Workloads** in the navigation bar and then select the **Pipelines** tab.
+1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**

 1. From the repository for which you want to manage trigger rules, select the vertical **Ellipsis (...) > Edit Config**.
@@ -654,7 +654,7 @@ When configuring a pipeline, certain [step types](#step-types) allow you to use

 {{% tab "By UI" %}}
 1. From the **Global** view, navigate to the project that you want to configure pipelines.

-1. Select **Workloads** in the navigation bar and then select the **Pipelines** tab.
+1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**

 1. From the pipeline for which you want to edit build triggers, select **Ellipsis (...) > Edit Config**.
@@ -703,7 +703,7 @@ Create a secret in the same project as your pipeline, or explicitly in the names

 {{% tab "By UI" %}}
 1. From the **Global** view, navigate to the project that you want to configure pipelines.

-1. Select **Workloads** in the navigation bar and then select the **Pipelines** tab.
+1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**

 1. From the pipeline for which you want to edit build triggers, select **Ellipsis (...) > Edit Config**.
@@ -19,7 +19,7 @@ By default, the example pipeline repositories are disabled. Enable one (or more)

 1. From the **Global** view, navigate to the project that you want to test out pipelines.

-1. Select **Workloads** in the navigation bar and then select the **Pipelines** tab.
+1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**

 1. Click **Configure Repositories**.
@@ -45,7 +45,7 @@ After enabling an example repository, review the pipeline to see how it is set u

 1. From the **Global** view, navigate to the project that you want to test out pipelines.

-1. Select **Workloads** in the navigation bar and then select the **Pipelines** tab.
+1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**

 1. Find the example repository and select the vertical **Ellipsis (...)**. There are two ways to view the pipeline:
 * **Rancher UI**: Click on **Edit Config** to view the stages and steps of the pipeline.
@@ -57,7 +57,7 @@ After enabling an example repository, run the pipeline to see how it works.

 1. From the **Global** view, navigate to the project that you want to test out pipelines.

-1. Select **Workloads** in the navigation bar and then select the **Pipelines** tab.
+1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**

 1. Find the example repository and select the vertical **Ellipsis (...) > Run**.
@@ -46,7 +46,7 @@ You can deploy a workload with an image from a private registry through the Ranc

 To deploy a workload with an image from your private registry:

 1. Go to the project view.
-1. Go to the **Workloads** tab.
+1. Click **Resources > Workloads.** In versions prior to v2.3.0, go to the **Workloads** tab.
 1. Click **Deploy.**
 1. Enter a unique name for the workload and choose a namespace.
 1. In the **Docker Image** field, enter the URL of the path to the Docker image in your private registry. For example, if your private registry is on Quay.io, you could use `quay.io/<Quay profile name>/<Image name>`.
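Under the hood, pulling from a private registry relies on a Kubernetes image pull secret referenced by the workload's pod spec. A hedged sketch (the secret name `quay-registry`, the image path, and all other names are hypothetical placeholders):

```yaml
# Hypothetical example: a Deployment that pulls a private Quay.io image
# using a previously created docker-registry secret named "quay-registry".
apiVersion: apps/v1
kind: Deployment
metadata:
  name: private-app
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: private-app
  template:
    metadata:
      labels:
        app: private-app
    spec:
      imagePullSecrets:
        - name: quay-registry
      containers:
        - name: app
          image: quay.io/myprofile/myimage:latest
```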
@@ -12,7 +12,7 @@ However, you also have the option of creating additional Service Discovery recor

 1. From the **Global** view, open the project that you want to add a DNS record to.

-1. Select the **Service Discovery** tab. Then click **Add Record**.
+1. Click **Resources** in the main navigation bar. Click the **Service Discovery** tab. (In versions prior to v2.3.0, just click the **Service Discovery** tab.) Then click **Add Record**.

 1. Enter a **Name** for the DNS record. This name is used for DNS resolution.
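These DNS records are Kubernetes Service objects. A record that aliases an external hostname, for instance, maps to an `ExternalName` service; a sketch (the record name and target hostname are hypothetical):

```yaml
# Hypothetical example: resolving "external-db" inside the namespace
# returns a CNAME to db.example.com.
apiVersion: v1
kind: Service
metadata:
  name: external-db
  namespace: default
spec:
  type: ExternalName
  externalName: db.example.com
```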
@@ -8,7 +8,7 @@ A _sidecar_ is a container that extends or enhances the main container in a pod.

 1. From the **Global** view, open the project running the workload you want to add a sidecar to.

-1. Select the **Workloads** tab.
+1. Click **Resources > Workloads.** In versions prior to v2.3.0, select the **Workloads** tab.

 1. Find the workload that you want to extend. Select **Ellipsis icon (...) > Add a Sidecar**.
@@ -9,7 +9,7 @@ Deploy a workload to run an application in one or more containers.

 1. From the **Global** view, open the project that you want to deploy a workload to.

-1. From the **Workloads** view, click **Deploy**.
+1. Click **Resources > Workloads.** (In versions prior to v2.3.0, click the **Workloads** tab.) From the **Workloads** view, click **Deploy**.

 1. Enter a **Name** for the workload.
@@ -3,7 +3,7 @@ title: Istio

 weight: 3528
 ---

-_Available as of v2.3.0-alpha5_
+_Available as of v2.3.0_

 Using Rancher, you can connect, secure, control, and observe services through integration with [Istio](https://istio.io/), a leading open-source service mesh solution. Istio provides behavioral insights and operational control over the service mesh as a whole, offering a complete solution to satisfy the diverse requirements of microservice applications.
@@ -254,7 +254,7 @@ The internal [Docker registry](#how-pipelines-work) and the [Minio](#how-pipelin

 ### A. Configuring Persistent Data for Docker Registry

-1. From the project that you're configuring a pipeline for, select the **Workloads** tab.
+1. From the project that you're configuring a pipeline for, click **Resources > Workloads.** In versions prior to v2.3.0, select the **Workloads** tab.

 1. Find the `docker-registry` workload and select **Ellipsis (...) > Edit**.
@@ -301,7 +301,7 @@ The internal [Docker registry](#how-pipelines-work) and the [Minio](#how-pipelin

 ### B. Configuring Persistent Data for Minio

-1. From the **Workloads** tab, find the `minio` workload and select **Ellipsis (...) > Edit**.
+1. From the project view, click **Resources > Workloads.** (In versions prior to v2.3.0, click the **Workloads** tab.) Find the `minio` workload and select **Ellipsis (...) > Edit**.

 1. Scroll to the **Volumes** section and expand it. Make one of the following selections from the **Add Volume** menu, which is near the bottom of the section:
@@ -35,9 +35,7 @@ You can set up your pipeline to run a series of stages and steps to test your co

 1. Go to the project you want this pipeline to run in.

-2. Select workloads from the top level Nav bar

-3. Select pipelines from the secondary Nav bar
+2. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**

 4. Click the **Add Pipeline** button.
@@ -19,7 +19,7 @@ For this workload, you'll be deploying the application Rancher Hello-World.

 3. Open the **Project: Default** project.

-4. From the main menu select **Workloads**, then click on the **Workloads** tab.
+4. Click **Resources > Workloads.** In versions prior to v2.3.0, click **Workloads > Workloads.**

 5. Click **Deploy**.
@@ -49,7 +49,7 @@ Now that the application is up and running it needs to be exposed so that other

 3. Open the **Default** project.

-4. From the main menu select **Workloads**, then click on the **Load Balancing** tab.
+4. Click **Resources > Workloads > Load Balancing.** In versions prior to v2.3.0, click the **Workloads** tab, then the **Load Balancing** tab.

 5. Click **Add Ingress**.
@@ -19,7 +19,7 @@ For this workload, you'll be deploying the application Rancher Hello-World.

 3. Open the **Project: Default** project.

-4. From the main menu select **Workloads**, then click on the **Workloads** tab.
+4. Click **Resources > Workloads.** In versions prior to v2.3.0, click **Workloads > Workloads.**

 5. Click **Deploy**.
@@ -62,7 +62,7 @@ In the image below, the `web-deployment.yml` and `web-service.yml` files [create

 Just as you can create an alias for Rancher v1.6 services, you can do the same for Rancher v2.x workloads. Similarly, you can also create DNS records pointing to services running externally, using either their hostname or IP address. These DNS records are Kubernetes service objects.

-Using the v2.x UI, use the context menu to navigate to the `Project` view and choose the **Service Discovery** tab. All existing DNS records created for your workloads are listed under each namespace.
+Using the v2.x UI, use the context menu to navigate to the `Project` view. Then click **Resources > Workloads > Service Discovery.** (In versions prior to v2.3.0, click the **Workloads > Service Discovery** tab.) All existing DNS records created for your workloads are listed under each namespace.

 Click **Add Record** to create new DNS records. Then view the various options supported to link to external services or to create aliases for another workload, DNS record, or set of pods.
@@ -62,12 +62,12 @@ Although Rancher v2.x supports HTTP and HTTPS hostname and path-based load balan

 ## Deploying Ingress

-You can launch a new load balancer to replace your load balancer from v1.6. Using the Rancher v2.x UI, browse to the applicable project and choose **Workloads** from the main menu. Then choose the **Load Balancing** tab and begin by clicking **Deploy**. During deployment, you can choose a target project or namespace.
+You can launch a new load balancer to replace your load balancer from v1.6. Using the Rancher v2.x UI, browse to the applicable project and choose **Resources > Workloads > Load Balancing.** (In versions prior to v2.3.0, click **Workloads > Load Balancing.**) Then click **Deploy**. During deployment, you can choose a target project or namespace.

 >**Prerequisite:** Before deploying Ingress, you must have a workload deployed that's running a scale of two or more pods.
 >

-For balancing between these two pods, you must create a Kubernetes Ingress rule. To create this rule, navigate to your cluster and project, and then select the **Load Balancing** tab. Then click **Add Ingress**. The GIF below depicts how to add Ingress to one of your projects.
+For balancing between these two pods, you must create a Kubernetes Ingress rule. To create this rule, navigate to your cluster and project, and click **Resources > Workloads > Load Balancing.** (In versions prior to v2.3.0, click **Workloads > Load Balancing.**) Then click **Add Ingress**. The GIF below depicts how to add Ingress to one of your projects.

 Similar to the service/port rules in Rancher v1.6, here you can specify rules targeting your workload's container port. The sections below demonstrate how to create Ingress rules.
@@ -259,7 +259,7 @@ Use the following Rancher CLI commands to deploy your application using Rancher

 {{% /tab %}}
 {{% /tabs %}}

-Following importation, you can view your v1.6 services in the v2.x UI as Kubernetes manifests by using the context menu to select `<CLUSTER> > <PROJECT>` that contains your services. The imported manifests will display on the **Workloads** and **Service Discovery** tabs.
+Following importation, you can view your v1.6 services in the v2.x UI as Kubernetes manifests by using the context menu to select `<CLUSTER> > <PROJECT>` that contains your services. The imported manifests display under **Resources > Workloads** and **Resources > Workloads > Service Discovery.** (In Rancher v2.x prior to v2.3.0, these are on the **Workloads** and **Service Discovery** tabs in the top navigation bar.)

 ## What Now?
@@ -75,7 +75,7 @@ Rancher schedules pods to the node you select if 1) there are compute resource a

 If you expose the workload using a NodePort that conflicts with another workload, the deployment gets created successfully, but no NodePort service is created. Therefore, the workload isn't exposed outside of the cluster.

-After the workload is created, you can confirm that the pods are scheduled to your chosen node. From the **Workloads** tab, click the **Group by Node** icon to sort your workloads by node. Note that both Nginx pods are scheduled to the same node.
+After the workload is created, you can confirm that the pods are scheduled to your chosen node. From the project view, click **Resources > Workloads.** (In versions prior to v2.3.0, click the **Workloads** tab.) Click the **Group by Node** icon to sort your workloads by node. Note that both Nginx pods are scheduled to the same node.

 <!--