- 15 Apr, 2020 1 commit
Matt Farina authored
The sorting previously used the selfref, which ends with the resource name in every case except pods; for pods the selfref pointed to the pods collection rather than the specific pod, so multiple pods shared the same selfref used as the key for sorting. The objects being sorted are tables that each have one row, and in the new setup the key is the first cell's value from the first and only row, which is the name of the resource. Note, the Get function now requests a table, and the tests have been updated to return a Table type for the objects. Closes #7924 Signed-off-by:
Matt Farina <matt@mattfarina.com>
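A minimal sketch of the new keying, assuming the sorted objects are `metav1.Table` values with a single row each; the helper names are illustrative, not Helm's actual functions:

```go
package main

import (
	"fmt"
	"sort"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// sortTablesByName orders single-row tables by the value of the first cell
// of their only row, which server-side printing populates with the resource
// name (hypothetical helper, for illustration only).
func sortTablesByName(tables []*metav1.Table) {
	sort.Slice(tables, func(i, j int) bool {
		return firstCell(tables[i]) < firstCell(tables[j])
	})
}

func firstCell(t *metav1.Table) string {
	if len(t.Rows) == 0 || len(t.Rows[0].Cells) == 0 {
		return ""
	}
	name, _ := t.Rows[0].Cells[0].(string)
	return name
}

func main() {
	tables := []*metav1.Table{
		{Rows: []metav1.TableRow{{Cells: []interface{}{"pod-b"}}}},
		{Rows: []metav1.TableRow{{Cells: []interface{}{"pod-a"}}}},
	}
	sortTablesByName(tables)
	fmt.Println(firstCell(tables[0]), firstCell(tables[1])) // pod-a pod-b
}
```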
-
- 30 Mar, 2020 1 commit
Matt Farina authored
Closes #7812 Signed-off-by:
Matt Farina <matt@mattfarina.com>
-
- 25 Mar, 2020 1 commit
Matt Farina authored
Latest() should not have been added to Validate. Closes #7797 Signed-off-by:
Matt Farina <matt@mattfarina.com>
-
- 20 Mar, 2020 1 commit
Matt Farina authored
Changes to the Kubernetes API server and kubectl libraries caused the status to no longer display when helm status was run for a release. This change restores the status display. Generation of the tables for display has moved server side: the data is requested as a table, and a kubectl printer for tables displays it. kubectl uses this same setup, and the structure here closely resembles kubectl. kubectl can still display objects from before server-side printing as tables, but it only prints limited information. Note, an extra request is made because table responses cannot easily be transformed into the Go objects for Kubernetes types to work with: there is one request to get the resources for display in a table and a second request to get the resources used to look up the related pods. The related pods are now requested as a table as well, for display purposes. This is likely part of the larger trend of moving features like this server side so that more libraries in more languages can use them. Closes #6896 Signed-off-by:
Matt Farina <matt@mattfarina.com>
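As a hedged illustration of requesting resources as a table (not Helm's code), a recent client-go can ask the API server for server-side printing by setting the table Accept header; the kubeconfig path and the `DoRaw` signature are assumptions that vary with client-go version:

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the default location.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// The Accept header is what switches the server into table mode.
	raw, err := client.CoreV1().RESTClient().Get().
		Namespace("default").
		Resource("pods").
		SetHeader("Accept", "application/json;as=Table;v=v1;g=meta.k8s.io, application/json").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}

	// Decode the raw table response and print its column names.
	table := &metav1.Table{}
	if err := json.Unmarshal(raw, table); err != nil {
		panic(err)
	}
	for _, col := range table.ColumnDefinitions {
		fmt.Printf("%s\t", col.Name)
	}
	fmt.Println()
}
```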
-
- 07 Nov, 2019 1 commit
Taylor Thomas authored
This happened to be a bug we identified in Helm 3, and we had not checked whether it existed in Helm 2. The improved logic for job waiting used an automatic retry; however, when we created the watcher we were listing everything of that same API version and kind, so if you had more than one hook and the first was successful, it would think everything was successful. I have validated that this now fails as intended if a job is failing. Closes #6767 Signed-off-by:
Taylor Thomas <taylor.thomas@microsoft.com>
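A minimal sketch of scoping the watch to a single job by name with a field selector, rather than listing every object of the same kind (the helper is illustrative, not Helm's exact fix):

```go
package kubewatch

import (
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// jobListWatch watches only the named job, so one hook's success cannot be
// mistaken for another hook's status.
func jobListWatch(client kubernetes.Interface, namespace, name string) *cache.ListWatch {
	selector := fields.OneTermEqualSelector("metadata.name", name)
	return cache.NewListWatchFromClient(client.BatchV1().RESTClient(), "jobs", namespace, selector)
}
```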
-
- 06 Nov, 2019 1 commit
Taylor Thomas authored
In several of the job checks and other conversions we were using legacyscheme. I don't know why it was working before, but I am guessing something changed between k8s 1.15 and 1.16. To fix it, I changed the references to use the default scheme in client-go. Signed-off-by:
Taylor Thomas <taylor.thomas@microsoft.com>
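For illustration only (not the actual diff), converting an object with client-go's default scheme instead of legacyscheme looks roughly like this:

```go
package convertutil

import (
	batchv1 "k8s.io/api/batch/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/kubernetes/scheme"
)

// asJob converts an arbitrary runtime.Object into a typed batch/v1 Job using
// client-go's default scheme (hypothetical helper name).
func asJob(obj runtime.Object) (*batchv1.Job, error) {
	job := &batchv1.Job{}
	if err := scheme.Scheme.Convert(obj, job, nil); err != nil {
		return nil, err
	}
	return job, nil
}
```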
-
- 29 Oct, 2019 1 commit
Adam Reese authored
Signed-off-by:
Adam Reese <adam@reese.io>
-
- 11 Oct, 2019 1 commit
Matthew Fisher authored
.Get() calls perform() on a list of infos, populating two shared maps. perform() now calls the ResourceActorFunc concurrently based on GVK, causing a data race condition in .Get(). This fixes the race by locking so these functions run serially in Helm 2. This has since been optimized in Helm 3, so it is no longer an issue there. Signed-off-by:
Matthew Fisher <matt.fisher@microsoft.com>
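A simplified sketch of the locking pattern described, with hypothetical signatures (not Helm's actual code): the actor function is wrapped so concurrent per-GVK workers run it serially and cannot race on the shared maps.

```go
package getutil

import "sync"

// resourceActorFunc is a stand-in signature for illustration.
type resourceActorFunc func(name string) error

// serialize returns a wrapper that runs fn under a mutex, so callers invoking
// it from multiple goroutines execute it one at a time.
func serialize(fn resourceActorFunc) resourceActorFunc {
	var mu sync.Mutex
	return func(name string) error {
		mu.Lock()
		defer mu.Unlock()
		return fn(name)
	}
}
```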
-
- 10 Oct, 2019 1 commit
Matthew Fisher authored
Signed-off-by:
Matthew Fisher <matt.fisher@microsoft.com>
-
- 08 Oct, 2019 3 commits
Jeff Knurek authored
Well, more specifically, it returns an io.ReadCloser (giving the consumer more capabilities). Signed-off-by:
Jeff Knurek <j.knurek@travelaudience.com>
-
Jeff Knurek authored
Signed-off-by:
Jeff Knurek <j.knurek@travelaudience.com>
-
Jeff Knurek authored
Signed-off-by:
Jeff Knurek <j.knurek@travelaudience.com>
-
- 03 Oct, 2019 1 commit
Charlie Getzen authored
Signed-off-by:
Charlie Getzen <charlie.getzen@procore.com>
-
- 01 Oct, 2019 1 commit
Charlie Getzen authored
Signed-off-by:
Charlie Getzen <charlie.getzen@procore.com>
-
- 30 Aug, 2019 1 commit
Richard Connon authored
When waiting for resources, use `ListWatchUntil` instead of `UntilWithoutRetry` so that the operation can still succeed if the connection between Tiller and the API drops while waiting. Signed-off-by:
Richard Connon <richard.connon@oracle.com>
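A hedged sketch of this kind of wait (assuming a recent client-go, and not Helm's actual wait code): `watchtools.ListWatchUntil` re-lists and re-watches if the stream drops, whereas `UntilWithoutRetry` gives up when the connection is lost.

```go
package waitutil

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	watchtools "k8s.io/client-go/tools/watch"
)

// waitForPodRunning blocks until the named pod reports the Running phase,
// re-establishing the watch if it drops (illustrative helper).
func waitForPodRunning(client kubernetes.Interface, namespace, name string, timeout time.Duration) error {
	lw := cache.NewListWatchFromClient(
		client.CoreV1().RESTClient(),
		"pods",
		namespace,
		fields.OneTermEqualSelector("metadata.name", name),
	)
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	_, err := watchtools.ListWatchUntil(ctx, lw, func(e watch.Event) (bool, error) {
		pod, ok := e.Object.(*corev1.Pod)
		if !ok {
			return false, nil
		}
		return pod.Status.Phase == corev1.PodRunning, nil
	})
	return err
}
```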
-
- 23 Aug, 2019 1 commit
Sakura authored
Signed-off-by:
Sakura <longfei.shang@daocloud.io>
-
- 26 Jul, 2019 1 commit
Yusuke Kuoka authored
Probably since K8s 1.13.x, `converter.ConvertToVersion(info.Object, groupVersioner)`, which is the body of `asVersioned`, no longer returns an error or an "unstructured" object but an `apiextensions/v1beta1.CustomResourceDefinition`. The result was that `helm upgrade` with any change to a CRD consistently failed. This fixes that by adding an additional case for the conversion result being a `v1beta1.CustomResourceDefinition`. This is a backward-compatible change, as it doesn't remove the existing switch cases for older K8s versions. Fixes #5853 Signed-off-by:
Yusuke Kuoka <ykuoka@gmail.com>
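Illustrative only (names approximate, not a verbatim excerpt from the patch): the extra switch case handles the typed `v1beta1.CustomResourceDefinition` that newer conversions now return.

```go
package crdutil

import (
	"k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	"k8s.io/apimachinery/pkg/runtime"
)

// isCRD reports whether the conversion result is a CustomResourceDefinition
// (hypothetical helper demonstrating the added case).
func isCRD(obj runtime.Object) bool {
	switch obj.(type) {
	case *v1beta1.CustomResourceDefinition:
		// Newer conversion results arrive as the typed v1beta1 CRD.
		return true
	default:
		return false
	}
}
```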
-
- 24 Jul, 2019 1 commit
Tariq Ibrahim authored
Signed-off-by:
Tariq Ibrahim <tariq181290@gmail.com>
-
- 20 Jun, 2019 1 commit
Morten Torkildsen authored
Signed-off-by:
Morten Torkildsen <mortent@google.com>
-
- 19 May, 2019 1 commit
Morten Torkildsen authored
Signed-off-by:
Morten Torkildsen <mortent@google.com>
-
- 25 Apr, 2019 1 commit
Morten Torkildsen authored
Signed-off-by:
Morten Torkildsen <mortent@google.com>
-
- 22 Apr, 2019 1 commit
Charlie Getzen authored
Signed-off-by:
Charlie Getzen <charlie.getzen@procore.com>
-
- 17 Apr, 2019 1 commit
Charlie Getzen authored
Signed-off-by:
Charlie Getzen <charlie.getzen@procore.com>
-
- 10 Apr, 2019 1 commit
Morten Torkildsen authored
Manifest validation is done by the builder, but it requires that the schema is set before the Stream function is called. Otherwise the StreamVisitor is created without a schema and no validation is done. Signed-off-by:
Morten Torkildsen <mortent@google.com>
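A sketch of the ordering constraint under the cli-runtime builder API (not Helm's exact call chain): `Schema` must be applied before `Stream` so the StreamVisitor is created with the validator attached.

```go
package buildutil

import (
	"io"

	"k8s.io/cli-runtime/pkg/resource"
)

// buildWithValidation wires a validator into the builder before streaming the
// manifests, so each decoded object is validated (illustrative helper).
func buildWithValidation(b *resource.Builder, schema resource.ContentValidator, manifests io.Reader) *resource.Result {
	return b.
		Unstructured().
		Schema(schema).                // set the validator first...
		ContinueOnError().
		Stream(manifests, "manifest"). // ...then stream the manifests
		Flatten().
		Do()
}
```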
-
- 04 Apr, 2019 1 commit
Timofey Kirillov authored
Signed-off-by:
Timofey Kirillov <timofey.kirillov@flant.com>
-
- 30 Mar, 2019 1 commit
Morten Torkildsen authored
Makes sure CRDs installed through the crd_install hook reach the `established` state before the hook is considered complete. Signed-off-by:
Morten Torkildsen <mortent@google.com>
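A hedged sketch of the readiness check described (not Helm's exact code): a CRD counts as ready once its Established condition is True.

```go
package crdwait

import (
	apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
)

// crdEstablished reports whether the CRD has reached the Established state
// (illustrative helper).
func crdEstablished(crd *apiextv1beta1.CustomResourceDefinition) bool {
	for _, cond := range crd.Status.Conditions {
		if cond.Type == apiextv1beta1.Established && cond.Status == apiextv1beta1.ConditionTrue {
			return true
		}
	}
	return false
}
```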
-
- 26 Mar, 2019 1 commit
Matthew Fisher authored
Signed-off-by:
Matthew Fisher <matt.fisher@microsoft.com>
-
- 25 Mar, 2019 1 commit
Pavel Eremeev authored
Signed-off-by:
Pavel Eremeev <selslack@users.noreply.github.com>
-
- 22 Mar, 2019 2 commits
Fernando Barbosa authored
Signed-off-by:
Timofey Kirillov <timofey.kirillov@flant.com>
-
Timofey Kirillov authored
This is the fix for only one particular, but important, case: a new resource has been added to the chart and there is an error in the chart which leads to release failure. In this case, after the first failed release upgrade the new resource will have been created in the cluster. On the next release upgrade there will be the error `no RESOURCE with the name NAME found` for this newly created resource from the previous release upgrade. The root of this problem is a side effect of the first release process. The release invariant says: if a resource exists in the kubernetes cluster, then it should exist in the release storage. But this invariant has been broken by helm itself, because helm created new resources as a side effect and did not adopt them into release storage. To maintain the release invariant for this case, during a release upgrade operation all newly *successfully* created resources will be deleted in the case of an error in the subsequent resources update. Th...
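A simplified sketch of the cleanup behaviour described above (not Helm's actual implementation, and the function signatures are assumptions): remember resources created during the upgrade and delete them if a later step fails, so they are not orphaned outside release storage.

```go
package upgradeutil

// createFn and deleteFn are hypothetical stand-ins for cluster operations.
type createFn func(name string) error
type deleteFn func(name string) error

// applyWithCleanup creates resources in order; on the first failure it deletes
// everything it successfully created, keeping cluster state consistent with
// what release storage knows about.
func applyWithCleanup(toCreate []string, create createFn, remove deleteFn) error {
	var created []string
	for _, name := range toCreate {
		if err := create(name); err != nil {
			for _, c := range created {
				_ = remove(c)
			}
			return err
		}
		created = append(created, name)
	}
	return nil
}
```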
-
- 16 Mar, 2019 1 commit
Pavel Eremeev authored
Signed-off-by:
Pavel Eremeev <selslack@users.noreply.github.com>
-
- 04 Mar, 2019 1 commit
Jacob LeGrone authored
Signed-off-by:
Jacob LeGrone <git@jacob.work>
-
- 07 Feb, 2019 1 commit
Ian Howell authored
Signed-off-by:
Ian Howell <ian.howell0@gmail.com>
-
- 29 Jan, 2019 1 commit
James Ravn authored
Don't delete a resource on upgrade if it is annotated with `helm.sh/resource-policy: keep`. Ignoring the annotation can cause data loss for users (e.g. for a PVC). Closes #3673 Signed-off-by:
James Ravn <james@r-vn.org>
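A small sketch of the guard described above (the helper is illustrative, not Helm's exact code): skip deletion when the target carries the keep resource policy annotation.

```go
package deleteutil

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

const resourcePolicyAnno = "helm.sh/resource-policy"

// shouldKeep reports whether the object opted out of deletion via the
// resource-policy annotation.
func shouldKeep(obj metav1.Object) bool {
	return obj.GetAnnotations()[resourcePolicyAnno] == "keep"
}
```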
-
- 08 Jan, 2019 2 commits
Elad Iwanir authored
Signed-off-by:
Elad Iwanir <eladiw@users.noreply.github.com>
-
Elad Iwanir authored
Signed-off-by:
Elad Iwanir <eladiw@users.noreply.github.com>
-
- 05 Dec, 2018 1 commit
Taylor Thomas authored
Signed-off-by:
Taylor Thomas <thomastaylor312@gmail.com>
-
- 29 Nov, 2018 1 commit
Morten Torkildsen authored
Currently the code that handles hooks uses a builder that creates the versioned types rather than unstructured objects. This results in an error whenever a custom resource is used in a hook, as the type will not be registered in the scheme used in Helm. This changes the code to use a builder that creates unstructured resources and only converts to the versioned type when needed. Signed-off-by:
Morten Torkildsen <mortent@google.com>
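An illustrative sketch (not Helm's code) of building hook manifests as unstructured objects, so custom resources do not need to be registered in the scheme:

```go
package hookutil

import (
	"io"

	"k8s.io/cli-runtime/pkg/resource"
)

// buildUnstructured decodes manifests into unstructured objects; no typed
// scheme registration is required (illustrative helper).
func buildUnstructured(b *resource.Builder, manifest io.Reader) ([]*resource.Info, error) {
	return b.
		Unstructured(). // decode into unstructured.Unstructured
		ContinueOnError().
		Stream(manifest, "hook").
		Flatten().
		Do().
		Infos()
}
```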
-
- 28 Nov, 2018 1 commit
Morten Torkildsen authored
Due to a regression from a previous change, details about pod resources do not show up in the status output. This makes sure that the pod type from core is passed in to the printer so the details are shown in the output. Signed-off-by:
Morten Torkildsen <mortent@google.com>
-
- 15 Nov, 2018 1 commit
Morten Torkildsen authored
The output from helm status is not correct for custom resources. The HumanReadablePrinter from Kubernetes only outputs the column names when the type differs from the previous one. This makes the output inconsistent and also creates problems for inserting the correct line breaks. This PR sets up a new printer for each type, thereby making sure that all types are printed with the correct use of line breaks and with column names. Signed-off-by:
Morten Torkildsen <mortent@google.com>
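A rough sketch of the per-type printing approach described (the printer interface and factory here are hypothetical stand-ins, not the Kubernetes HumanReadablePrinter API): group objects by kind and create a fresh printer for each group so every section gets its own header row.

```go
package statusutil

import (
	"io"

	"k8s.io/apimachinery/pkg/runtime"
)

// printer is a stand-in for whatever table printer is in use.
type printer interface {
	PrintObj(obj runtime.Object, w io.Writer) error
}

// printByKind prints each kind with its own printer, so column headers and
// line breaks are emitted per section rather than only on type changes.
func printByKind(objs map[string][]runtime.Object, newPrinter func() printer, w io.Writer) error {
	for kind, group := range objs {
		p := newPrinter() // fresh printer per kind => headers reprinted
		if _, err := io.WriteString(w, "==> "+kind+"\n"); err != nil {
			return err
		}
		for _, obj := range group {
			if err := p.PrintObj(obj, w); err != nil {
				return err
			}
		}
	}
	return nil
}
```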
-