
Re: Helm 3 impersonation

Matt Fisher <matt.fisher@...>
 

This feature is being tracked in https://github.com/helm/helm/issues/5303. It is not currently available. However, the implementation details seem easy enough for someone new to the Helm code base to tackle as their first contribution.

Would you be interested in implementing that functionality?
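For reference, the kubectl impersonation the question refers to looks like the following (a sketch; the user and service account names are placeholders, and the RBAC requirement is standard Kubernetes behaviour, not Helm-specific):

```
# kubectl can act as another user, group, or service account, provided
# your own credentials are allowed the "impersonate" verb by RBAC:
kubectl get pods --as=jane.doe
kubectl get deployments --as=system:serviceaccount:ci:deployer

# helm/helm#5303 asks for the equivalent in Helm itself; until then, a
# workaround is a kubeconfig context whose user has the desired rights.
```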



Matthew Fisher

Caffeinated Software Engineer

Microsoft Canada


From: cncf-helm@... <cncf-helm@...> on behalf of Ashik Mohammed via Lists.Cncf.Io <ashikmohammed=gmail.com@...>
Sent: Thursday, February 6, 2020 6:20 PM
To: cncf-helm@... <cncf-helm@...>
Cc: cncf-helm@... <cncf-helm@...>
Subject: [cncf-helm] Helm 3 impersonation
 
Does Helm 3 support impersonation, similar to `--as` in kubectl?


Helm 3 impersonation

Ashik Mohammed
 

Does Helm 3 support impersonation, similar to `--as` in kubectl?


Re: Helm 3 "state"? Is there such a thing

Abhirama <abhirama@...>
 

Thank you very much, Fox and Kevin. That cleared up my confusion.


On Thu, Feb 6, 2020 at 1:35 AM Fox, Kevin M <Kevin.Fox@...> wrote:
As far as I can tell, the "state" is made up of:
* caches
* credentials
* repos

So long as your stateless job (k8s pod) adds whatever credentials and repos it needs as part of the job, you should be fine losing the state dirs when the pod is destroyed.

Thanks,
Kevin

________________________________________
From: cncf-helm@... <cncf-helm@...> on behalf of Abhirama <abhirama@...>
Sent: Tuesday, February 4, 2020 8:46 PM
To: cncf-helm@...
Subject: [cncf-helm] Helm 3 "state"? Is there such a thing

Hi,

I'm sorry if the question comes across as too lame.

Now that Helm 3 is client-only, I understand that it stores "state" information along with the client in the directories determined by the environment variables  XDG_CACHE_HOME, XDG_CONFIG_HOME and XDG_DATA_HOME. As a result, if Helm were to run on "stateless" machines (say if it were to run on a k8s pod every time it's run), does it still need access to the state it maintains in the aforementioned directories?

I've gone through https://helm.sh/docs/faq/#improved-upgrade-strategy-3-way-strategic-merge-patches and https://helm.sh/docs/faq/#xdg-base-directory-support and wasn't able to completely iron out my confusion. My guess is that, during the 3-way merge, the mentioned "old manifest" is directly retrieved from the k8s master itself and that Helm doesn't store any release related information in the aforementioned directories.

I'd greatly appreciate your help in clarifying this.

Thanks,
Abhirama.


Re: Helm 3 "state"? Is there such a thing

Abhirama <abhirama@...>
 

Sorry, I meant to say "Thanks, Kevin and Matthew"!


On Thu, Feb 6, 2020 at 2:45 AM Matt Fisher <Matt.Fisher@...> wrote:
> My guess is that, during the 3-way merge, the mentioned "old manifest" is directly retrieved from the k8s master itself and that Helm doesn't store any release related information in the aforementioned directories.

Correct. State about the current release is stored server-side (in Kubernetes) as a Secret. The three-way merge pulls down this metadata and compares it against the chart you are upgrading to (`helm upgrade release_name CHART`) and the live state.

The XDG directories contain local setup information, such as:
  • repositories you've added (`helm repo list`)
  • plugins you have installed
  • cached repository indices
  • cached charts pulled down from said repositories (either via `helm install` or `helm pull`)
It is entirely safe to run Helm from an ephemeral filesystem, such as a container. No state about what was installed in the cluster is stored locally.
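That server-side state can be inspected directly, which may help confirm the above (a sketch; the namespace and release names are placeholders):

```
# Helm 3 stores each release revision as a Secret of type
# helm.sh/release.v1 in the release's own namespace, labelled owner=helm:
kubectl get secrets --namespace my-namespace --selector owner=helm

# Individual revisions are named sh.helm.release.v1.<release>.v<revision>:
kubectl get secret sh.helm.release.v1.my-release.v1 --namespace my-namespace -o yaml
```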


Hope that answers your question.



Matthew Fisher

Caffeinated Software Engineer

Microsoft Canada


From: cncf-helm@... <cncf-helm@...> on behalf of Fox, Kevin M via Lists.Cncf.Io <Kevin.Fox=pnnl.gov@...>
Sent: Wednesday, February 5, 2020 12:05 PM
To: Abhirama <abhirama@...>; cncf-helm@... <cncf-helm@...>
Cc: cncf-helm@... <cncf-helm@...>
Subject: Re: [cncf-helm] Helm 3 "state"? Is there such a thing
 
As far as I can tell, the "state" is made up of:
* caches
* credentials
* repos

So long as your stateless job (k8s pod) adds whatever credentials and repos it needs as part of the job, you should be fine losing the state dirs when the pod is destroyed.

Thanks,
Kevin

________________________________________
From: cncf-helm@... <cncf-helm@...> on behalf of Abhirama <abhirama@...>
Sent: Tuesday, February 4, 2020 8:46 PM
To: cncf-helm@...
Subject: [cncf-helm] Helm 3 "state"? Is there such a thing

Hi,

I'm sorry if the question comes across as too lame.

Now that Helm 3 is client-only, I understand that it stores "state" information along with the client in the directories determined by the environment variables  XDG_CACHE_HOME, XDG_CONFIG_HOME and XDG_DATA_HOME. As a result, if Helm were to run on "stateless" machines (say if it were to run on a k8s pod every time it's run), does it still need access to the state it maintains in the aforementioned directories?

I've gone through https://helm.sh/docs/faq/#improved-upgrade-strategy-3-way-strategic-merge-patches and https://helm.sh/docs/faq/#xdg-base-directory-support and wasn't able to completely iron out my confusion. My guess is that, during the 3-way merge, the mentioned "old manifest" is directly retrieved from the k8s master itself and that Helm doesn't store any release related information in the aforementioned directories.

I'd greatly appreciate your help in clarifying this.

Thanks,
Abhirama.





Re: Helm 3 "state"? Is there such a thing

Matt Fisher <matt.fisher@...>
 

> My guess is that, during the 3-way merge, the mentioned "old manifest" is directly retrieved from the k8s master itself and that Helm doesn't store any release related information in the aforementioned directories.

Correct. State about the current release is stored server-side (in Kubernetes) as a Secret. The three-way merge pulls down this metadata and compares it against the chart you are upgrading to (`helm upgrade release_name CHART`) and the live state.

The XDG directories contain local setup information, such as:
  • repositories you've added (`helm repo list`)
  • plugins you have installed
  • cached repository indices
  • cached charts pulled down from said repositories (either via `helm install` or `helm pull`)
It is entirely safe to run Helm from an ephemeral filesystem, such as a container. No state about what was installed in the cluster is stored locally.


Hope that answers your question.



Matthew Fisher

Caffeinated Software Engineer

Microsoft Canada


From: cncf-helm@... <cncf-helm@...> on behalf of Fox, Kevin M via Lists.Cncf.Io <Kevin.Fox=pnnl.gov@...>
Sent: Wednesday, February 5, 2020 12:05 PM
To: Abhirama <abhirama@...>; cncf-helm@... <cncf-helm@...>
Cc: cncf-helm@... <cncf-helm@...>
Subject: Re: [cncf-helm] Helm 3 "state"? Is there such a thing
 
As far as I can tell, the "state" is made up of:
* caches
* credentials
* repos

So long as your stateless job (k8s pod) adds whatever credentials and repos it needs as part of the job, you should be fine losing the state dirs when the pod is destroyed.

Thanks,
Kevin

________________________________________
From: cncf-helm@... <cncf-helm@...> on behalf of Abhirama <abhirama@...>
Sent: Tuesday, February 4, 2020 8:46 PM
To: cncf-helm@...
Subject: [cncf-helm] Helm 3 "state"? Is there such a thing

Hi,

I'm sorry if the question comes across as too lame.

Now that Helm 3 is client-only, I understand that it stores "state" information along with the client in the directories determined by the environment variables  XDG_CACHE_HOME, XDG_CONFIG_HOME and XDG_DATA_HOME. As a result, if Helm were to run on "stateless" machines (say if it were to run on a k8s pod every time it's run), does it still need access to the state it maintains in the aforementioned directories?

I've gone through https://helm.sh/docs/faq/#improved-upgrade-strategy-3-way-strategic-merge-patches and https://helm.sh/docs/faq/#xdg-base-directory-support and wasn't able to completely iron out my confusion. My guess is that, during the 3-way merge, the mentioned "old manifest" is directly retrieved from the k8s master itself and that Helm doesn't store any release related information in the aforementioned directories.

I'd greatly appreciate your help in clarifying this.

Thanks,
Abhirama.





Re: Helm 3 "state"? Is there such a thing

Fox, Kevin M <Kevin.Fox@...>
 

As far as I can tell, the "state" is made up of:
* caches
* credentials
* repos

So long as your stateless job (k8s pod) adds whatever credentials and repos it needs as part of the job, you should be fine losing the state dirs when the pod is destroyed.

Thanks,
Kevin

________________________________________
From: cncf-helm@... <cncf-helm@...> on behalf of Abhirama <abhirama@...>
Sent: Tuesday, February 4, 2020 8:46 PM
To: cncf-helm@...
Subject: [cncf-helm] Helm 3 "state"? Is there such a thing

Hi,

I'm sorry if the question comes across as too lame.

Now that Helm 3 is client-only, I understand that it stores "state" information along with the client in the directories determined by the environment variables XDG_CACHE_HOME, XDG_CONFIG_HOME and XDG_DATA_HOME. As a result, if Helm were to run on "stateless" machines (say if it were to run on a k8s pod every time it's run), does it still need access to the state it maintains in the aforementioned directories?

I've gone through https://helm.sh/docs/faq/#improved-upgrade-strategy-3-way-strategic-merge-patches and https://helm.sh/docs/faq/#xdg-base-directory-support and wasn't able to completely iron out my confusion. My guess is that, during the 3-way merge, the mentioned "old manifest" is directly retrieved from the k8s master itself and that Helm doesn't store any release related information in the aforementioned directories.

I'd greatly appreciate your help in clarifying this.

Thanks,
Abhirama.


Helm 3 "state"? Is there such a thing

Abhirama <abhirama@...>
 

Hi,

I'm sorry if the question comes across as too lame.

Now that Helm 3 is client-only, I understand that it stores "state" information along with the client in the directories determined by the environment variables  XDG_CACHE_HOME, XDG_CONFIG_HOME and XDG_DATA_HOME. As a result, if Helm were to run on "stateless" machines (say if it were to run on a k8s pod every time it's run), does it still need access to the state it maintains in the aforementioned directories?

I've gone through https://helm.sh/docs/faq/#improved-upgrade-strategy-3-way-strategic-merge-patches and https://helm.sh/docs/faq/#xdg-base-directory-support and wasn't able to completely iron out my confusion. My guess is that, during the 3-way merge, the mentioned "old manifest" is directly retrieved from the k8s master itself and that Helm doesn't store any release related information in the aforementioned directories.

I'd greatly appreciate your help in clarifying this.

Thanks,
Abhirama.


Re: Ordering of Helm subcharts

Bernd Adamowicz
 

Hi Paul!

Thanks for answering. Actually, I already tried your second approach with some scripting voodoo around it, which of course worked. But meanwhile I came across a better solution using the k8s List resource. It looks like this:

apiVersion: v1
kind: List
items:
  - apiVersion: {{ .Values.tekton.apiVersion }}
    kind: Task
    metadata:
...
  - apiVersion: {{ .Values.tekton.apiVersion }}
    kind: Pipeline
    metadata:
...
  - apiVersion: {{ .Values.tekton.apiVersion }}
    kind: PipelineRun
    metadata:
...

However, for me the question remains whether it is sufficient that Helm sorts resources only by a hard-wired list in its source. Wouldn't it be better to fetch all available resource types from the API and sort those? I'm not sure and would like to hear some opinions.


Re: Ordering of Helm subcharts

Paul Czarkowski <pczarkowski@...>
 

Hi Bernd!
I could see two options:

* use the `post-install` hook for the pipeline-run?
* use two Helm charts: one for the tasks and pipeline, the other for the pipeline-run. The latter can have the former as a dependency, or you could use helmfile or similar to order them.
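The first option would look roughly like this in the pipeline-run template (a sketch using Helm's standard hook annotation; the Tekton fields are abbreviated and the names are placeholders):

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: my-pipeline-run
  annotations:
    # Apply this resource only after the rest of the release
    # (tasks, pipeline) has been installed:
    "helm.sh/hook": post-install
spec:
  pipelineRef:
    name: my-pipeline
```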

On Tue, Jan 21, 2020 at 7:02 AM Bernd Adamowicz <bernd.adamowicz@...> wrote:

According to the documentation, it is possible to declare dependencies in Helm charts using the charts subdirectory. However, the order in which the K8s resources are deployed depends on the implementation of kind_sorter.go, which takes only native K8s resources into account.

Now I've got a Helm chart dependency with no native K8s resources at all. Instead, only Tekton pipeline resources are used, which (as several attempts showed) makes the order of the deployed Tekton resources unpredictable.

Actually I want to have these Tekton resources deployed:

  • first: Tekton task
  • second: Tekton pipeline
  • third: Tekton pipeline-run

And I have created this chart structure to achieve it (of course with all the necessary files in place):

pipeline-run/charts/pipeline/charts/task/

As mentioned, the order is not predictable, and usually the pipeline run is started before the tasks are available, which leads to an error.

Now I'm not sure whether this is worth a feature request to sort not only native K8s resources but all resource types available inside the cluster.

I'd like to hear some opinions. Thanks!
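The kind_sorter behaviour discussed above can be illustrated with a small sketch (an illustration of the sorting idea only, not Helm's actual Go code; the kind list is heavily abbreviated):

```python
# Illustrative sketch, NOT Helm's actual kind_sorter.go: Helm installs
# resources in a fixed order of known Kubernetes kinds; kinds it does not
# recognize (e.g. Tekton's Task/Pipeline/PipelineRun) all share the same
# "unknown" rank, so their relative order carries no dependency logic.
KNOWN_ORDER = ["Namespace", "ServiceAccount", "Secret", "ConfigMap", "Service", "Deployment"]

def install_order(kinds):
    rank = {kind: i for i, kind in enumerate(KNOWN_ORDER)}
    # sorted() is stable, so unknown kinds keep their input order.
    return sorted(kinds, key=lambda kind: rank.get(kind, len(KNOWN_ORDER)))

print(install_order(["Deployment", "PipelineRun", "Namespace", "Task"]))
# → ['Namespace', 'Deployment', 'PipelineRun', 'Task']
```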


Dynamic Helm chart Creation

smart.imran003@...
 
Edited

Hello All,

I want to use templates in an overall umbrella chart and include/build only the required Helm charts.
That means the final umbrella chart (main values.yaml) should only contain information about the charts I actually need to build, instead of building the entire chart (with applications I don't need for a particular build).

Example:
I have a main values.yaml with the applications below:

ElasticSearch
Kibana
Grafana
Logstash

But I don't want to build Kibana in one build and Grafana in another, which means I don't want the definition itself to be available in values.yaml.

Is it possible to program something like that using Go? Please recommend something for it.

This is because, when I deliver the chart to a customer, I don't want the customer to know that multiple applications are built from this chart.

Sorry for my grammar.

Thanks,
Syed
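One built-in mechanism that may cover part of what is asked above: Helm chart dependencies can be switched on and off per install via a `condition` (or `tags`) field in the umbrella chart, so disabled subcharts are never rendered. A sketch (chart names, versions, and the repository URL are placeholders):

```yaml
# umbrella/Chart.yaml (Helm 3, apiVersion v2)
apiVersion: v2
name: umbrella
version: 0.1.0
dependencies:
  - name: kibana
    version: 1.0.0
    repository: https://example.com/charts   # placeholder
    condition: kibana.enabled
  - name: grafana
    version: 1.0.0
    repository: https://example.com/charts   # placeholder
    condition: grafana.enabled
```

A build could then enable only the desired application, e.g. `helm install rel ./umbrella --set grafana.enabled=true --set kibana.enabled=false`. Note this does not hide the other charts' existence from anyone inspecting the packaged umbrella chart, which is the other half of the original question.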


Ordering of Helm subcharts

Bernd Adamowicz
 

According to the documentation, it is possible to declare dependencies in Helm charts using the charts subdirectory. However, the order in which the K8s resources are deployed depends on the implementation of kind_sorter.go, which takes only native K8s resources into account.

Now I've got a Helm chart dependency with no native K8s resources at all. Instead, only Tekton pipeline resources are used, which (as several attempts showed) makes the order of the deployed Tekton resources unpredictable.

Actually I want to have these Tekton resources deployed:

  • first: Tekton task
  • second: Tekton pipeline
  • third: Tekton pipeline-run

And I have created this chart structure to achieve it (of course with all the necessary files in place):

pipeline-run/charts/pipeline/charts/task/

As mentioned, the order is not predictable, and usually the pipeline run is started before the tasks are available, which leads to an error.

Now I'm not sure whether this is worth a feature request to sort not only native K8s resources but all resource types available inside the cluster.

I'd like to hear some opinions. Thanks!


Re: Helm Charts Hosts on OCI Registry

Josh Dolitsky
 

Hello, thanks for sharing. I took a look at the source. Maybe you can explain the project a bit more: is the purpose of this so that you can see your charts listed in "helm chart list" upon upload via the ChartMuseum API? OCI distribution has its own API for upload, which is used by "helm chart push": https://github.com/opencontainers/distribution-spec/blob/master/spec.md#pushing-an-image

If your goal is to have a chart repo in front, and registry in the back, you might consider submitting a new "OCI" backend to chartmuseum/storage: https://github.com/chartmuseum/storage

Here is a related GitHub issue: https://github.com/helm/chartmuseum/issues/237

Josh

On Thu, Jan 9, 2020 at 3:30 AM Hang Yan <hangyan@...> wrote:
https://github.com/hangyan/chart-registry. Hello everyone, I have created a new project to explore the idea of using an OCI registry as a chart repo. Welcome to check it out!


Helm Charts Hosts on OCI Registry

Hang Yan
 

https://github.com/hangyan/chart-registry. Hello everyone, I have created a new project to explore the idea of using an OCI registry as a chart repo. Welcome to check it out!


Need help to my query

Somavarapu, Bharathkumar <Bharathkumar.Somavarapu2@...>
 

Hi Team,

 

We are using Helm in our Kubernetes clusters. Below are the 2 commands we have created; could you please let me know the difference between these 2 commands?

 

subprocess.check_output(["helm", "list", "--kubeconfig", kubeconfig_path, "--tiller-namespace", namespace, release])

 

subprocess.check_output(["helm", "delete", release, "--kubeconfig", kubeconfig_path, "--purge", "--tiller-namespace", namespace])

 

Thanks

Bharath.S
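For what it's worth, the two commands above differ as follows (Helm 2 syntax, judging by the `--tiller-namespace` flags; the variable names come from the snippets):

```
# Read-only: lists releases matching <release> via the given Tiller namespace.
helm list --kubeconfig "$kubeconfig_path" --tiller-namespace "$namespace" "$release"

# Destructive: deletes the release, and with --purge also removes its
# stored release history so the release name can be reused later.
helm delete "$release" --kubeconfig "$kubeconfig_path" --purge --tiller-namespace "$namespace"
```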


Helm change the storageclass inside the container under Java Opts to change it to EFS

kousik.d@...
 

How do I change the container config in the StatefulSet file to change the storage class?


Re: Helm chart versioning Umbrella chart - CI/CD

zouhair hamza
 

Thank you for your reply!

Can you explain your suggestion a bit more: "bumping the version of the umbrella chart (minor version at least)"?

Thanks


Re: Helm chart versioning Umbrella chart - CI/CD

Devdatta Kulkarni
 

I would suggest bumping the version of the umbrella chart (minor version at least). That way you will be able to track back which umbrella chart contains your new child chart.

- Devdatta
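A sketch of that suggestion (names and versions are placeholders):

```yaml
# umbrella/Chart.yaml: bump the umbrella's own minor version whenever a
# child chart is added or changed, and pin the child's new version.
apiVersion: v2
name: umbrella
version: 1.3.0        # was 1.2.x before the child chart changed
dependencies:
  - name: my-service
    version: 0.42.0   # matches the service's Docker image tag
    repository: https://example.com/charts   # placeholder
```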


From: cncf-helm@... <cncf-helm@...> on behalf of Hamza.lil01 via Lists.Cncf.Io <Hamza.lil01=gmail.com@...>
Sent: Monday, December 9, 2019 2:54 PM
To: cncf-helm@... <cncf-helm@...>
Cc: cncf-helm@... <cncf-helm@...>
Subject: [cncf-helm] Helm chart versioning Umbrella chart - CI/CD
 
Hello,

Versioning charts has always been a challenge, especially in a CI/CD pipeline.

I have a CI pipeline that contains a child chart directory, which is pushed to a repository with the same version as the Docker image; this then triggers the CD repository. The CD pipeline downloads the child chart under an umbrella chart to deploy the whole thing.

So I was wondering: what are the best practices, and how should an umbrella chart be versioned? How should the versioning of child chart + umbrella chart be handled? Should it be the same version for both?

Thanks


Helm chart versioning Umbrella chart - CI/CD

zouhair hamza
 

Hello,

Versioning charts has always been a challenge, especially in a CI/CD pipeline.

I have a CI pipeline that contains a child chart directory, which is pushed to a repository with the same version as the Docker image; this then triggers the CD repository. The CD pipeline downloads the child chart under an umbrella chart to deploy the whole thing.

So I was wondering: what are the best practices, and how should an umbrella chart be versioned? How should the versioning of child chart + umbrella chart be handled? Should it be the same version for both?

Thanks


Helm 3.0.0 has been Released!

Matt Fisher <matt.fisher@...>
 

Helm 3.0.0 has been released! Congratulations, and a big thank you to everyone who helped contribute to the release.




Matthew Fisher

Caffeinated Software Engineer

Microsoft Canada


Re: Troubleshooting a Helm installation

Matt Fisher <matt.fisher@...>
 

There's a section about this in the troubleshooting documentation: https://helm.sh/docs/tiller_ssl/#troubleshooting

Let us know if this helps.


Matthew Fisher

Caffeinated Software Engineer

Microsoft Canada


From: cncf-helm@... <cncf-helm@...> on behalf of bert.laverman via Lists.Cncf.Io <bert.laverman=axoniq.io@...>
Sent: Wednesday, October 30, 2019 2:26 AM
To: cncf-helm@... <cncf-helm@...>
Cc: cncf-helm@... <cncf-helm@...>
Subject: [cncf-helm] Troubleshooting a Helm installation
 
Hello Helm!
I have a Helm installation set up to install some of our DevOps toolchain, which generally works fine and as such tends to get "ignored" on the Ops side. As a result, helm commands are run only rarely (once every few months).

My current task started with "Let's add a PostgreSQL installation", and then ran into "Bad Certificate" errors. Unfortunately there is no Helm-for-Helm, and the documentation describes how to set up and install Helm securely pretty well, but not so much how to find out why it stopped working. All the applications I have installed work fine, but Helm itself refuses to do anything. My last bit of work was refreshing the certificates (which naturally had expired unnoticed), and I could reinstall our Ingress Controller after that, but 4 weeks on it refuses to budge.

"helm list --tls --debug" simply responds with
[debug] Created tunnel using local port: '55563'

[debug] SERVER: "127.0.0.1:55563"

[debug] Key="/Users/bertlaverman/.helm/key.pem", Cert="/Users/bertlaverman/.helm/cert.pem", CA="/Users/bertlaverman/.helm/ca.pem"

Error: remote error: tls: bad certificate

This does not really tell me anything beyond the fact that I have some kind of TLS-related issue. Which certificate is in trouble, Helm's or Tiller's, is unknown, nor what exactly Helm's beef with it is.
Who can help me with some basic troubleshooting guidelines?

I don't mind cleaning the resulting story up for addition to the docs, but for now I'm looking at having to spend days trying to find out what happened here.
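One quick check that may narrow this down (assuming, as before, that certificate expiry is the culprit; the file paths come from the debug output above):

```
# Inspect the client-side certificates Helm is using:
openssl x509 -in ~/.helm/cert.pem -noout -subject -enddate
openssl x509 -in ~/.helm/ca.pem -noout -subject -enddate

# Tiller has its own server certificate inside the cluster; compare its
# expiry too (secret name and namespace vary by installation):
kubectl get secret -n kube-system | grep -i tiller
```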

Cheers,
Bert
