
Fix #2311: change layer2status object namespace and add owner reference #2351

Open · wants to merge 5 commits into base: main

Conversation

@lwabish (Contributor) commented Apr 1, 2024

Is this a BUG FIX or a FEATURE ?:

Uncomment only one, leave it on its own line:

/kind bug

/kind cleanup
/kind feature
/kind design
/kind flake
/kind failing
/kind documentation
/kind regression

What this PR does / why we need it:

fix #2311

Special notes for your reviewer:

Release note:

Enable Layer2Service status generation

@lwabish (Contributor, Author) commented Apr 1, 2024

Unit tests and e2e tests are coming soon. Please have a look at the implementation first, thanks!

@oribon (Member) left a comment:

thanks a lot for pushing this forward!
Without deep-diving into the splitting logic here, this seems a bit cumbersome. On the issue I mentioned that a reasonable approach might be naming the statuses with GenerateName: <node-name>-, and from there we can rely on the labels of the resource (to understand which service it belongs to). Then we can add the service's name/namespace to the kubectl output and it should be readable. wdyt?

@lwabish (Contributor, Author) commented Apr 1, 2024

> thanks a lot for pushing this forward! Without deep-diving into the splitting logic here, this seems a bit cumbersome. On the issue I mentioned that a reasonable approach might be naming the statuses GenerateName: <node-name>-, and from there we can rely on the labels of the resource (to understand to which service it belongs). then we can add the service's name/namespace to the kubectl output and it should be readable. wdyt?

I think most of the complexity comes from determining the actual namespace of the service.

As we designed before, the reconciler receives the name/namespace from both the channel and the cluster CR changes. After moving the status objects to the speaker namespace, it can't determine the service namespace directly from req.Namespace, because req.Namespace is no longer always the namespace of the service.

Also, if the reconciler is triggered by the channel event, in which case the CR has not been created yet, getting the information from labels doesn't seem possible.

So in my implementation I used svcName-svcNamespace-nodeName as the CR name. That way I can split the string to get every part I need (although the process behind this is a little cumbersome, because I have to handle the fact that svcName and svcNamespace may themselves contain the separator -).

Maybe you can give me an example of the GenerateName: <node-name>- approach to help me understand the details of your previous idea?

@oribon (Member) commented Apr 1, 2024

oh, I see the problem now; I hadn't thought it all the way through. It might be that generating a name will also not be so easy. I'll think about it and try to go over this tomorrow 😅 thanks!

@oribon (Member) commented Apr 2, 2024

I thought about this for a while and might have a reasonable alternative that should work in theory.
Taking what you said about understanding things from the resource's name being problematic, I suggest we try a slightly different approach:

  1. Instead of using a predefined resource name, we opt for some generated name (such as GenerateName: <svc-name>- / l2- / etc.)
  2. We make sure that reconcile requests are raised for servicename+namespace instead of for the l2status resource name. That will require slightly modifying the current eventHandler (EnqueueRequestsFromMapFunc) you wrote, and replacing the current For(&v1beta1.ServiceL2Status{}) with another eventHandler approach: Named("servicel2status").Watches(&v1beta1.ServiceL2Status{}, handler.EnqueueRequestsFromMapFunc(...)). In the new eventHandler we can look at the object's labels to understand which service name + namespace is relevant for the resource (this probably means we add a service namespace label as well) and return a reconcile.Request accordingly.
  3. Modify the existing predicate as needed (e.g. reading labels instead of the name)

What I like about this resource naming change is that we'll get a more compact result and won't have any logic for the name itself (or promises). This also avoids some corner cases such as trying to create a resource with a name that is too long.

With these changes, the Reconciliation won't change too much:
fetch status for service -> list existing resource (by the relevant node+service+namespace labels) and create/update/delete as necessary.

wdyt?
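As a rough sketch of point 2 (not the PR's actual code; the label keys here are invented for illustration), the body of such an EnqueueRequestsFromMapFunc boils down to a pure label-to-request mapping, which keeps it easy to unit test:

```go
package main

import "fmt"

// Hypothetical label keys; the real keys would live in the controllers package.
const (
	labelServiceName      = "metallb.io/service-name"
	labelServiceNamespace = "metallb.io/service-namespace"
)

// requestForStatus derives the service name/namespace a ServiceL2Status
// belongs to from its labels, the way an EnqueueRequestsFromMapFunc body
// could. It returns ok=false when the labels are missing, so the event
// is dropped instead of producing a bogus reconcile request.
func requestForStatus(labels map[string]string) (name, namespace string, ok bool) {
	if labels == nil {
		return "", "", false
	}
	name, namespace = labels[labelServiceName], labels[labelServiceNamespace]
	if name == "" || namespace == "" {
		return "", "", false
	}
	return name, namespace, true
}

func main() {
	n, ns, ok := requestForStatus(map[string]string{
		labelServiceName:      "my-svc",
		labelServiceNamespace: "prod",
	})
	fmt.Println(n, ns, ok) // my-svc prod true

	_, _, ok = requestForStatus(nil)
	fmt.Println(ok) // false
}
```

The real handler would wrap this in handler.EnqueueRequestsFromMapFunc and build a reconcile.Request from the returned name/namespace.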

@lwabish (Contributor, Author) commented Apr 2, 2024

> I thought about this for a while and might have a reasonable alternative that should work in theory. Taking what you said about understanding stuff from the resource's name being problematic I suggest we try a slightly different approach:
>
>   1. Instead of using a predefined resource name, we opt to using some generated name (such as GenerateName: <svc-name>- / l2- / etc.)
>   2. We make sure that reconcile requests are raised for servicename+namespace instead of for the l2status resource name. That will require slightly modifying the current eventHandler (EnqueueRequestsFromMapFunc) you wrote, and replacing the current For(&v1beta1.ServiceL2Status{}) with another eventHandler approach Named("servicel2status").Watches(&v1beta1.ServiceL2Status{}, handler.EnqueueRequestsFromMapFunc(...) . In the new eventHandler we can look at the object's labels to understand which service name + namespace is relevant for the resource (this probably means we add a service namespace label as well) and return a reconcile.Request accordingly.
>   3. Modify the existing predicate as needed (e.g reading labels instead of name)
>
> What I like about this resource naming change is that we'll get a more compact result and won't have any logic for the name itself (or promises). This also avoids some corner cases such as trying to create a resource with a name that is too long.
>
> With these changes, the Reconciliation won't change too much: fetch status for service -> list existing resource (by the relevant node+service+namespace labels) and create/update/delete as necessary.
>
> wdyt?

Sounds good. Let me implement this and see if there are any further problems.

@lwabish (Contributor, Author) commented Apr 7, 2024

@oribon Please take a look at the latest implementation.

@oribon (Member) left a comment:

thanks a lot for this! Overall looks good; I still need to go over the e2es.

Is there a reason you went with the "%s-%s-%s" format instead of a generated name for the resource? I'm concerned the names will be too long (or just messy in a -o wide); I wonder if you hit something that you didn't like.

@@ -23,11 +23,16 @@ spec:
- args:
- --port=7472
- --log-level=info
- --enable-l2-service-status=true
Member:

I think we can flip the flag to default to true instead. Also, if it's not there already, can you add a knob for helm please?

e2etest/go.mod Outdated
Comment on lines 5 to 9
replace (
k8s.io/kubectl => k8s.io/kubectl v0.29.3
k8s.io/kubelet => k8s.io/kubelet v0.29.3
)

Member:
why is this necessary?

Contributor (author):

Something similar to this occurred last time: #2198 (comment)

Further information: while I'm developing with a JetBrains IDE, it runs go list -m -json all to obtain all dependencies at project load time, I guess.

Without the replaces, if the project requires some Kubernetes module whose version is set to v0.0.0, the go list command fails.

So this is something of a workaround, I think, but I'm not sure it's the best way. Also, I'm a little confused about why this problem isn't triggered in the GitHub CI env.

Member:
yeah it's a bit odd, can you try removing it from the commit and see what happens in ci?

Contributor (author):

> yeah it's a bit odd, can you try removing it from the commit and see what happens in ci?

CI will work fine because this actually doesn't affect go mod tidy. Maybe I can put this in a temporary changelist to avoid the IDE error.

Member:

What if you go get those deps instead of replacing? Replacing pins the deps and it will require manual updates (plus, we made a considerable effort to avoid doing that), with require we can rely on dependabot.

Contributor (author):

@oribon I wonder if we should enforce go list -m -json all to work in CI, as we want to have a smooth dev experience (not only for those using vim :trollface: )

Speaking of this, you can take a look at #2349 , which is a bug related to dev experience

Member:

this has still to be replaced with require, right?

Contributor (author):

> this has still to be replaced with require, right?

Seems right. Without the replaces, go list still reports errors:

❯ cd e2etest
❯ go list -m -json all
go: k8s.io/kubectl@v0.0.0: invalid version: unknown revision v0.0.0
go: k8s.io/kubelet@v0.0.0: invalid version: unknown revision v0.0.0

Member:

even with require, you mean?

Contributor (author):

> even with require, you mean?

yes

@@ -280,10 +280,16 @@ func New(cfg *Config) (*Client, error) {

// metallb controller doesn't need this reconciler
if cfg.EnableL2Status && cfg.Layer2StatusChan != nil {
selfPod, err := clientset.CoreV1().Pods(cfg.Namespace).Get(context.TODO(), os.Getenv("METALLB_POD_NAME"), metav1.GetOptions{})
Member:

I think we should read the env var in the speaker's main.go (where the other env vars are read) and propagate it

state.Status = desiredStatus
err = controllerutil.SetOwnerReference(r.SpeakerPod, state, r.Scheme())
if err != nil {
level.Warn(r.Logger).Log("error setting owner ref", err)
Member:

why not return an err here?

Contributor (author):

I was considering maintaining the owner reference as an extra function in addition to the core logic, rather than as part of it,
so I didn't fail fast when facing errors about the owner reference.

statusObjFetcherFunc := func() ([]v1beta1.ServiceL2Status, error) {
statusList := v1beta1.ServiceL2StatusList{}
err := k8sClient.List(context.TODO(), &statusList,
client.InNamespace(speakerNamespace),
Member:

can you make this list in all namespaces? (just so we're sure we aren't creating unnecessary ones)

return err.Error()
}
if len(statuses) != 1 {
return ""
Member:

nit: have an fmt.Errorf(...) explaining what happened

if len(statuses) != 1 {
return ""
}
if len(statuses[0].OwnerReferences) != 1 {
return ""
Member:

same, also can you verify the owner ref is the pod?

return err.Error()
}
if len(statuses) != 1 {
return ""
Member:

same

if len(statuses) != 1 {
return ""
}
if len(statuses[0].Status.Interfaces) == 0 {
return ""
Member:

same + missing owner ref validation

@lwabish (Contributor, Author) commented Apr 9, 2024

> thanks a lot for this! overall looks good, still need to go over the e2es.
>
> is there a reason you went with the "%s-%s-%s" format instead of a generated name for the resource? I'm concerned the names will be too long (or just messy in a -o wide), wonder if you hit something that you didn't like

Regarding the name of the CR, I haven't paid much attention to it; I just chose this format because it was the first thing that came to mind. We can decide on the final naming for the CRs later. The current logic has removed the dependency on CR names, so I think this is a relatively independent part.

@@ -270,6 +270,7 @@ speaker:
enabled: true
# ignore the exclude-from-external-loadbalancer label
ignoreExcludeLB: false
layer2Status: false
Member:

nit: wdyt about calling this enableLayer2ServiceStatus and default to true?

Member:

if we are confident with the implementation, this should not be optional. I added the feature flag only because then we could release a metallb version.

Member:

We should even rollback the two commits from #2312

Member:

I actually like the idea of keeping the option to disable it if someone really wants to (for the sake of "performance impact" or whatever) while defaulting to enabled, wdyt?

Member:

I'd rather have it and that's it. If we want to leave the code that disables it and the parameter, fine (even though it comes with a complexity cost, even if little), but I would hide the parameter from helm until we realize this doesn't scale well (and even if it does, maybe it will be better to optimize it rather than allowing users to disable).

speaker/main.go Outdated
Comment on lines 89 to 91
// TODO: we are hiding the feature behind a feature flag because of https://github.com/metallb/metallb/issues/2311
// This flag can be removed once the issue is fixed.
enableL2ServiceStatus = flag.Bool("enable-l2-service-status", false, "enables the experimental l2 service status feature")
enableL2ServiceStatus = flag.Bool("enable-l2-service-status", true, "enables the experimental l2 service status feature")
Member:

nit: remove the comments, also I'd say this is not experimental once we default to true

level.Debug(r.Logger).Log("controller", "Layer2StatusReconciler", "enqueueing", "object", evt)
// unify req.Name to svcName-nodeName
level.Debug(r.Logger).Log("controller", "Layer2StatusReconciler", "enqueueing", "object", object)
label := object.GetLabels()
Member:
what I mean is doing the same as before + what you added to the predicate: verifying the object is an l2status, that the labels aren't nil etc. (I guess being validated by the predicate is enough, this is just to be on the safe side)

e2etest/go.mod Outdated
Comment on lines 5 to 9
replace (
k8s.io/kubectl => k8s.io/kubectl v0.29.3
k8s.io/kubelet => k8s.io/kubelet v0.29.3
)

Member:

yeah it's a bit odd, can you try removing it from the commit and see what happens in ci?

return ""
}
return ss[0].Status.Node
}, 5*time.Second).Should(Equal(speakingNode))
})
Member:

can we also add validation that the status is deleted after the service is deleted? (I think it's not there, ignore this if I'm wrong)

@lwabish (Contributor, Author) commented Apr 17, 2024

@oribon please take a look again

@oribon (Member) left a comment:

Left a few nits; overall this looks good from my side.
Regarding the object's name, I'm ok with the current approach, but I think we should opt for some generated name ("l2-" / "NODE-NAME-" / etc.), as I believe a node-service-namespace format is a bit cumbersome. If we go that route, we should also make sure the service's name/namespace is visible when running kubectl get -o wide.

Also, can you organize the commits as we did last time?

I'll be away for the next ~2.5 weeks. Thanks a lot for your contributions with this feature, we really appreciate it!

/cc @fedepaol

}
return err
}, 2*time.Minute, 1*time.Second).ShouldNot(HaveOccurred())
Expect(l2Statuses).To(HaveLen(1))
Member:

why this change? I think keeping it in the eventually is clearer (returning an error with what we got if it's not 1)

}
return err
}, 2*time.Minute, 1*time.Second).ShouldNot(HaveOccurred())
Expect(l2Statuses).To(HaveLen(0))
Member:

same (have it in eventually)

return ctrl.Result{}, nil
}

func (r *Layer2StatusReconciler) SetupWithManager(mgr ctrl.Manager) error {
p := predicate.NewPredicateFuncs(func(object client.Object) bool {
if s, ok := object.(*v1beta1.ServiceL2Status); ok {
return strings.HasSuffix(s.Name, r.NodeName)
return r.NodeName == s.GetLabels()[LabelAnnounceNode]
Member:
nit: add a verification that the labels aren't nil
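A sketch of the guarded check the predicate body could perform (the label key value is hypothetical; the PR refers to it as LabelAnnounceNode). Indexing a nil map is actually safe in Go, but an explicit guard documents the "object has no labels at all" case the nit asks about:

```go
package main

import "fmt"

// Hypothetical value for the announce-node label key; the PR names the
// constant LabelAnnounceNode in the controllers package.
const labelAnnounceNode = "metallb.io/node"

// matchesNode mirrors what the predicate body could check: that the
// object carries labels at all, and that the announce-node label equals
// this speaker's node name.
func matchesNode(labels map[string]string, nodeName string) bool {
	if labels == nil {
		return false
	}
	return labels[labelAnnounceNode] == nodeName
}

func main() {
	fmt.Println(matchesNode(map[string]string{labelAnnounceNode: "node1"}, "node1")) // true
	fmt.Println(matchesNode(nil, "node1"))                                           // false
}
```

In the real predicate this would run after the type assertion to *v1beta1.ServiceL2Status, with labels coming from s.GetLabels().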

level.Debug(r.Logger).Log("controller", "Layer2StatusReconciler", "enqueueing", "object", evt)
// unify req.Name to svcName-nodeName
level.Debug(r.Logger).Log("controller", "Layer2StatusReconciler", "enqueueing", "object", object)
label := object.GetLabels()
Member:
I think this is still the same since last time

// test service is located in testNamespace
// speaker pod is deployed in speakerNamespace
// the layer2 status crs are maintained in speakerNamespace
err = func(names ...string) error {
Member:

nit: having this as a loop over []string{testNamespace, speakerNamespace} is more readable imo

@fedepaol (Member) commented:

> I'll be away for the next ~2.5 weeks, thanks a lot for your contributions with this feature we really appreciate it!
>
> /cc @fedepaol

@lwabish ping me when you adjust the commits and fix the last issues and I'll move forward from here, you are in good hands!

@lwabish lwabish force-pushed the fix-2311 branch 6 times, most recently from 2aba615 to 808e8f4 Compare April 20, 2024 04:32
@lwabish (Contributor, Author) commented Apr 20, 2024

@fedepaol
I have rebased the commits. Now let me summarize what is left to be done, from my point of view.

  1. Make sure the SetupWithManager function of Layer2StatusReconciler is implemented as we expected.
    Refer to this review and our discussion: Fix #2311: change layer2status object namespace and add owner reference #2351 (comment)
  2. Make some adjustments to the layer2 status object names based on generated names, which I will finish as the last step.

level.Debug(r.Logger).Log("controller", "Layer2StatusReconciler", "enqueueing", "object", evt)
// unify req.Name to svcName-nodeName
level.Debug(r.Logger).Log("controller", "Layer2StatusReconciler", "enqueueing", "object", object)
label := object.GetLabels()
Member:

See my comment above. I think it does make sense not to validate against labels, because some external actor could mess with them. On the other hand, I'd (either here or in the predicate) at least log if the object is not of the right type. Maybe even panic, as it is something we are not expecting.

@@ -14,10 +19,10 @@ require (
github.com/prometheus/client_golang v1.19.0 // indirect
go.uber.org/zap v1.26.0 // indirect
golang.org/x/sys v0.18.0 // indirect
k8s.io/api v0.29.3
k8s.io/api v0.29.4
Member:

is there a reason for these bumps?

Contributor (author):

These are submitted with this: #2351 (comment)

Contributor (author):

@fedepaol So should I downgrade this dependency version?

@fedepaol (Member) commented May 7, 2024

@lwabish as always, no pressure, just checking you are not waiting on something from us.
I think there are a few unresolved discussions.

@lwabish (Contributor, Author) commented May 7, 2024

> @lwabish as always, no pressure, just checking you are not waiting something from us.
>
> I think there are a few unsolved discussions.

Oh, yes. I have been working on a job switch and am almost done.

I should be able to continue in the coming days this week 🤓

@lwabish lwabish force-pushed the fix-2311 branch 2 times, most recently from d452347 to 1c31bce Compare May 9, 2024 08:08
@lwabish (Contributor, Author) commented May 13, 2024

@fedepaol Please take a look at the current implementation. 😄

@fedepaol (Member) commented:

> @fedepaol Please take a look at the current implementation. 😄

will do by eod, thanks!

svcName := strings.TrimSuffix(req.Name, fmt.Sprintf("-%s", r.NodeName))
ipAdvS := r.StatusFetcher(types.NamespacedName{Name: svcName, Namespace: req.Namespace})
serviceName, serviceNamespace := req.Name, req.Namespace
statusName, statusNamespace := fmt.Sprintf("%s-%s-%s", req.Name, req.Namespace, r.NodeName), r.Namespace
Member:

nit: using serviceName as the argument in the formatting string would make it clearer how the statusName is built.
Also, @oribon had concerns about generating a name this way, with the risk of going over the maximum size. The alternative would be to use a generated name, but doing so would make the status / service / node binding rely only on something mutable like labels.

If we do so, we'd risk having somebody change the label and leave a rogue status around. A possible alternative would be:

  • generated name
  • node and service name as labels (for better discoverability)
  • node and service name explicit in the status part (to provide immutability of the information)

If we do that, we'll probably have to add an index as well, so we can retrieve the entry easily (we won't be able to do it by name anymore).

Thoughts?

Based on the outcome of this discussion, we'll have to choose what to do with the predicate.

Contributor (author):

What do you mean by adding an index? Can you give me a quick example for better understanding?

Member:

Something along the lines of what we do here https://github.com/metallb/metallb/blob/main/internal/k8s/k8s.go#L246, where we set a key to retrieve the ep slices based on the service name.

Member:

+1 for that idea!

Contributor (author):

Let me try to tidy this up.
With completely generated names for the CRs, along with an index that maintains the node->svc relationship, we would use something like this in the predicate to determine whether a CR belongs to a specific node (speaker), am I right?

if err := cl.List(ctx, &slices, client.MatchingFields{epslices.SlicesServiceIndexName: serviceName.String()}); err != nil {
return nil, err
}

Member:

yep, that's correct
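For intuition only, here is a plain-Go analogue of what the field indexer buys us (the label keys are invented for illustration): a precomputed service-key -> statuses map, so a status can be found by the service it belongs to even though its own name is generated:

```go
package main

import "fmt"

// status is a trimmed stand-in for v1beta1.ServiceL2Status.
type status struct {
	Name   string
	Labels map[string]string
}

// buildIndex mimics what a controller-runtime field indexer maintains:
// a key -> objects map, so statuses can be listed by the service they
// belong to without relying on the resource name.
func buildIndex(statuses []status) map[string][]status {
	idx := make(map[string][]status)
	for _, s := range statuses {
		key := s.Labels["metallb.io/service-namespace"] + "/" + s.Labels["metallb.io/service-name"]
		idx[key] = append(idx[key], s)
	}
	return idx
}

func main() {
	idx := buildIndex([]status{
		{Name: "l2-abc", Labels: map[string]string{"metallb.io/service-name": "my-svc", "metallb.io/service-namespace": "prod"}},
		{Name: "l2-def", Labels: map[string]string{"metallb.io/service-name": "other", "metallb.io/service-namespace": "prod"}},
	})
	fmt.Println(len(idx["prod/my-svc"])) // 1
}
```

With controller-runtime, this map is maintained by the cache once the index is registered via mgr.GetFieldIndexer().IndexField, and queried with client.MatchingFields as in the List snippet quoted earlier in the thread.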

In order to avoid dangling status resources, we add an owner reference from a speaker to the L2Service status it owns.

Because of that, we also move the instance to MetalLB's namespace.

Labels are added to make it easier to understand which node / speaker is announcing a given service.

Signed-off-by: lwabish <wubw@pku.edu.cn>
Added validation for the owner reference in the l2status instance.

Validate the l2status instance in the MetalLB speaker namespace instead of the service namespace.

Signed-off-by: lwabish <wubw@pku.edu.cn>
We validate that the owner reference is added to l2status instances correctly and
that the l2status instances are deleted correctly, without dangling, with the help of owner references.

Also, we validate the status instances in the speaker namespace instead of the service namespace because of the latest
update of the implementation.

Signed-off-by: lwabish <wubw@pku.edu.cn>
…config

We use the k8s downward API to expose the speaker pod name as an env variable.

Meanwhile, we update the RBAC config to allow the speaker pod to get speaker pods.

Finally, `inv generatemanifests` was run to regenerate the manifests.

With these manifest modifications, the owner reference implementation can work in a Kubernetes cluster.

Signed-off-by: lwabish <wubw@pku.edu.cn>
Use the k8s downward API to expose the speaker pod name as an env variable.

Update the RBAC config to allow the speaker to get the speaker pod.

With these modifications, a MetalLB installation from Helm works correctly.

Signed-off-by: lwabish <wubw@pku.edu.cn>

Successfully merging this pull request may close these issues.

ServiceL2Status follow-up: create resources in main namespace with speaker pod as OwnerRef
3 participants