
Docs: Labels should be elaborated more #1576

Closed
cdrage opened this issue Feb 8, 2023 · 10 comments · Fixed by #1882

Comments

@cdrage
Member

cdrage commented Feb 8, 2023

The page here: https://kompose.io/user-guide/#labels

The labels section is confusing to read and process.

We should have a section (header 3 / ###) for EACH label explaining it with an example.

@cdrage
Member Author

cdrage commented Feb 10, 2023

kompose.service.group is a pretty important label and unfortunately there is not much documentation on it either.

We'll have to showcase and document each label, what it does, and the corresponding Kubernetes YAML that it outputs.
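
For reference, a minimal sketch (not verified kompose output) of how kompose.service.group might be used to place two Compose services into one grouped workload; the service names and group value here are hypothetical:

services:
  app:
    image: nginx
    labels:
      # both services carry the same group value, so kompose
      # is expected to combine them into a single workload
      kompose.service.group: web-group
  log-collector:
    image: busybox
    command: ["tail", "-f", "/dev/null"]
    labels:
      kompose.service.group: web-group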

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on May 11, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Jun 10, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot closed this as not planned on Jul 10, 2023
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@AhmedGrati reopened this on Jul 10, 2023
@AhmedGrati
Contributor

/remove-lifecycle rotten

@k8s-ci-robot removed the lifecycle/rotten label on Jul 13, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Jan 24, 2024
@cdrage
Member Author

cdrage commented Jan 24, 2024

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label on Jan 24, 2024
@sosan
Contributor

sosan commented Apr 11, 2024

Here are some examples; opinions?

kompose.service.type

The kompose.service.type label defines the type of Service to be created. There are several types to choose from: nodeport, clusterip, loadbalancer, or headless.

In this example the kompose.service.type label is set to nodeport.

services:
  nginx:
    image: nginx
    build: 
      context: ./foobar
      dockerfile: foobar
    cap_add:
      - ALL
    container_name: foobar
    ports:
      - 8080:80
    labels:
      kompose.service.type: nodeport

The converted manifests expose the service on a static port on each worker node.

---
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: nginx
  name: nginx
spec:
  ports:
    - name: "8080"
      port: 8080
      targetPort: 80
  selector:
    io.kompose.service: nginx
  type: NodePort

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: nginx
  template:
    metadata:
      labels:
        io.kompose.network/kompose-default: "true"
        io.kompose.service: nginx
    spec:
      containers:
        - image: nginx
          name: foobar
          ports:
            - containerPort: 80
              hostPort: 8080
              protocol: TCP
          securityContext:
            capabilities:
              add:
                - ALL
      restartPolicy: Always

kompose.service.nodeport.port

The kompose.service.nodeport.port label defines the node port value when the service type is nodeport. This label should only be set when the service contains a single port. Kubernetes usually defines an allowed range for node port values; kompose does not validate this.

services:
  nginx:
    image: nginx
    ports:
      - 9090:80
    labels:
      kompose.service.nodeport.port: 9090
      kompose.service.type: nodeport

Converted output:

---
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: nginx
  name: nginx
spec:
  ports:
    - name: "9090"
      nodePort: 9090
      port: 9090
      targetPort: 80
  selector:
    io.kompose.service: nginx
  type: NodePort

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: nginx
  template:
    metadata:
      labels:
        io.kompose.network/kompose-default: "true"
        io.kompose.service: nginx
    spec:
      containers:
        - image: nginx
          name: nginx
          ports:
            - containerPort: 80
              hostPort: 9090
              protocol: TCP
      restartPolicy: Always

kompose.service.expose

The kompose.service.expose label defines whether the service needs to be made accessible from outside the cluster. If the value is set to "true", the provider sets the endpoint automatically; for any other value, the value is used as the hostname. If multiple ports are defined in a service, the first one is chosen to be exposed.

For the Kubernetes provider, an Ingress resource is created, and it is assumed that an ingress controller has already been configured. If the value is set to a comma-separated list, multiple hostnames are supported. Hostnames with paths are also supported.

For the OpenShift provider, a route is created.

In this example the kompose.service.expose label is set to the endpoints counter.example.com and foobar.example.com.

Docker Compose file:

version: "2"
services:
  web:
    image: tuna/docker-counter23
    ports:
     - "5000:5000"
    links:
     - redis
    labels:
      kompose.service.expose: "counter.example.com,foobar.example.com"
  redis:
    image: redis:3.0
    ports:
     - "6379"

Converted to Kubernetes Deployment, Service, and Ingress resources:

---
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: redis
  name: redis
spec:
  ports:
    - name: "6379"
      port: 6379
      targetPort: 6379
  selector:
    io.kompose.service: redis

---
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: web
  name: web
spec:
  ports:
    - name: "5000"
      port: 5000
      targetPort: 5000
  selector:
    io.kompose.service: web

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: redis
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: redis
  template:
    metadata:
      labels:
        io.kompose.network/kompose-default: "true"
        io.kompose.service: redis
    spec:
      containers:
        - image: redis:3.0
          name: redis
          ports:
            - containerPort: 6379
              protocol: TCP
      restartPolicy: Always

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: web
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: web
  template:
    metadata:
      labels:
        io.kompose.network/kompose-default: "true"
        io.kompose.service: web
    spec:
      containers:
        - image: tuna/docker-counter23
          name: web
          ports:
            - containerPort: 5000
              hostPort: 5000
              protocol: TCP
      restartPolicy: Always

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  labels:
    io.kompose.service: web
  name: web
spec:
  rules:
    - host: counter.example.com
      http:
        paths:
          - backend:
              service:
                name: web
                port:
                  number: 5000
            path: /
            pathType: Prefix
    - host: foobar.example.com
      http:
        paths:
          - backend:
              service:
                name: web
                port:
                  number: 5000
            path: /
            pathType: Prefix

@apocrx

apocrx commented May 17, 2024

kompose.service.group is a pretty important label and unfortunately there is not much documentation on it either.

We'll have to showcase and document each label, what it does, and the corresponding Kubernetes YAML that it outputs.

I am stuck using "fake" filtering due to the missing service labels:

kubectl -n my-namespace logs -f -l io.kompose.service!=nonexistent

Is there a way to make Docker Compose service labels available as Kubernetes Service labels?

kompose.service.group was not much help.
The Docker Compose labels for the Deployment translate to labels correctly, but the Service labels end up in metadata.annotations (as expected, based on the kustomize docs), and I can't find a way to add custom labels to Services.
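
For illustration, a minimal sketch of the behavior described above, using a hypothetical custom label (my.example/team); the placement under metadata.annotations reflects what is described in this comment rather than verified kompose output:

services:
  web:
    image: nginx
    labels:
      # custom label set in the Compose file
      my.example/team: platform

---
# Resulting Service as described above: the custom label ends up under
# metadata.annotations rather than metadata.labels, so it cannot be used
# with kubectl's -l label selectors.
apiVersion: v1
kind: Service
metadata:
  annotations:
    my.example/team: platform
  labels:
    io.kompose.service: web
  name: web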
