
incorrect data #66

Open
christiancadieux opened this issue Jan 23, 2022 · 5 comments
Labels
bug Something isn't working

Comments

@christiancadieux

In some cases, the memory values for a node will not include the 'Mi' suffix:

NODE             CPU REQUESTS     CPU LIMITS         MEMORY REQUESTS         MEMORY LIMITS
10.145.197.168   42125m (75%)     148700m (265%)     221838Mi (82%)          416923Mi (154%)
10.145.197.169   45325m (80%)     121200m (216%)     62346Mi (23%)           180263Mi (66%)
10.145.197.170   14425m (25%)     37700m (67%)       45346Mi (16%)           100345Mi (37%)
162.150.14.214   13790m (24%)     45700m (81%)       39411368960000m (29%)   106336625408000m (78%)
162.150.14.215   13790m (24%)     39700m (70%)       38874498048000m (28%)   90767368960000m (67%)
162.150.14.216   16790m (29%)     42700m (76%)       46390690816000m (34%)   98283561728000m (72%)
162.150.14.217   12490m (22%)     39200m (70%)       38606062592000m (28%)   91841110784000m (68%)
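
For reference, those un-suffixed values seem to be raw milli-byte quantities ('m' = 1/1000 of a byte), so they can be decoded like this (a sketch using k8s.io/apimachinery's resource package):

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// "m" is the milli suffix, so this is 39411368960000/1000 bytes.
	q := resource.MustParse("39411368960000m")
	bytes := q.Value() // quantity in bytes: 39411368960
	fmt.Printf("%d bytes = %.1fGi\n", bytes, float64(bytes)/(1024*1024*1024))
	// Output: 39411368960 bytes = 36.7Gi
}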

In these cases the report is wrong, and the logic needs to change here:
https://github.com/robscott/kube-capacity/blob/master/pkg/capacity/resources.go#L356

For example, use more specific requestString and limitString variants so the code does not fall back to the wrong unit: add requestStringM() and limitStringM() that convert only memory quantities, avoiding the problem:

// printClusterLine prints the cluster-wide totals row.
func (tp *tablePrinter) printClusterLine() {
	tp.printLine(&tableLine{
		node:           "*",
		namespace:      "*",
		pod:            "*",
		container:      "*",
		cpuRequests:    tp.cm.cpu.requestString(tp.availableFormat),
		cpuLimits:      tp.cm.cpu.limitString(tp.availableFormat),
		cpuUtil:        tp.cm.cpu.utilString(tp.availableFormat),
		memoryRequests: tp.cm.memory.requestStringM(tp.availableFormat), // proposed memory-only formatter
		memoryLimits:   tp.cm.memory.limitStringM(tp.availableFormat),   // proposed memory-only formatter
		memoryUtil:     tp.cm.memory.utilString(tp.availableFormat),
		podCount:       tp.cm.podCount.podCountString(),
	})
}

// printNodeLine prints one per-node row.
func (tp *tablePrinter) printNodeLine(nodeName string, nm *nodeMetric) {
	tp.printLine(&tableLine{
		node:           nodeName,
		namespace:      "*",
		pod:            "*",
		container:      "*",
		cpuRequests:    nm.cpu.requestString(tp.availableFormat),
		cpuLimits:      nm.cpu.limitString(tp.availableFormat),
		cpuUtil:        nm.cpu.utilString(tp.availableFormat),
		memoryRequests: nm.memory.requestStringM(tp.availableFormat), // proposed memory-only formatter
		memoryLimits:   nm.memory.limitStringM(tp.availableFormat),   // proposed memory-only formatter
		memoryUtil:     nm.memory.utilString(tp.availableFormat),
		podCount:       nm.podCount.podCountString(),
	})
}
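
A minimal sketch of what those helpers might look like. requestStringM() and limitStringM() are the proposed names; formatToMebibytes() and percentOf() are hypothetical helpers invented here for illustration. It assumes resourceMetric carries request, limit, and allocatable as resource.Quantity values, as in the linked resources.go:

// Sketch only: memory-only formatters that normalize to Mi via Value()
// (bytes), so the output no longer depends on the suffix in the manifest.

func formatToMebibytes(q resource.Quantity) int64 {
	// Value() returns the quantity in bytes; integer-divide down to MiB.
	return q.Value() / (1024 * 1024)
}

// percentOf is a hypothetical helper: q as a whole percentage of total.
func percentOf(q, total resource.Quantity) int64 {
	if total.Value() == 0 {
		return 0
	}
	return q.Value() * 100 / total.Value()
}

func (rm *resourceMetric) requestStringM(availableFormat bool) string {
	// The availableFormat flag (available-vs-used display) is ignored
	// in this sketch for brevity.
	return fmt.Sprintf("%dMi (%d%%)",
		formatToMebibytes(rm.request), percentOf(rm.request, rm.allocatable))
}

func (rm *resourceMetric) limitStringM(availableFormat bool) string {
	return fmt.Sprintf("%dMi (%d%%)",
		formatToMebibytes(rm.limit), percentOf(rm.limit, rm.allocatable))
}
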
@robscott
Owner

Hey @christiancadieux, thanks for reporting this! I'm not sure how soon I'll be able to fix this, but very open to PRs.

@robscott added the bug label on May 13, 2022
@cloud-66
Contributor

Same here. I will try to fix this.

@robscott
Owner

Thanks @cloud-66!

@edrandall

This seems to happen when the deployment's requests/limits have been specified using M rather than Mi.
E.g., we have a deployment where it was entered as:

        resources:
          requests:
            cpu: 500m
            memory: 500M
          limits:
            cpu: 500m
            memory: 500M

Then kubectl resource-capacity --pods displays that deployment's memory incorrectly compared to all the others:

NODE                                NAMESPACE                POD                                                               CPU REQUESTS   CPU LIMITS       MEMORY REQUESTS      MEMORY LIMITS

aks-core-35064155-vmss000000        aqua                     aqua-sec-enforcer-fsdev-aks-foresight-muse2-ds-kkg7r              500m (12%)     500m (12%)       500000000000m (3%)   500000000000m (3%)
aks-core-35064155-vmss000000        kube-system              azuredefender-collector-ds-n2kwh                                  60m (1%)       210m (5%)        64Mi (0%)            128Mi (1%)
aks-core-35064155-vmss000000        kube-system              azuredefender-publisher-ds-9jz77                                  30m (0%)       60m (1%)         32Mi (0%)            200Mi (1%)
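
That matches the numbers: 500M parses to 500,000,000 bytes, and the broken output 500000000000m is exactly that value printed in milli-units. A standalone sketch showing how the two suffixes parse with k8s.io/apimachinery's resource package:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	m := resource.MustParse("500M")   // decimal suffix: 500 * 1000^2 bytes
	mi := resource.MustParse("500Mi") // binary suffix:  500 * 1024^2 bytes

	fmt.Println(m.Value(), mi.Value()) // 500000000 524288000
	fmt.Println(m.Format, mi.Format)   // DecimalSI BinarySI

	// Converting to bytes first and then to MiB normalizes the output
	// regardless of which suffix the manifest used.
	fmt.Printf("%dMi %dMi\n", m.Value()/(1024*1024), mi.Value()/(1024*1024))
	// Output: 476Mi 500Mi
}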

@cloud-66
Contributor

cloud-66 commented Feb 6, 2023

@christiancadieux @edrandall You should try again; this issue was fixed by #71
