Regroup output for better debug flow and add missing DBs #87

Open · wants to merge 2 commits into main

Conversation

@kvijai82 (Member) commented Jul 1, 2022

Regroup the output into foundational and cp4waiops sections to make it easier to debug issues. A few DBs that were missing have also been added.

@kvijai82 (Member Author) commented Jul 1, 2022

What the output looks like:

[Two screenshots of the regrouped output]

Signed-off-by: Vijai Kalathur <kvijai@gmail.com>
@taylormgeorge91 (Member) left a comment


Would it be possible to make the component columns the same so they share a single printout?

If some of the custom columns mismatch but can be aliased, maybe we can use common ones with --no-headers output to strip the per-command headers and have them appear under the same header. I think the main concern would be padding/spacing inconsistencies based on the length of values.

At some point we need to circle back to this and start summarizing the output instead of adding more verbosity to it.
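
Rough sketch of the --no-headers idea (the second resource here is only an example, not necessarily one the script checks):

# Print one shared header, then let each check append its rows with --no-headers
# so they all land under the same columns. Caveat: each oc invocation pads its
# columns independently, so the widths may not line up (the padding concern above).
COLS='KIND:.kind,NAMESPACE:.metadata.namespace,NAME:.metadata.name,READY:.status.conditions[?(@.type=="Ready")].status'
echo "KIND          NAMESPACE     NAME          READY"
oc get elasticsearch iaf-system -o custom-columns="$COLS" --no-headers
oc get kafka iaf-system -o custom-columns="$COLS" --no-headers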

@siyer-123 (Collaborator) left a comment


Noticed a few minor things when I ran the script on a BVT cluster, but I definitely like the new output flow! Feels more streamlined. Thanks Vijai!

Comment on lines 289 to 292
# Elasticsearch status
INSTANCE_ES=$(oc get elasticsearch iaf-system -o custom-columns='KIND:.kind,NAMESPACE:.metadata.namespace,NAME:.metadata.name,READY:.status.conditions[?(@.type=="Ready")].status')
STATUS_ES=$(oc get elasticsearch iaf-system -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}')
printStatus "$STATUS_ES" "True" "$INSTANCE_ES"
@siyer-123 (Collaborator) commented Jul 5, 2022


I ran the script on the latest BVT and encountered an error with elasticsearch:

[Screenshot of the Elasticsearch error]

The same fix is required here as for status's output.

Collaborator


Resolved in latest commit.

Comment on lines 142 to 145
# Elasticsearch status
INSTANCE_ES=$(oc get elasticsearch iaf-system -o custom-columns='KIND:.kind,NAMESPACE:.metadata.namespace,NAME:.metadata.name,READY:.status.conditions[?(@.type=="Ready")].status')
STATUS_ES=$(oc get elasticsearch iaf-system -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}')
printStatus "$STATUS_ES" "True" "$INSTANCE_ES"
@siyer-123 (Collaborator) commented Jul 5, 2022


I ran the script on the latest BVT and encountered an error with elasticsearch:

[Screenshot of the Elasticsearch error]

I made the same comment on status-all's output -- the same fix is required here.

Member Author


Thanks for catching this. I think there is an additional Elastic operator installed on those clusters. I will update to fully qualify it.
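
For reference, a sketch of what fully qualifying could look like; the API group below is a placeholder, and the real one can be read off oc api-resources first:

# See which API groups serve an elasticsearch resource on the cluster:
oc api-resources | grep -i elasticsearch

# Then use the plural.group form so the lookup cannot resolve to the other
# operator's CRD. The group here is a placeholder, not the actual IAF group:
ES_RESOURCE="elasticsearches.some.group.example.com"
INSTANCE_ES=$(oc get "$ES_RESOURCE" iaf-system -o custom-columns='KIND:.kind,NAMESPACE:.metadata.namespace,NAME:.metadata.name,READY:.status.conditions[?(@.type=="Ready")].status')
STATUS_ES=$(oc get "$ES_RESOURCE" iaf-system -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}')
printStatus "$STATUS_ES" "True" "$INSTANCE_ES"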

Collaborator


Resolved in latest commit.

printf "${blue}${bold}Hint: log into your cluster with CLUSTER_ADMIN credentials and try again.${normal}

"
printf "${blue}${bold}Hint: log into your cluster with CLUSTER_ADMIN credentials and try again.${normal}"
@siyer-123 (Collaborator) commented Jul 5, 2022


I think we need to add those returns back in. Here's what I see when I run all three commands while not in cluster-admin mode (this line for status-all is the one with the formatting issue; the extra returns are needed after the blue text):

[Screenshot of the three commands run without cluster-admin, showing the missing blank lines after the hint]
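
One way to add them back without embedding literal newlines in the string (just a sketch of one option):

# Same hint text, with the trailing blank line expressed as \n escapes rather
# than literal newlines inside the quoted string:
printf "${blue}${bold}Hint: log into your cluster with CLUSTER_ADMIN credentials and try again.${normal}\n\n"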

Collaborator


@kvijai82 Wondering if we added these spaces back in.

@kvijai82 (Member Author) commented Jul 6, 2022

> Would it be possible to make the component columns the same so they share a single printout?
>
> If some of the custom columns mismatch but can be aliased, maybe we can use common ones with --no-headers output to strip the per-command headers and have them appear under the same header. I think the main concern would be padding/spacing inconsistencies based on the length of values.
>
> At some point we need to circle back to this and start summarizing the output instead of adding more verbosity to it.

Agree with the summary part. I started experimenting with a troubleshoot option that will need a bit more work.

Regarding removing some headers, I will experiment a bit to see if I can at least group some of them together based on the spacing to reduce the real estate of the output a bit.
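
One thing I may try (just a sketch, with example resources): collect the rows first and normalize the padding afterwards so a single header lines up across components.

# Gather the shared header plus all component rows, then let column -t re-pad
# the columns so everything aligns under one header.
COLS='KIND:.kind,NAMESPACE:.metadata.namespace,NAME:.metadata.name,READY:.status.conditions[?(@.type=="Ready")].status'
{
  echo "KIND NAMESPACE NAME READY"
  oc get elasticsearch iaf-system -o custom-columns="$COLS" --no-headers
  oc get kafka iaf-system -o custom-columns="$COLS" --no-headers
} | column -t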

@kvijai82 (Member Author)

Three follow-ups to this PR to be handled in the next couple of weeks with interns:

  • Refactor the code to avoid a lot of the duplication we have in there today
  • The above will make it easier to address the following two items:
    • Group headers to make the output easier to consume
    • Loop through all instances of the kinds we list out today instead of assuming there is only one. It works for most cases today, but it would be good to make this more flexible (see the sketch below).
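
A sketch of the looping idea for the last item, using elasticsearch as the example kind and the column spec the script already uses:

# Loop over every instance of a kind across namespaces instead of assuming a
# single hard-coded name; reuses the script's printStatus helper.
oc get elasticsearch --all-namespaces --no-headers \
  -o custom-columns='NS:.metadata.namespace,NAME:.metadata.name' 2>/dev/null |
while read -r ns name; do
  INSTANCE=$(oc get elasticsearch "$name" -n "$ns" -o custom-columns='KIND:.kind,NAMESPACE:.metadata.namespace,NAME:.metadata.name,READY:.status.conditions[?(@.type=="Ready")].status')
  STATUS=$(oc get elasticsearch "$name" -n "$ns" -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}')
  printStatus "$STATUS" "True" "$INSTANCE"
done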

@siyer-123 (Collaborator) commented Jul 22, 2022

@kvijai82 Just ran the updated script on a few clusters (my v3.4 test cluster + a BVT on main) and noticed this error message pop up in the OperandRequests section:

NAMESPACE   NAME         PHASE     CREATED AT
katamari    iaf-system   Running   2022-07-22T13:48:23Z

Error from server (NotFound): operandrequests.operator.ibm.com "iaf-system-common-service" not found
Error from server (NotFound): operandrequests.operator.ibm.com "iaf-system-common-service" not found


NAMESPACE   NAME                   PHASE     CREATED AT
katamari    ibm-aiops-ai-manager   Running   2022-07-22T13:43:53Z

Other than this, everything else looks good.
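
In case it helps, a sketch of one way to keep a missing OperandRequest from surfacing as a raw NotFound error (the $NAMESPACE variable here is illustrative):

# Only query the OperandRequest if it actually exists in the target namespace,
# otherwise print a short note instead of letting oc emit a NotFound error.
if oc get operandrequest iaf-system-common-service -n "$NAMESPACE" >/dev/null 2>&1; then
  oc get operandrequest iaf-system-common-service -n "$NAMESPACE"
else
  echo "OperandRequest iaf-system-common-service not found in $NAMESPACE (skipping)"
fi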
