Prepare OnCall for Unified Slack App #4232
base: dev
Conversation
It's a modal on the alert group message, not a message action (in the 3-dot menu).
Move Slack installation & uninstallation code to separate functions to reuse between social-auth and chatops-proxy
…-install-multiple-responses' into handle_chatops_proxy_broadcast
Users will need to disconnect/reconnect Slack to switch to the unified app? Or is there any migration/transition plan?
There are a few data representation changes in the Slack scenarios (e.g. schedules); could that break interaction with previously posted messages?
Also, it would be nice to add some tests for the chatops_proxy additions? (auth, handlers)
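A hedged sketch of what such a test could look like: a unit test routing a chatops_proxy event to a matching handler. The handler class, its `match`/`handle` interface, and the event shape are illustrative stand-ins, not the actual OnCall API.

```python
# Hypothetical test sketch for chatops_proxy event handling.
# All names here are illustrative, not the real OnCall classes.

class SlackInstallationHandler:
    """Toy stand-in for a chatops_proxy event handler."""

    def __init__(self):
        self.handled = []

    def match(self, event: dict) -> bool:
        return event.get("event_type") == "slack_installation"

    def handle(self, data: dict) -> None:
        self.handled.append(data)


def dispatch(handlers, event: dict) -> bool:
    """Route an event to the first matching handler; True if handled."""
    for h in handlers:
        if h.match(event):
            h.handle(event.get("data", {}))
            return True
    return False


def test_slack_installation_event_is_routed():
    handler = SlackInstallationHandler()
    event = {"event_type": "slack_installation", "data": {"slack_id": "T123"}}
    assert dispatch([handler], event) is True
    assert handler.handled == [{"slack_id": "T123"}]
```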
@@ -436,9 +437,9 @@ def _alert_group_action_value(self, **kwargs):
        data = {
            "organization_id": self.alert_group.channel.organization_id,
            "alert_group_pk": self.alert_group.pk,
            # eventually replace using alert_group_pk with alert_group_public_pk in slack payload
I guess this comment can be removed now?
Yep, guess so
engine/apps/chatops_proxy/client.py
Outdated
@@ -32,6 +33,15 @@ class Tenant:
    msteams_links: List[MSTeamsLink] = field(default_factory=list)


@dataclass
class OauthInstallation:
nit, but maybe this should be OAuthInstallation? (OAuth is short for "Open Authorization")
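For illustration, a minimal sketch of the suggested casing; the field is hypothetical, since the dataclass body isn't shown in this hunk.

```python
from dataclasses import dataclass


# Suggested casing, mirroring how "OAuth" is usually capitalized.
@dataclass
class OAuthInstallation:
    oauth_response: dict  # raw OAuth payload from the provider (assumed field)
```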
) -> tuple[Tenant, requests.models.Response]:
    url = f"{self.api_base_url}/tenants/register"
    d = {
        "tenant": {
            "service_tenant_id": service_tenant_id,
            "cluster_slug": cluster_slug,
            "service_type": service_type,
            "stack_id": stack_id,
Is this value backfilled/updated somehow for already registered tenants?
It will be. I'll probably add a new endpoint on ChatopsProxy that allows PUTting tenants to set their stack IDs.
For now, ChatopsProxy will just ignore it, since the new API is not released yet.
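A hedged sketch of what that backfill call might look like. The route shape and payload keys are assumptions about a future, not-yet-released ChatopsProxy API, not the real endpoint.

```python
# Hypothetical backfill helper for already registered tenants.
# The /tenants/{id} route and the payload shape are assumptions.

def build_stack_id_update(api_base_url: str, service_tenant_id: str, stack_id: int):
    """Return (url, json_payload) for updating a registered tenant's stack_id."""
    url = f"{api_base_url}/tenants/{service_tenant_id}"  # assumed route
    payload = {"tenant": {"stack_id": stack_id}}
    return url, payload
```

The caller would then send it with something like `requests.put(url, json=payload)`, mirroring the existing client's register call.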
    f"msg=\"ChatopsEventsHandler: Found matching handler {h.__name__}\" event_type={event_data.get('event_type')}"
)
h.handle(event_data.get("data", {}))
break
Maybe we could log if we got an event but there is no match?
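A sketch of that suggestion: fall through to a warning when no registered handler matches, instead of dropping the event silently. The handler interface (`match`/`handle` on a class) is illustrative, not the exact OnCall API.

```python
import logging

logger = logging.getLogger(__name__)


def handle_event(handlers, event_data: dict) -> bool:
    """Dispatch to the first matching handler; warn if nothing matched."""
    for h in handlers:
        if h.match(event_data):
            logger.info(
                'msg="ChatopsEventsHandler: Found matching handler %s" event_type=%s',
                h.__name__,
                event_data.get("event_type"),
            )
            h.handle(event_data.get("data", {}))
            return True
    # No handler matched: surface it instead of dropping the event silently.
    logger.warning(
        'msg="ChatopsEventsHandler: No matching handler" event_type=%s',
        event_data.get("event_type"),
    )
    return False
```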
@shared_dedicated_queue_retry_task(
    autoretry_for=(Exception,),
    retry_backoff=True,
    max_retries=100,
Does it make sense to have 100 retries, or would 10 work just as well for this purpose?
The idea behind 100 retries is to make sure we don't miss any OnCall org creation. Initially it was endless, but @vadimkerr and I decided to make it a finite number.
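A back-of-envelope way to compare the two caps, assuming Celery's exponential backoff semantics (the n-th retry is delayed 2**n seconds, capped by `retry_backoff_max`, which defaults to 600s) and ignoring jitter:

```python
# Rough total delay spanned by autoretries with retry_backoff=True.
# Assumes Celery defaults: delay 2**n seconds, capped at backoff_max.

def total_retry_window(max_retries: int, backoff_max: int = 600) -> int:
    """Roughly how many seconds of delay the retries span in total."""
    return sum(min(2 ** n, backoff_max) for n in range(max_retries))
```

Under those assumptions, 10 retries span 1023s (about 17 minutes), while 100 retries span 55023s (about 15 hours), which is why the larger cap still covers org creation that lags by many hours.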
Thanks for the updates! 👍
LGTM
"payload": {
    # It's not actual payload we are getting from slack, just a placeholder
    "slack_id"
    "some_slack_id"
Is this sample data ok ^?
Thanks for noticing, missed a comma. Anyway, I went ahead and used a more accurate payload example from the Slack docs instead of the placeholder.
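For context on why the missing comma slipped through: adjacent string literals in Python are implicitly concatenated, so the placeholder silently collapsed into a single one-element set instead of a key/value pair.

```python
# Implicit string concatenation: no syntax error, just wrong data.
broken = {"slack_id" "some_slack_id"}   # a set with one merged string
fixed = {"slack_id": "some_slack_id"}   # an actual mapping
```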
Thanks! Added more tests covering our Slack installation.
…rafana/oncall into handle_chatops_proxy_broadcast
This PR makes a bunch of changes to prepare OnCall for the Unified Slack App:
Minor changes: