Can/should we add privacy-respecting usage metrics? #4456

Open
CommanderStorm opened this issue Feb 5, 2024 · 11 comments
Labels
area:core (issues describing changes to the core of uptime kuma), discussion

Comments

@CommanderStorm
Collaborator

CommanderStorm commented Feb 5, 2024

Dear community,

I would like to discuss whether adding privacy-respecting usage metrics is something we want, so that we can learn from them and inform our actions.

This discussion is based on two impulses:

  • During FOSDEM, I watched a talk about "Privacy-respecting usage metrics for free software projects" by @wjt (GNOME-Contributor, EndlessOS).
    In the talk, he goes into why free software projects might want to collect data on how the software is used, and how this can be done in a privacy-respecting fashion.
  • During Moving the server #4296, we discovered that

    the /version endpoint receives about 1-2 requests per second, which makes us wonder how many Uptime Kuma instances are running all over the world.

Core requirements:

  • Adding a pop-up for our users to decide if they would like to contribute their data to these metrics (=> "Informed consent")
  • Only tracking user/system settings and system state, never users' behaviour.
  • Only collecting metrics in a privacy-centric way, such as the one suggested by @wjt or the ISRG via divviup
  • Only collecting metrics for concrete experiments/questions.

Implementing such metrics would have these concrete benefits:

  • During the v2.0 release, we focused a lot on improving performance for large (more than 500 monitors) deployments and reducing storage requirements.
    It would be valuable to know if our prioritisation of these features is correct ("Are we pandering to the 20% or the 80%?") to make sure that we are offering a good service to most of our users.
    Going on wild tangents which only matter to a minority of our users might not be the best use of maintainer time (despite optimisation being fun => sometimes necessary ^^).
  • A large part of maintaining uptime-kuma is spent on both monitors and notification providers. It would be beneficial to know if one of these has significantly more users than the others to help us prioritise PRs better.
  • How many users are using the proxy feature? Should implementing Notifications via proxy #616 be a priority?
  • How many users are using Prometheus metrics? Should metrics issues be a bigger priority?
  • How many users are using which language? (see discussion on translation quality in Translations Update from Weblate #4394)
  • Do people actively use the maintenance system? If yes, how many actively/passively and how often? (=> existing UX, priority of improvements in this area, need for UI - Remove Monitor Pause Confirmation #2359 and other shortcuts)
  • Is the current incident system used? (=> existing UX, Timeline-based incident system #1253)
  • How many users are using groups? (=> existing UX, Selection of dependent monitors #1236)
  • If we push an update, how quickly do users update to it?

This would also have downsides:

I would especially appreciate feedback from the regular contributors (I apologize for the ping) @louislam, @chakflying, @Zaid-maker, @marco-doerig, @Saibamen, @Computroniks, @MrEddX, @AnnAngela, @cyril59310, @apio-sys

PS: I know that privacy is a charged topic, but please let's keep the discussion civil ^^

@AnnAngela
Contributor

AnnAngela commented Feb 6, 2024

I personally fully support your idea, as long as it's effectively anonymized, and look forward to others' opinions.

@cyril59310
Contributor

I think that collecting data anonymously on the usage of Uptime Kuma is a good idea.

@MrEddX
Contributor

MrEddX commented Feb 6, 2024

Yes, the idea is undeniably good and would be of great benefit to the developers of the project. From a privacy or ethical point of view, I think that the end user should be given the right to choose whether this feature is active or inactive, even though the data is anonymous.

@ddevault

ddevault commented Feb 6, 2024

Hard requirement: any data collection must be opt-in only.

    adding a pop-up for our users to decide if they would like to contribute their data to these metrics (=> "Informed consent")

Perfect, but focus also on the "informed" bit: exactly what data is collected, for what purpose, and how is it used?

    Only collecting metrics for concrete experiments/questions.

I agree with this requirement, but it's not well supported by your summary.

    Implementing such metrics would have these concrete benefits:

For each of these supposed benefits, you need to make a clearer case in order to justify monitoring. "It would be interesting" is generally not good enough. For example:

    Do people use the maintenance system? If yes, how many and how much?

Positing a particular kind of metric you want to track like this is a good start, but expand on it:

  • Why do you need to know this?
  • What answers to this question would you expect?
  • Are any or all of these answers actionable? What actions, specifically, will you take for each possible answer?
  • What is the minimum amount of metrics you need to collect in order to provide a useful answer to this question? Be specific, e.g. "to answer this question we will record every time someone visits /maintenance, then sum this figure and include it in the weekly metrics collection batch request to our collection server" (see the sketch after this list).
  • Are the minimum necessary metrics to answer this question possible to collect in an anonymous and reasonably privacy-respecting manner?
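
To make the last two bullets concrete, here is a minimal sketch (TypeScript; not Uptime Kuma code, and `MetricsBatcher`, the collection URL and the consent callback are all hypothetical) of what "record an event, keep only a per-instance sum, submit it in a weekly batch" could look like:

```typescript
// Hypothetical sketch: count an event locally and submit only the weekly sum.
// None of these names exist in Uptime Kuma today.

const COLLECTION_URL = "https://metrics.example.org/v1/batch"; // placeholder endpoint
const WEEK_MS = 7 * 24 * 60 * 60 * 1000;

class MetricsBatcher {
    private counters = new Map<string, number>();

    constructor(private hasConsent: () => boolean) {
        // Flush once per week; nothing leaves the instance between flushes.
        setInterval(() => void this.flush(), WEEK_MS);
    }

    /** Record one occurrence of an event; only the per-instance sum is kept. */
    increment(name: string): void {
        this.counters.set(name, (this.counters.get(name) ?? 0) + 1);
    }

    private async flush(): Promise<void> {
        const payload = Object.fromEntries(this.counters);
        this.counters.clear();
        // Respect opt-in at send time as well as at record time; without consent
        // the week's counts are simply discarded.
        if (!this.hasConsent() || Object.keys(payload).length === 0) {
            return;
        }
        try {
            await fetch(COLLECTION_URL, {
                method: "POST",
                headers: { "Content-Type": "application/json" },
                body: JSON.stringify(payload),
            });
        } catch {
            // Metrics are best-effort: drop the batch on any network failure.
        }
    }
}

// Hypothetical usage: wherever the maintenance page is opened, bump the counter.
const batcher = new MetricsBatcher(() => true /* read the admin's consent setting here */);
batcher.increment("maintenance_page_visits");
```

The point of the design is that only an aggregate count ever leaves the instance, and only if the admin has opted in.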

@ddevault

ddevault commented Feb 6, 2024

Also be aware that adding these features is going to subject you to the GDPR. You will have to comply with it, which means things like having a publicly accessible data protection officer.

@CommanderStorm
Collaborator Author

CommanderStorm commented Feb 6, 2024

@ddevault
I have reworded the two questions you had problems with.

You are correct; I think a public site with the following content will be necessary:

  • the collection methods and methodology
  • the running experiments (most of the time none, but with an explanation of "What, Why, Duration of collection")
  • and past experiments (with the results, for transparency and for new users to make more informed decisions)
  • privacy policy + imprint

As for tooling, a tool like divviup by the ISRG might be a good choice.

    adding these features is going to subject you to the GDPR

Actually, the GDPR only covers personally identifiable data. Since I do not intend to ever store such data, such a data protection policy is simple; I have written one before and can do so again.
I would recommend watching the talk by Will linked above. He goes into plenty of detail on how this can be done in a manner which respects privacy.
Talking about "privacy-respecting usage metrics" is not the same as talking about "spyware", as you referred to it in https://github.com/orgs/meilisearch/discussions/162. When I talk about usage metrics, this is more nuanced and experiment-based.
See Telemetry Is Not Your Enemy for an article on why "Not all data collection is the same, and not all of it is bad".

As for the duration of collection, I would say this depends entirely on our users' upgrade behaviour (likely different between major, minor and patch updates), as I think new metrics can only be introduced or disabled client-side via updates.
Once an experiment has ended, no further data is collected, and after analysis the data is deleted.
=> start an experiment, collect results, finish the experiment, publish a new version without said experiment (a sketch follows below)
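
To illustrate that lifecycle, here is a rough sketch (TypeScript; the `Experiment` shape and all names are hypothetical, not an existing Uptime Kuma API) of experiments shipped with a release, each bound to a concrete question and a hard end date:

```typescript
// Hypothetical shape of a release-bound experiment; not an existing Uptime Kuma API.

interface Experiment {
    id: string;                              // e.g. "maintenance-usage-2024" (made up)
    question: string;                        // the concrete question this experiment answers
    collectUntil: Date;                      // hard cut-off; no data is collected afterwards
    collect: () => Record<string, number>;   // aggregate, non-identifying values only
}

const experiments: Experiment[] = [
    {
        id: "maintenance-usage-2024",
        question: "How many instances have at least one maintenance window configured?",
        collectUntil: new Date("2024-06-01T00:00:00Z"),
        // Only a yes/no (as 0 or 1) would leave the instance, never the windows themselves.
        collect: () => ({ has_maintenance_window: countMaintenanceWindows() > 0 ? 1 : 0 }),
    },
];

/** Experiments past their end date stop producing data, even before the release removing them ships. */
function activeExperiments(now: Date = new Date()): Experiment[] {
    return experiments.filter((experiment) => now < experiment.collectUntil);
}

// Placeholder for whatever the real instance-state query would be.
function countMaintenanceWindows(): number {
    return 0;
}
```

Because each experiment carries its own cut-off date, collection stops on schedule even if the release that removes the experiment ships late.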

@CommanderStorm added the area:core (issues describing changes to the core of uptime kuma) label Feb 8, 2024
@CommanderStorm changed the title from "Discussion about privacy-respecting usage metrics" to "Can/should we add privacy-respecting usage metrics?" Feb 9, 2024

@rezzorix
Contributor

I am with @ddevault

If this is implemented, then at most opt-in only and with a very clearly defined, limited scope.

@mh166

mh166 commented Feb 11, 2024

I'm also in favor of adding such telemetry, as it will be very beneficial for the reasons you laid out nicely. Of course, I agree that it should be opt-in only.

Thoughts on the implementation

When asking for the admin's consent, please let the initial message be short and concise. From personal experience: the more text there is, the more I suspect it to be there for corporate legal reasons. Therefore I suspect nothing good, am too lazy to read on, and just decline.

  • To prevent this, I'd suggest just a short statement. Something like "We don't collect any personally identifiable data. Just general system parameters (like: version, number of monitors, type of monitors, number of notifications, ...) are collected. The dataset is anonymized and cannot be traced back to your system."
  • Below that I would like to see two links:
    1. "Click here to learn more" – which might either link to a documentation page or reveal a detailed explanation in-app (preferably).
    2. "View collected data" – which displays (in a user-friendly, human-readable way) the data that is being collected, to allow me to make an educated decision.

Thoughts on evaluating the results

Please keep in mind that while this data might help you to prioritize bug fixes or enhancements, new features should still be considered with a reasonably high priority: you cannot measure what is not there yet. 😉 To prioritize between several new features, the number of +1s, together with the number of duplicate issues, may be a better indicator.

Another thing to remember when looking at the data: an apparently low usage might not necessarily indicate little potential but might as well show opportunities for UX improvements.

An example from personal experience: when I first started using Uptime Kuma, it was not very intuitive for me to find out that there is a simple incident system included. I stumbled across it by accident, then did not remember how I found it and had to figure out again how I got there.

The reason: you can only add an incident if you have created a status page, if you are visiting said page, and if you are editing it right now. Should you fail to meet any one of these conditions, you may never know about it, because not even the documentation mentions this feature.

Therefore, in this example, a change in UX might also increase the usage of the feature and consequently any prioritization related to it.

@CommanderStorm mentioned this issue Feb 17, 2024
@rezzorix
Contributor

@louislam I have been following the uptime-kuma journey since the very beginning and would be very interested to hear your take on all this.

@Zaid-maker
Contributor

I am 100% in favor of adding this feature ☺
