
How do we decide what to add here? #74

Open

samuelcolvin opened this issue Jul 3, 2023 · 12 comments

Comments

@samuelcolvin
Member

samuelcolvin commented Jul 3, 2023

We need a formal and agreed way to decide what types to add to this package.

I would propose the following:

  • 10 upvotes on the issue for new types which do not require new dependencies
  • 30 upvotes on the issue for new types which require new dependencies

Comments which explain why a type would be useful count double.

What do you think?

@Kludex
Member

Kludex commented Jul 3, 2023

The numbers proposed look fine to me. 👍

@yezz123
Collaborator

yezz123 commented Jul 3, 2023

To streamline the process and maintain a sense of progress, I suggest starting by merging the types that are already available.

This step will allow us to consolidate the existing contributions and establish a solid foundation for the package; it also provides an opportunity to test and refine the implementation of those types based on real-world usage and feedback.

Once the existing types are merged, we can initiate a voting process for new type proposals and make sure that the proposed types receive adequate support and justification 👍🏻

@lig
Contributor

lig commented Jul 3, 2023

I'm trying to find some way of justifying the proposed vote counts. If we look in retrospect, we see that issues with a much smaller number of votes have found their way in, like this one (2 upvotes at the moment): #15

On the other hand, having such a policy will certainly result in an increased number of likes. Companies with a lot of employees will have an unfair advantage.

@samuelcolvin
Member Author

@yezz123 I agree it makes sense to add the existing ones.

@lig I agree this system isn't perfect, but I think it's the least worst. We (the maintainers) can always add types we specifically want or think would obviously be helpful without meeting the threshold - like #15.

That being said, I might reduce the thresholds to 5 and 20.

We should also have a checklist of requirements before allowing a PR; here's a first guess:

  • meets threshold
  • any 3rd party libraries appear to have good community traction and stability - we don't want to be left maintaining a library just to keep this package working
  • good candidate for pydantic validation, e.g. no need for IO etc.
  • anything else???

@lig
Contributor

lig commented Jul 3, 2023

anything else???

  • 100% test coverage

Gets bonus:

  • Implements a well-known standard/format

No go:

  • Implements a specific case targeting a single library/application

@Kludex
Member

Kludex commented Jul 3, 2023

  • 100% test coverage

This package already enforces 100% coverage, fyi

fail_under = 100

@kkirsche

@lig I agree this system isn't perfect, but I think it's the least worst. We (the maintainers) can always add types we specifically want or think would obviously be helpful without meeting the threshold - like #15.

That being said, I might reduce the thresholds to 5 and 20.

I think one approach you could use here would be to simply start with one of these thresholds and, in whatever document or issue the final information ends up in, include a last-updated date and a next-review date. Maybe review whether it's working in six months, and then lengthen the time between reviews as the process stabilizes.

I assume there will be similar considerations in other situations (though not exactly the same), so this would also give you a consistent approach to cases where thresholds aren't clear, while maintaining community transparency (if you want that for this specific type of question).

Either way, thanks for all you are doing!

@Kludex
Member

Kludex commented Jul 27, 2023

Can we document this somewhere, and close this issue?

@treylav

treylav commented Jul 28, 2023

  • 100% test coverage

May I ask on this note: are there any plans to add fake data generators such as mimesis or faker to the test pipeline? It seems to me that they could be useful for this project. Passing several iterations of tests with random data could then also become a requirement.
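
To make the idea concrete, here is a rough sketch of what one such randomized check could look like, assuming Faker is available and using this package's Color type purely as an example; the names and iteration count are illustrative, not a proposal for the actual test suite:

```python
# Rough sketch only: a randomized check using Faker against an existing type
# from this package. Color is used purely as a convenient example; the
# iteration count and assertion are placeholders.
from faker import Faker
from pydantic import TypeAdapter
from pydantic_extra_types.color import Color

fake = Faker()
color_adapter = TypeAdapter(Color)


def test_color_accepts_random_hex_values() -> None:
    # Several iterations with random data, as suggested above.
    for _ in range(25):
        value = fake.hex_color()  # e.g. "#d4e5f6"
        parsed = color_adapter.validate_python(value)
        assert isinstance(parsed, Color)
```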

@Kludex
Member

Kludex commented Jul 28, 2023

  • 100% test coverage

May I ask on this note: are there any plans to add fake data generators such as mimesis or faker to the test pipeline? It seems to me that they could be useful for this project. Passing several iterations of tests with random data could then also become a requirement.

No plans for such a thing.

@treylav

treylav commented Jul 28, 2023

No plans for such a thing.

Should I try to promote it as an additional option for testing, suggest relevant PRs, etcetera?

@Kludex
Member

Kludex commented Jul 28, 2023

I'm fine with not doing it, but I don't know others' opinions here.

yezz123 pinned this issue Dec 14, 2023