[docs] add minor branding guidelines to CONTRIBUTING.md
#21495
Conversation
I like that it's in
So for all those reasons, I'd suggest to keep it in. I wonder if we might be able to be clever about the CI check to exclude just that section from the markdown when checking, instead of having to disable the check of the whole file. For example, if we include a
Lines 591 to 593 in ab9fb32
Fully agree with all your points @AliSoftware 😊 I just pushed some changes, lmk if those are good enough 😃 Before and after enabling the "skip" for the branding guidelines:
fastlane/Fastfile
Outdated
line.scan(/([_ `"])(#{Regexp.escape(tool.to_s)})\1/i) do |match| | ||
matched_tool = match[1] | ||
tool_is_written_wrong_case = matched_tool != matched_tool.downcase | ||
looks_like_an_env_var = line.match?(/[A-Z_]+_#{matched_tool}_[A-Z_]+/) |
That won't work for env vars that start or end with the tool name, like MATCH_FILE or UPDATE_FASTLANE, though.
Doing line.match? within the line.scan do … could also lead to false positives, in case a line contains both a part that matches the regex and one that contains an env var. Like:
You can configure Fastlane output using `FASTLANE_SKIP_CHANGELOG`.
In this case the first iteration of the do…end block will match the first Fastlane in that sentence, but looks_like_an_env_var will be true because it will match the FASTLANE_SKIP_CHANGELOG at the end of that line. And so you will end up not warning about the first match having the wrong case.
Finally, I think we should consider only allowing env vars that are ALL_CAPS in this exception.
In the end, I think the best way is probably to keep the first regex, but also capture any optional [A-Z_]* before and after within that Regexp, and then test the value of those in your scan block. Something probably like this:
line.scan(/([A-Z_]*)([_ `"])(#{Regexp.escape(tool.to_s)})\2([A-Z_]*)/i) do |prefix, delim, tool_name, suffix|
  wrong_case = tool_name != tool.to_s.downcase
  looks_like_an_env_var = (tool_name == tool_name.upcase) && delim == '_' && (!prefix.empty? || !suffix.empty?)
  errors << "…" if !looks_like_an_env_var && wrong_case
end
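For illustration, here's a small self-contained sketch (not the actual Fastfile code) of how that capture-group approach behaves on the example sentence above and on a plain env var; the sample lines and the error wording are made up:

```ruby
# Standalone illustration of the capture-group approach suggested above.
# The sample lines and the error message format are made up for this sketch.
tool = :fastlane
lines = [
  "You can configure Fastlane output using `FASTLANE_SKIP_CHANGELOG`.", # mis-cased tool name + env var
  "Set MY_FASTLANE_FLAG to enable it."                                  # env var only, should not be flagged
]

errors = []
lines.each do |line|
  line.scan(/([A-Z_]*)([_ `"])(#{Regexp.escape(tool.to_s)})\2([A-Z_]*)/i) do |prefix, delim, tool_name, suffix|
    wrong_case = tool_name != tool.to_s.downcase
    looks_like_an_env_var = (tool_name == tool_name.upcase) && delim == '_' && (!prefix.empty? || !suffix.empty?)
    errors << "wrong case: #{tool_name} in #{line.inspect}" if !looks_like_an_env_var && wrong_case
  end
end

puts errors
# Only the mis-cased "Fastlane" in the first line is reported: the backticked
# FASTLANE_SKIP_CHANGELOG never matches at all (its surrounding delimiters differ),
# and MY_FASTLANE_FLAG is skipped because it matches with delim "_" and a
# non-empty prefix/suffix, i.e. it looks like an env var.
```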
a few lil suggestions, take em or leave em. Looks great and I love the tests! 😍
Co-authored-by: Aaron Brager <getaaron@gmail.com>
@getaaron your copy suggestions were much better, thanks! Applied both 😊
I'm not a fan of having the test for the lane be implemented that way instead of a proper rspec, but I'm also not sure if there's a better solution, short of implementing the lane as a method in the fastlane gem's lib/ code itself in order to have a spec for it (and then have the lane in the Fastfile call that method)… 🤔
I thought about this before, but it's not ideal since it shouldn't be exposed publicly to fastlane users, it's just an internal tool 😞 Do you know of any way to do it like this but not expose it to the lib? 🤔
What about extracting the logic into a helper, similar to plugins?
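For context, one possible shape of that extraction (the file name, module name, and the simplified check are assumptions for this sketch, not the code that ended up in the PR) is a plain Ruby helper that both the Fastfile lane and an RSpec file can load:

```ruby
# fastlane/helper/tool_name_formatting_helper.rb: hypothetical sketch of the extraction idea.
module ToolNameFormattingHelper
  module_function

  # Returns one error message per line of `content` that writes a fastlane
  # tool name with the wrong casing.
  def find_tool_name_formatting_errors(content:, path:)
    errors = []
    content.each_line.with_index(1) do |line, line_number|
      # Simplified stand-in for the real casing checks discussed above.
      errors << "fastlane tools have to be formatted in lowercase: #{path}:#{line_number}" if line.match?(/FastLane/)
    end
    errors
  end
end
```

The lane in the Fastfile would then just call ToolNameFormattingHelper.find_tool_name_formatting_errors, and a spec can exercise the same method directly without going through the lane at all.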
fastlane/Fastfile
Outdated
private_lane :test_tool_name_formatting_ensurance_lane do
  content = File.read("spec/fixtures/fastfiles/tool_name_formatting.txt")
  errors = find_tool_name_formatting_errors(content: content, path: "CONTRIBUTING.md")
  errors.select! { |e| !e.end_with?("❌") }
The issue with this is that you're only checking that lines not ending with ❌ are not detected as errors (actually, talk about double negation… I had it wrong the first time I read that code and thought it was doing the opposite), but you're not checking that all the lines ending with ✅ are detected as errors.
If you rewrite this as a spec (as suggested in my top-level comment from a couple of minutes ago), you should expect(errors).to equal(the_exact_expected_list), as opposed to doing it the way you did, only expecting that all errors end with ✅ but without checking that you didn't miss one of them in the original fixture.
Your suggestion for helpers (similar to plugins) makes sense! It's literally the only file we have in the helper folder, nice 🤓 so this can definitely be moved to rspecs with proper unit tests written, but can you also help me understand what you meant in this message you sent above? I think the tests are covering all lines, whether they're successful or not 🤔 We do check if the ones with ✅ succeed (and they do), and we also check if the ones with ❌ fail (and they do). What's missing? (just trying to understand here, if we were to keep it, although we won't 😇 )
Well, another way of seeing this is that there are 4 categories of lines:

1. Lines that should be considered errors, and are properly flagged by find_tool_name_formatting_errors as such
2. Lines that should be valid, and are properly not flagged by find_tool_name_formatting_errors as errors
3. Lines that should be considered errors, but are not properly detected and returned by find_tool_name_formatting_errors
4. Lines that should be valid, but are incorrectly flagged as errors by find_tool_name_formatting_errors

What we want is to ensure not only that cases 1 and 2 happen (i.e. our implementation works when it's supposed to) but also that cases 3 and 4 don't happen (i.e. our implementation creates neither false positives nor false negatives).
In your code above, you only look at the return value of find_tool_name_formatting_errors (aka errors), so you're only considering cases 1 and 4 (i.e. things that have been reported as errors). In particular, you're raising a UI.error if case 4 happens (something was reported as an error that shouldn't have been).
But you're not checking case 3 at all, i.e. lines that should be considered errors and should have been returned in errors when you called find_tool_name_formatting_errors, but weren't reported (in case our implementation of find_tool_name_formatting_errors is incorrect, which is what we want to test here after all).
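To make that concrete, here's a minimal sketch of what such a spec could look like (the fixture path and keyword arguments are taken from the lane above; the spec structure and the placeholder expected list are assumptions, not the final code from this PR):

```ruby
# Sketch only: find_tool_name_formatting_errors is assumed to be available to the spec.
describe "tool name formatting check" do
  it "reports exactly the expected violations, no more and no less" do
    content = File.read("spec/fixtures/fastfiles/tool_name_formatting.txt")
    errors = find_tool_name_formatting_errors(content: content, path: "CONTRIBUTING.md")

    expected_errors = [
      # one entry per fixture line that is supposed to violate the guidelines
    ]

    # An exact comparison covers all four cases at once: a violation that is not
    # reported (case 3) or a valid line that is reported (case 4) both fail here.
    expect(errors).to match_array(expected_errors)
  end
end
```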
Good point! 🎯
Lemme know if what I implemented is what you had in mind @AliSoftware 😃 Alternatively we could write lots of
expected_errors = [
  "fastlane tools have to be formatted in lowercase: fastlane in 'CONTRIBUTING.md:19': _FastLane_ makes code signing management easy. ❌",
  "fastlane tools have to be formatted in lowercase: fastlane in 'CONTRIBUTING.md:23': A sentence that contains a _FastLane_ keyword, but also an env var: `FASTLANE_SKIP_CHANGELOG`. ❌"
]
expect(@helper.find_tool_name_formatting_errors).to match_array(expected_errors)
Yep, exactly what I had in mind. Because this not only ensures that what we expect to be reported as errors is indeed reported as errors, but also checks that what we don't expect to be reported as errors and expect to be valid exceptions… isn't accidentally reported as errors either. So this should cover all 4 cases I mentioned above.
Btw, now that we have this implemented as a spec, I think it's worth removing the ❌ and ✅ at the end of each line of your fixtures, as they are not a needed hack anymore, and could accidentally mislead the thing being tested (e.g. imagine if our implementation of find_tool_name_formatting_errors didn't check whether the lines of CONTRIBUTING.md started with "- ❌", but instead tested include?("❌")…). We don't want those emojis at the end of lines to muddy the waters here.
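As a concrete illustration of that risk (the fixture line is taken from the spec above; both checks are plain Ruby string methods):

```ruby
fixture_line = "_FastLane_ makes code signing management easy. ❌"

# The guidelines check is meant to skip only the intentional bad examples in
# CONTRIBUTING.md, i.e. lines that start with "- ❌":
fixture_line.start_with?("- ❌") # => false, so this fixture line is still linted

# An implementation that used include? instead would be fooled by the trailing
# emoji and silently exclude the line from the check:
fixture_line.include?("❌")      # => true
```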
I thought about removing them but ended up leaving them (hence why I reworded the header of the fixture file), because I think it's still useful when we need to update this fixture in the future. I think the header explains all that so there's no "accidentally misleading" I think 🤔 unless devs don't read it. But you have a good point nonetheless, I'll remove them
I think if you replace … That being said, that made me realize that it would be better in terms of memory footprint to use … So I think it would be better to:
@rogerluan I took the liberty of pushing 2 commits to implement my suggestions above (using …).
I don't mind at all, thanks for applying these @AliSoftware! Great catch with inverting the loops, it's definitely more efficient this way (especially wrt the …). I think we can move forward with this PR now! 🤩
Also thanks for pushing the boundaries here 😉 we're in a much better place now, not just with some formalized branding guidelines, but better tools, test coverage, and we even made our docs more consistent 🚀
Teamwork at its best 🙂 I'll leave you the honour of merging the PR if you're all good with it now 😃
Reviewed again 👍 LGTM!
Checklist
- ci/circleci builds in the "All checks have passed" section of my PR (connect CircleCI to GitHub if not)

Motivation and Context
This is already enforced by our CI checks:
fastlane/fastlane/Fastfile
Lines 585 to 607 in ab9fb32
But it's not documented anywhere (and I also believe that CI check is not very thorough). This PR just formalizes those guidelines, and it's a starting point for us to improve/shape it better in the future (should there be any other additions).
I wasn't sure where to place this section. Idk if we'd rather have yet another markdown file, like BRANDING.md or something similar? 🤷‍♂️

Testing Steps
Looks like this formatted:
You can preview it here: https://github.com/fastlane/fastlane/blob/rogerluan-branding/CONTRIBUTING.md