Changes to the recommended sets for 3.0.0 (#1423)
Comments
I'm fairly happy with the majority of what you've got here, but I do have three points:
I'm a fan of this on paper, but in practice it is very noisy, especially given that WebStorm now has type hints. The primary annoyance I have is when I'm returning simple types in simple methods. "Simple types in simple methods" is pretty arbitrary, though, and most of the time it's when I'm relying on inference to handle an annoying type, so I don't expect there to be much to gain exploring that. However, I wonder if it's worth exploring allowing methods that return simple types to go unannotated. Ultimately, I know the primary thing I want it to catch:
Effectively: I definitely want complex return types, such as unions, or anything with a generic (so Promise, Array, etc.) to be explicitly typed. While this might seem a bit unlikely, it's an easy mistake to make when working with highly dynamic languages such as Ruby (which is my work's primary stack).
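To illustrate the distinction being argued for, here is a hedged sketch (function and type names are hypothetical, not from the thread): inference silently tracks whatever the body returns, whereas an explicit annotation on a union/generic return type turns an accidental change into a compile error.

```typescript
// With inference alone, this complex return type can change silently
// if the body changes; it is inferred as:
//   { id: number; name: string } | undefined
function findUser(id: number) {
  return id === 1 ? { id, name: "admin" } : undefined;
}

// Explicitly annotating the union + generic (Promise) return type, as
// argued for above, pins the contract down:
async function loadUser(id: number): Promise<{ id: number; name: string } | null> {
  const user = findUser(id);
  return user ?? null;
}
```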
I'm interested in reading more about this, if you've got links on hand. I was under the impression that it would make the type more rigid, which can have a poison-well type effect, similar to how if you stick (Not saying it's bad - just interested about the pros vs cons of using it literally everywhere).
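For what the "more rigid" concern looks like in practice, here is a small sketch of the trade-off with `as const` (variable names are made up for illustration): you get precise literal types, at the cost of deep readonly-ness.

```typescript
// `as const` narrows every property to a readonly literal type:
// typeof config is { readonly retries: 3; readonly mode: "fast" }
const config = { retries: 3, mode: "fast" } as const;

// Reading and widening is still fine:
const retries: number = config.retries;

// But the rigidity shows up on writes; this would be a compile error:
// config.retries = 4; // Cannot assign to 'retries' because it is a read-only property
```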
I'm not a big fan of having this in the recommended set due to its highly opinionated history (I'm OK w/ having it around as an available rule). That's my 2c; do with it as you will 🙂
This is why I suggested switching it for the upcoming explicit-module-boundary-types rule. I personally love explicit return types on everything for the reasons I stated (clear contracts, and easy to review). But I do understand that people, especially those who use frameworks like React/Angular which have many void functions, dislike annotating every single method. This is why I suggested the new rule, which will only require that module boundaries have explicit return types.
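A minimal sketch of the module-boundary idea described above (the function names here are hypothetical): only the exported, public surface of a module needs an explicit return type, while internal helpers may rely on inference.

```typescript
// Exported function: part of the module boundary, so an explicit
// return type would be required under the proposed rule.
export function publicApi(input: string): number {
  return normalize(input).length;
}

// Internal helper: not exported, so inference is fine here and no
// annotation would be demanded.
function normalize(input: string) {
  return input.trim().toLowerCase();
}
```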
People have suggested this, but the problem is that it requires type information to do properly, which severely limits the usability of the rule, as many users don't use type information.
A readonly type is very different to the inferred type of a const variable. OTOH, using `const` does narrow to literal types:

```ts
declare function acceptsSpecific(arg: 1 | 'foo'): void;
enum Foo { a = 1 }

{
  const v1 = 1; // type === 1
  const v2 = 'foo'; // type === 'foo'
  const v3 = true; // type === true
  const v4 = Foo.a; // type === Foo.a
}

{
  let v1 = 1; // type === number
  let v2 = 'foo'; // type === string
  let v3 = true; // type === boolean
  let v4 = Foo.a; // type === Foo
}

{
  acceptsSpecific(1); // no error

  const v1 = 1;
  acceptsSpecific(v1); // no error

  let v2 = 1;
  acceptsSpecific(v2); // error - number is not assignable to 1

  if (v2 === 1) {
    acceptsSpecific(v2); // no error now that it's narrowed
  }
}
```
Have you used our version of the rule, which uses type information? The base eslint version is limited, as it solely relies upon single-file static analysis. Our extension goes one step further and inspects every return statement to check if it is a promise. I guess the rule name is wrong now; it's probably better to call it something else. With our extension, IMO there's no more opinionated side to it: an async function must actually have code that is async in some form (even if it's just a returned promise).
Disclaimer: I'm not actually using the recommended set. 🙃
One must always look at the eslint core team discussions through the lens of a typescript developer: they do not have types, and they cannot use types, so they rely on lint rules to validate their code. Reading through the linked issue, it seems like there is exactly one use case they say is valid: "I have a function which has no awaits and returns no promise, and I want to automatically wrap its return value in a promise". IMO this is very much an edge case, and it seems like a bit of a code smell; if I ran into an async function with no awaits and no promises, I would assume it is a bug. I wouldn't assume "oh, this person wanted to auto-wrap their return in a promise". Unless ofc there is a comment explicitly stating that; but if you're putting in a comment to explicitly state that, then you're purposely circumventing codebase standards, so it would be a case for an eslint-disable comment. Finally, any modern library should be using async/await itself, which means that more than likely this example is not something to worry about. If you await a non-promise value, the runtime will automatically wrap the non-async value in a promise, giving you the same benefits as if you had marked your function as async.
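The auto-wrapping behavior described above can be sketched directly (function names are illustrative only): both returning from an `async` function and `await`ing a plain value wrap the value in a resolved promise.

```typescript
// An async function auto-wraps its return value:
async function viaAsyncKeyword(): Promise<number> {
  return 42; // effectively Promise.resolve(42)
}

// Awaiting a non-promise value also just wraps it in a resolved
// promise, so an async function with no awaits rarely needs to exist
// purely for wrapping.
async function viaAwait(): Promise<number> {
  const value = await 42;
  return value;
}
```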
I would contest this statement. #636 can be worked around very easily by using the pretty standard practice of using arrow functions instead of pure methods. #743 isn't a bug, it's a feature request. Passing a second argument to those functions isn't a defined language feature; it's an implementation detail of those functions.

```ts
[].forEach(this.foo, this);
// instead do
[].forEach(this.foo.bind(this));
```
To be clear, I do like this rule as well! It helps you avoid useless code, and thus is a net perf win. With the 3.0.0 release I'm just trying to be a bit more conservative with the recommended sets, because I'm sick of having discussions about "recommending stylistic rules that don't catch bugs" 🙃.
That's really interesting. I recently had this flagged on code a coworker of mine had written, as they were using TS for the first time; but having checked it out, you're (of course) correct: I'm not told I have to use it. Meanwhile I meant to do this:
But then you can remove the `async` keyword. I definitely agree the rule is better than the eslint version, so I guess now I'm neutral about it 🤷‍♂️
I'm going to enable this. I think that it'll do more good than harm in my case.
I thought that I was affected by #636, but I'm actually not. I'm also going to enable this.
It's a tricky one. I think that even though it's correct in many/most cases, it's likely going to cause a fair amount of noise in larger codebases that have not had it enabled. I'm interested to hear what other developers have to say about it.
Strongly in favor of this one.
I'm biased, because I wrote the original rule, but I do really like this. A good chunk of the false positives will go away with the breaking change fix in #1163, but there will still be some false positives (see below).

**Good**

This found more actual mistakes in our codebase than any linter rule in recent memory. Most of them were dead code removal, not bugs, but we did find some legitimate bugs, too. For example, we had refactored some code:

```ts
declare const something: Maybe<Something>;

// Bug: something is a maybe object, and thus always truthy.
if (!something) { }

// It should have been (or further refactored)
if (something.isNothing()) { }
```

The above is essentially a more general version of the same class of check.

**Mixed**

Object index types are a mixed bag. It's a common source of false positives, because a real common practice is to use types like:

```ts
const somethings: SomethingRecord = {};

function getSomething(key: string) {
  if (somethings[key]) {
    return somethings[key];
  }
}
```

I consider this a mixed bag, because this does encourage more "honest" types.

**Bad**

Array indexing just isn't fun. It's the same issue as object indexing: array types claim that all possible indexes correspond with values, but in practice they don't. There's no good fix here. I may try to see if we can adjust the rule to alleviate these false positives.
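The array-indexing false positive can be sketched concretely (assuming default compiler settings, i.e. without `noUncheckedIndexedAccess`; the variable names are illustrative):

```typescript
// string[] claims that every index yields a string, so a defensive
// undefined-check looks "unnecessary" to the rule...
const names: string[] = ["a", "b"];
const third = names[2]; // typed as string, but actually undefined at runtime

// ...yet at runtime this check is genuinely needed:
const label = third !== undefined ? third : "(missing)";
```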
I'd be curious what TS APIs you're thinking of. But as it says, these are a pretty slim minority case: and even then I think having the explicit lint rule override is a good thing, as it signals that it's an intentional check and not a mistake that someone might "helpfully" "fix" later on. (And hopefully encourages the author to write a comment explaining the circumstances in which the value might be different than the typings claim).
Yeah, codebases with lots of extra type guards for the sake of JS consumers are a good example of a codebase where it might be a good idea to disable this rule. (Should add that to the readme section on "When Not To Use It"...) Or perhaps we could find some way of avoiding those false positives, too. Maybe an exception for conditions inside "assertion functions" (which could either be identified via type, or perhaps just by considering any function whose name starts with "assert" as a special case). Do we have a rough timeline for the 3.0 milestone? I may take a shot at some of these potential enhancements, which might impact the viability of including this as a default rule. (Also some other breaking API changes to make.)
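For context on the "assertion functions" exception floated above, here is a sketch using TypeScript's `asserts` return-type syntax (the helper name `assertDefined` is hypothetical): inside such a helper, the "always-false" guard is the entire point.

```typescript
// An assertion function: the guard inside would look redundant to a
// type-aware lint rule, but it is what makes the assertion real.
function assertDefined<T>(value: T | undefined): asserts value is T {
  if (value === undefined) {
    throw new Error("value was undefined");
  }
}

const maybe = "ok" as string | undefined;
assertDefined(maybe);
const len = maybe.length; // narrowed to string after the assertion
```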
e.g. in this codebase, which was added due to a crash encountered by a user (#499): typescript-eslint/packages/eslint-plugin/src/rules/unbound-method.ts, lines 79 to 84 in 3a15413
Soon™. I was hoping by end of this month, but that's a very optimistic goal. Thanks for the commentary. For array and object index accessors, we could probably make the rule smarter to help deal with some of these cases where the types are "correct" but you still want to program defensively, i.e. something like:

```ts
// pseudocode
if (options.allowDefensiveIndexAccessorCheck) {
  if (
    type is arrayType &&
    isPropertyAccessIndexAccessor
  ) {
    markAsValidCondition();
  }
  if (
    type is objectType &&
    typeIncludesIndexSignature &&
    isPropertyAccessIndexAccessor
  ) {
    markAsValidCondition();
  }
}
```
I have had a bad experience with this rule. For example, we define an interface:

```ts
interface XXX {
  /** docs and examples written in JSDoc */
  yyy?: (x: string) => string;
}
```

Then if we make a wrong return like below, TS will show an error:

```ts
const x: XXX = {
  yyy: x => Boolean(x)
};
```

While this is fine, and doesn't need us to write type info:

```ts
const x: XXX = {
  yyy: x => `aaa${x}`
};
```

So there is no need to write the return type again. Forcing us to write return types in this situation is, in my mind, annoying, since we build our type declaration files precisely for additional control and to reduce type definition work. This rule is obviously doing the opposite job. I agree that in other situations, writing return types instead of letting TypeScript infer one itself is a good habit, but I disagree with the rule in the situation I met. Also, I think this brings little advantage, since the experience on GitHub and in editors like VSCode is really different. Most people only use GitHub to review some code extemporaneously.
Have you tried the
This is the point though: GitHub doesn't have IntelliSense, so the more type annotations you put in your code, the less time a reviewer has to spend jumping around codebases. As a senior engineer and open source maintainer, I know how important it is to make this process easier; I have had many working days where all I've done is code reviews via GitHub. The less time I have to spend interrogating types outside the review interface, the faster and better I can do reviews. I.e. if an engineer spending 1s writing a return type saves me 30s of jumping outside the UI to a new file to check another signature, then that's a huge win for everyone.
I'm mildly opposed to explicit return type lint rules, too. It's not a huge deal, as I can of course turn the rule off... but I will turn that rule off. I think restricting explicit return types to module boundaries is an improvement, but I don't think it fundamentally changes the dynamic.

**"Boilerplate"**

The obvious objection: a lot of type annotations (especially primitives) add noise. I think my most salient objection is that this rule can be bad for the new-to-TypeScript developer's experience. (I imagine this is the class most affected by "recommended" rulesets, because they're the least likely to feel confident in disabling "best practice".) In my experience a lot of JS developers trying TS feel like TypeScript is wasting their time, and having a linter complain about not annotating the return of every function adds to this frustration in a not insignificant way. In the big picture, a few return type annotations really isn't a meaningful amount of boilerplate in a significant project, but the perception of boilerplate may be the more important factor. Again, though, restricting to module boundaries does help (though it might introduce some confusion, too).

**Not that helpful for reviews**

It can help with reviewing code outside of a development environment, but I do find that's only true in a rather narrow range of cases. Usually when I'm interested in the return type of a function, I'm interested in what the type is, not what it's called, which is almost always going to mean looking somewhere else in the codebase anyway. (Tangentially, I've started getting into using VSCode plugins to review PRs from the editor itself, and it's a fairly nice experience.)

**Annotating return types can be non-trivial**

There are various cases in which annotating return types is somewhat annoying: wrapper functions around APIs that don't export their return type as its own type, and complex generic types that are tedious to annotate. Functions that return large heterogeneous objects can be annoying here, too, like a large config object. You may end up just mirroring the entire shape of the object as the return type, which in some cases is just unnecessary noise. Dictionary-like objects with a long list of static keys can be annoying here too.

**Annotating return types can be lossy**

Sometimes the TypeScript compiler does a better job with the return type than the human; specifically, human-written return types are sometimes looser than necessary. Consider:

```ts
const getKey = (isFoo: boolean) => {
  if (isFoo) { return "foo"; }
  return "bar";
};
```

A beginner to TS is likely to annotate this function as returning the looser type, whereas the inferred type is the more precise `"foo" | "bar"`:

```ts
const messages = {
  foo: "Foo!",
  bar: "Bar!",
};

const key = getKey(isFoo);

// Element implicitly has an 'any' type because expression of type 'string' can't be used to index type (typeof messages).
// No index signature with a parameter of type 'string' was found on type (typeof messages).
console.log(messages[key]);
```

Not a big deal, but something that could be avoided by just letting TS figure out the type. Lossy type signatures aren't just a beginner mistake, either. For example, we had an issue in our codebase like this:

```ts
class Base { /* properties omitted */ }
class Foo extends Base { foo!: string }
class Bar extends Base { bar!: string }

const getStuff = (): Record<"foo" | "bar", Base> => ({
  foo: new Foo(),
  bar: new Bar(),
  // etc.
});

getStuff().foo.foo; // Invalid because `foo` is `Base` not `Foo`; the return annotation is lossy.
```

(This turned out tricky.)

**Example: React function components**

React function components are a particular case where I find explicit function annotations a bit annoying. They really don't communicate new information to the reader of the code. And it's not clear to beginners what the actual annotation should be:

```tsx
const Example = (props: ExampleProps): React.ReactNode => {
  return <div></div>;
};
```

EDIT: I'm wrong about this annotation. Or, to avoid this boilerplate, some will write their components as:

```tsx
const Example: React.FC<ExampleProps> = (props) => {};
```

But this is another example of a lossy type annotation, and it has some resulting disadvantages. Okay... long comment for a rule I'm "mildly" opposed to. I know these examples are all fairly specific and nit-picky, and of course eslint-disable exists for a reason. But I'd reiterate the idea that explicit function annotations can be good for bigger teams and bigger projects and might be an "industry best practice", but it's more of a wash for an average project, and can actually be a bit detrimental to the onboarding experience for TypeScript.
I mostly use the default config with only a few changes and there are a few additional rules that I always enable and I think they'd be a great addition to the default config.
I love this rule, because I use Flow at work, which has this check built into the compiler. Whilst I think this doesn't take much getting used to, I know a lot of people don't like it being enforced, because it means you can never do shorthand checks on strings/numbers/booleans. I guess the question is how many bugs it actually catches vs being stylistically explicit.
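The shorthand-check hazard mentioned above can be sketched like this (function names are made up for illustration): `""` and `0` are falsy, so a truthiness check conflates "missing" with "empty"/zero.

```typescript
// The shorthand check: the early return is also taken when
// label === "", which may or may not be intended.
function formatLabel(label: string | undefined): string {
  if (!label) {
    return "(no label)";
  }
  return label;
}

// An explicit comparison keeps "missing" and "empty" distinct:
function formatLabelStrict(label: string | undefined): string {
  return label === undefined ? "(no label)" : label;
}
```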
How did I forget about this rule? This would be great to add.
This is probably a good idea to add as well. I don't know why you wouldn't want it.
Thanks for the advice. I tried it, but it's still catching problems: it can prevent some of the errors, but not all of them. Some functions in nested objects are not working. I am not sure if it is a bug, since you said it would solve my problem. In this case, I don't think I need to declare a return type for ready.
Is there a reason for
I commented on that in the OP, if you'd like to expand the sections. Also note the second paragraph of the OP:
@bradzacher what do you think about splitting
Stylistic is somewhat opinionated, sometimes, though I've purposely trimmed this list down to only a few "stylistic rules". IMO, no-empty-function isn't stylistic: empty functions are a code smell, same as unused vars.
I agree that you shouldn't have empty functions, but I've found them useful in unit tests. I never had a need to lint for them, because my team would never use them in non-test code anyway.
You could handle that with overrides:

```ts
overrides: [
  {
    files: ["*.spec.ts"],
    rules: {
      // Empty functions are used for mocks.
      "@typescript-eslint/no-empty-function": "off"
    }
  }
]
```
Thanks @glen-84! Yeah, in that case the recommendation would be to use an override. There are lots of practices that are smelly but "okay" to do in tests. The recommended set is set up around what you should do in production code to ensure your code is correct.
Would it be possible/reasonable for each config to extend the "parent" config? It seems like there's a pretty sensible hierarchy of:
It seems like in the majority of cases, someone using the plugin would want all of these. I think it just looks a bit overwhelming for people when they look at the recommended config:

```json
{
  "extends": [
    "eslint:recommended",
    "plugin:@typescript-eslint/eslint-recommended",
    "plugin:@typescript-eslint/recommended"
  ]
}
```
It'd be great if this could just be written as:

```json
{
  "extends": [
    "plugin:@typescript-eslint/recommended"
  ]
}
```
It does sound reasonable, but it makes the assumption that everyone wants to use every single config all of the time. Some people don't like to build on top of eslint:recommended. Additionally, one of the (many) reasons we split out the configs was exactly this kind of flexibility. So there are tradeoffs to doing this, and I think the loss of flexibility is probably not worth it. That being said, I'm not opposed to creating an additional unified configuration to make this use case easier; I don't think providing more options to people is necessarily a bad idea, as long as it's thoroughly documented.
These are good points, but I still think there's some value in having one or more of these configurations depend on each other. I suppose I'm really making three separate but related suggestions, which could be accepted or rejected separately, so let me actually split the points up and make my case for each of them. (... which I should have done in the first place)
|
I typically don't have problems with recommended sets of rules, as they tend not to be opinionated. However, two rules seem not to be following this. For example, imagine you're creating an authentication context:

```ts
export const AuthContext = React.createContext({
  token: '',
  setToken: (t: string) => {},
  logout: () => {}
});
```

This code has two empty functions. Perhaps it's just me, and there actually is a better way of doing this with React, in which case I'd like to be corrected. Perhaps one could say that it's a design mistake of React, but here we have to agree that it is a very popular library, and thus the pattern will be rather common.
I would have a utility function in my library that I would use for this:

```ts
// `log` is assumed to be some logging function available in the codebase.
export const noop = (name: string) => (...args: any[]): void => {
  if (log) { log('no-op call', name, args); }
};
```

defined someplace. The file would be replaced by a .release version with an actual no-op implementation by a build script. Then use it:

```ts
export const AuthContext = React.createContext({
  token: '',
  setToken: noop('setToken'),
  logout: noop('logout'),
});
```
This solution has its merit, but I wouldn't want to introduce it just to overcome the linter.
I have conceded that whilst I think this is a good practice, not everybody thinks so due to its verbosity.
To start with, let me just say that "rather common" is a very relative statement. Defining a React context is itself a small piece of a React app. I'd expect the ratio of context definitions to component definitions to be around 1:50, if not lower (a quick and naive search of the FB codebase puts it at around 1:40). For me as an engineer looking at that, I'd question how often you actually want to use a no-op function in a context. For example, your example is called AuthContext. So for me, I'd be looking to satisfy the linter here in a similar way to @bbarry, by doing something like this:

```ts
function contextCallingDefaultValue() {
  if (__DEV__) {
    throw new Error('Component was rendered outside a context provider');
  }
}

export const AuthContext = React.createContext({
  token: '',
  setToken: (t: string) => { contextCallingDefaultValue() },
  logout: () => { contextCallingDefaultValue() }
});
```
A simpler workaround:

```ts
const noop = () => {/* do nothing */};
```

The point of this linter rule is to ensure that empty functions are intentionally empty, so adding a comment satisfies the linter that it's intentionally empty.
By "rather common" I meant that a lot of applications will have at least one occurrence. The actual frequency of this pattern's usage inside each such application will be, of course, low. But I think even one occurrence matters. For me, the litmus test of a rule in a recommended set would be: code with no code smell should not produce any linter errors/warnings, and I shouldn't need to disable any rules to make it work. I can make an exception for tests and Storybook files, since those can sometimes afford to break certain rules and be imperfect. I also shouldn't have to modify the code just to make the linter pass where it's obvious that the current code doesn't cause any issues (unless it can be autofixed). Perhaps my expectations of what constitutes a recommended rule are simply wrong here, in which case I'm curious to learn what drives the selection. But it seems that this rule is the only one I've met so far that doesn't fall into this category. @WayneEllery had a good point about this rule not actually being in the base recommended set. I like your example above, but note that you are talking about "satisfying the linter", which means the main reason we are doing this is for the linter, not to improve the code. I can also imagine contexts with an implementation that is completely valid and in no need of a fix. While I think empty functions can sometimes be a code smell, they are not necessarily one. @Retsam: it's an interesting solution, but again, it falls under the "changing code only to make the linter work" clause. Also, if you use this workaround a lot, what is the purpose of having the rule enabled in the first place? Is it any better than having it off?
ESLint core tends towards being as unopinionated as possible, because otherwise they get way too much discussion around things. OTOH, we're working with TS codebases, which are much newer and (by and large) adopt much better and more standardised styles and practices. That being said, we have tended toward being less opinionated with each version. The very first recommended config was based on combining some very all-encompassing configs provided by the community. We received a lot of discussion about that, so we have significantly pared it back over 2 major versions. You can actually see all of my reasoning for every single rule in the spoiler sections in the OP; I've tried to be as transparent as possible about this because of the amount of flack I've copped about the recommended sets in the past. When reviewing the recommended configs, I look at every rule and more or less go through the following two checks:
If yes, I put it in, and then the set is put up here for people to discuss; the community uses these configs, and they should help shape them. I've marked a number of rules as things I'd love feedback on. If nobody comments, then I'll make an executive decision. By the time this merges it will likely have been up for 6 months, so if someone didn't voice their concern before then, then "tough luck"; it can wait for the next major 😅.

I'll counter your point of "recommended means not having to disable across the codebase" with one example I have run into several times across codebases. By default, this rule also bans empty catch blocks. There have been a number of times where I've purposely swallowed exceptions. Usually the way I satisfy it is by changing my code, e.g.:

```ts
try {
  mkdirSync('something');
} catch {
  // mkdir throws if the dir exists but we don't care
}
```

So for me, that rule isn't much different to no-empty-function.
@bradzacher I think it was a case of the "loud minority".
Similar to what I did for 2.0.0 (in #651), I'm putting forward the new recommended set ahead of time.
I'm looking for feedback from the community before we go ahead and make the changes.
First up, let me clarify that this set will be somewhat opinionated. It will contain a small set of stylistic rules that I believe are best practice, based on what I've seen in my career and what the community has converged towards in the past.
eslint-recommended
See #1273
recommended (rules without type information)

Comments about the current config

- `"@typescript-eslint/adjacent-overload-signatures": "error",`
- `"@typescript-eslint/ban-ts-ignore": "error",`
  - Being replaced with the new `ban-ts-comment` rule, which also bans `ts-nocheck`.
- `"@typescript-eslint/ban-types": "error",`
- `"@typescript-eslint/camelcase": "error",`
- `"@typescript-eslint/class-name-casing": "error",`
- `"@typescript-eslint/interface-name-prefix": "error",`
  - These (together with `eslint:recommended`'s casing rules) are being replaced with `naming-convention` (feat(eslint-plugin): add rule naming-conventions #1318). `naming-convention` is super flexible and powerful, so if it doesn't quite fit, it should be easy to reconfigure it to your liking.
- `"@typescript-eslint/consistent-type-assertions": "error",`
  - Replaces the old `no-angle-bracket-assertions`.
- `"@typescript-eslint/explicit-function-return-type": "warn",`
  - With `noImplicitReturns` turned on, it can also help you catch code paths that don't return a value. Being replaced with `explicit-module-boundary-types`.
- `"@typescript-eslint/member-delimiter-style": "error",`
- `"@typescript-eslint/no-array-constructor": "error",`
- `"@typescript-eslint/no-empty-function": "error",`
- `"@typescript-eslint/no-empty-interface": "error",`
  - Similar reasoning to `no-empty-function`.
- `"@typescript-eslint/no-explicit-any": "warn",`
  - There is little need for `any` with the existence of `unknown` and `never`.
- `"@typescript-eslint/no-inferrable-types": "error",`
- `"@typescript-eslint/no-misused-new": "error",`
  - Catches misuse of `new`/`constructor` within classes and interfaces respectively.
- `"@typescript-eslint/no-namespace": "error",`
- `"@typescript-eslint/no-non-null-assertion": "warn",`
- `"@typescript-eslint/no-this-alias": "error",`
  - Aliasing `this` is an old, old, old, and bad practice. You should just use arrow functions.
- `"@typescript-eslint/no-unused-vars": "warn",`
- `"@typescript-eslint/no-use-before-define": "error",`
- `"@typescript-eslint/no-var-requires": "error",`
  - `require` statements are always typed as `any`, which is a type safety hole.
- `"@typescript-eslint/prefer-namespace-keyword": "error",`
  - Complements `no-namespace`; this rule just makes sure that if you do have to use namespaces, you don't use the `module Foo {}` syntax, so it's clear what you're doing.
- `"@typescript-eslint/triple-slash-reference": "error",`
- `"@typescript-eslint/type-annotation-spacing": "error",`
- `"no-var": "error",`
  - `var` declarations are error-prone and often hard to understand due to scope hoisting. You should just use `let` and `const`. TypeScript transpiles `let`/`const` to `var` for you if you are targeting an old runtime, so there's no reason to use `var`.
- `"prefer-const": "error",`
  - Prefer `const` where possible. `const` has recently been debated, but I think that it's better to use it when possible, though I don't feel strongly in either direction.
- `"prefer-rest-params": "error",`
  - `arguments` is a non-typesafe way of accessing arguments. Using a rest param means you can strictly and clearly declare function inputs and requirements. TypeScript transpiles rest params to `arguments` for you if you are targeting an old runtime, so there's no reason to use `arguments`.
- `"prefer-spread": "error"`
  - Prefer spread over `.apply`, because you don't have to worry about manually specifying the `this` context. TypeScript transpiles spread to `.apply` for you if you are targeting an old runtime, so there's no reason to use `.apply`.

Comments about the new rules
- `@typescript-eslint/ban-ts-comment`
  - Replaces `ban-ts-ignore`.
- `@typescript-eslint/brace-style`
- `@typescript-eslint/class-literal-property-style`
- `@typescript-eslint/default-param-last`
- `@typescript-eslint/explicit-module-boundary-types`
  - Replaces `@typescript-eslint/explicit-function-return-type`.
- `@typescript-eslint/naming-convention`
- `@typescript-eslint/method-signature-style`
- `@typescript-eslint/no-extra-non-null-assertion`
- `@typescript-eslint/no-extra-semi`
  - Extension of `no-extra-semi`, which is recommended in the base ruleset.
- `@typescript-eslint/no-non-null-asserted-optional-chain`
- `@typescript-eslint/prefer-as-const`
- `@typescript-eslint/quotes`

TL;DR
recommended-requiring-type-checking (rules with type information)

Comments about the current config

- `"@typescript-eslint/await-thenable": "error",`
  - Catches unnecessary `await`s.
- `"@typescript-eslint/no-for-in-array": "error",`
- `"@typescript-eslint/no-misused-promises": "error",`
  - Catches missing `await`s.
- `"@typescript-eslint/no-unnecessary-type-assertion": "error",`
- `"@typescript-eslint/prefer-includes": "error",`
- `"@typescript-eslint/prefer-string-starts-ends-with": "error",`
- `"@typescript-eslint/prefer-regexp-exec": "error",`
- `"@typescript-eslint/require-await": "error",`
- `"@typescript-eslint/unbound-method": "error",`
Comments about the new rules
- `@typescript-eslint/no-base-to-string`
- `@typescript-eslint/no-dynamic-delete`
- `@typescript-eslint/no-floating-promises`
- `@typescript-eslint/no-implied-eval`
  - Using `eval`-like methods is a security problem, and a type safety hole.
- `@typescript-eslint/no-throw-literal`
- `@typescript-eslint/no-unnecessary-boolean-literal-compare`
- `@typescript-eslint/no-unnecessary-condition`
  - Case in point: the TypeScript API types themselves, which have a lot of things defined as non-nullable, but in practice are actually nullable in some cases. The types are non-nullable because it's the 0.1% case that it's nullable, so nullable types would pollute the TS codebase with unnecessary checks.
- `@typescript-eslint/no-unsafe-assignment` (feat(eslint-plugin): add rule no-unsafe-assignment #1694)
- `@typescript-eslint/no-unsafe-call`
- `@typescript-eslint/no-unsafe-member-access`
- `@typescript-eslint/no-unsafe-return`
- `@typescript-eslint/prefer-nullish-coalescing`
- `@typescript-eslint/prefer-optional-chain`
- `@typescript-eslint/prefer-readonly-parameters`
- `@typescript-eslint/restrict-plus-operands`
  - Catches things like `string + undefined`, which is definitely a bug.
- `@typescript-eslint/restrict-template-expressions`
  - Interpolating `null` etc. is an obvious bug.
- `@typescript-eslint/return-await`
- `@typescript-eslint/switch-exhaustiveness-check`

TL;DR