When cyclomatic complexity is high, it's believed that the code was written in "spaghetti code" style. Indeed, when there are a lot of "linearly independent paths" through a function or method, that was probably a good indicator of complex code - at least in 1976, when the metric was developed.
However, I think that nowadays, with modern code syntax, this is not an ultimate indicator. Consider the following example:
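The original snippet did not survive the issue formatting. Below is a hypothetical function in the same style the post describes - all names are illustrative - with ten `||` fallbacks, which is the count that would make JSHint report a cyclomatic complexity of 11 (each `||` is one decision point, plus one for the function itself):

```javascript
// Hypothetical example: every `||` supplies a default when the option
// is missing/falsy. Each one adds a decision point, so ten fallbacks
// yield cyclomatic complexity 11 even though the logic is flat.
function makeConfig(opts) {
  opts = opts || {};
  return {
    host:    opts.host    || "localhost",
    port:    opts.port    || 8080,
    path:    opts.path    || "/",
    method:  opts.method  || "GET",
    timeout: opts.timeout || 3000,
    retries: opts.retries || 2,
    secure:  opts.secure  || false,
    debug:   opts.debug   || false,
    agent:   opts.agent   || "demo-client"
  };
}
```

Note that the assignments are order-independent: swapping any two lines changes nothing, which is exactly why the high complexity number is misleading here.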
JSHint says that the cyclomatic complexity number for this function is 11. Why? Because I use the handy syntax for fallback values all over the place. Is this spaghetti code? Definitely not. Is it hard to read? I don't think so. Most importantly, those statements are independent: if you change their order, nothing changes.
I understand - if we rewrite the code in the old-fashioned way, like this:
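The post's second snippet was also lost; here is a sketch of what the old-style rewrite of the hypothetical function above might look like, with each fallback spelled out as an explicit conditional (names are illustrative):

```javascript
// The same hypothetical function with every `||` fallback expanded
// into an if/else. Far noisier, yet each branch is still one decision
// point, so the cyclomatic complexity is exactly the same.
function makeConfig(opts) {
  if (!opts) {
    opts = {};
  }
  var config = {};
  if (opts.host) {
    config.host = opts.host;
  } else {
    config.host = "localhost";
  }
  if (opts.port) {
    config.port = opts.port;
  } else {
    config.port = 8080;
  }
  if (opts.retries) {
    config.retries = opts.retries;
  } else {
    config.retries = 2;
  }
  // ...and so on for path, method, timeout, secure, debug, agent
  return config;
}
```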
the function starts to look ugly. And I can't think of a single reason why anyone would want to write it that way.
Would you consider reducing the complexity contributed by certain constructs (such as the aforementioned one)? Or at least adding another metric that would measure complexity closer to real life?