
In defense of unary functions #233

Open
Avaq opened this issue Sep 27, 2021 · 19 comments
Labels
documentation Improvements or additions to documentation question Further information is requested

Comments

@Avaq

Avaq commented Sep 27, 2021

I'm creating this issue to address a line of reasoning primarily put forth by @mAAdhaTTah and others to a lesser extent.

I've been following most of the issues in this repository (a lot of reading, I know) and although I agree with many of the good arguments made recently in favour of Hack, one argument I see being made over and over again is one which implies that curried, data-last, unary functions exist solely to facilitate some kind of pipe operation. The reasoning then continues to favour the Hack proposal because "as opposed to the F# variant, it doesn't make people choose a calling style".

For example here (#215 (comment)), @mAAdhaTTah writes:

[library authors] provide them because curried functions work better with pipe

And the remainder of his argumentation rests on the statement above.

An example of the continuation of this line of reasoning can be seen, among many other places, here in @tabatkins comment:

the library author has to choose this calling convention at the time they write the function, and library users have to know which convention the function is expecting in order to call it correctly. -- #206 (comment)

While this is indeed true for the F# operator, what I'd like to clarify in this thread is that it's also true for the Hack operator, and thus somewhat of a non-argument.

The pipe operator is a tool for code linearization. If point-free programming is the tool to achieve that, fine -- #206 (comment)

This is another comment that I view as part of the same line of reasoning. The implication here is that point-free programming accommodates the pipeline operator, rather than the pipeline operator accommodating point-free programming. Here are some more examples of this line of reasoning:

People aren't designing whole libraries in curried / point-free style so they can use them with .map. -- #215 (comment)

Introducing a third calling convention, asking library authors to design their APIs for that calling convention, asking users to specifically choose this calling convention, will further cleave functional JavaScript from the rest of the community & ecosystem -- #206 (comment)

A point-free approach is a tool for writing code in [a linear] way. -- #206 (comment)

These comments seem to stem from a belief that there is no virtue to function currying other than tackling the code linearization problem. I'd like to dispel this belief and, in doing so, provide a counterargument to this line of reasoning.


I am one of the "library authors" referred to in various comments about my supposed motivation for providing curried functions. I am the author of Fluture and a core maintainer for Sanctuary - both libraries that leverage unary curried functions very heavily.

My motivation for using curried, data-last, unary functions is not purely for code linearization. In fact, code linearization could be left out of this consideration altogether. Let's examine some of the reasons for sticking with unary functions without going into code linearization.

  1. Firstly, the unary function can be treated as data like no other construct. Let me elaborate:

    • The untyped lambda calculus shows that you don't need any data types other than the unary function to achieve Turing-completeness. Functions are the data.
    • Functional programmers make use of function combinators to modify their functions as data.
    • Unary functions can themselves act as Functors, Monads, and many other algebras. In fact, many of the combinators from the gist linked above are also valid implementations of various algebras for the Function data type, and as a result are widespread in the JavaScript ecosystem (in Ramda, fp-ts, Sanctuary, etc.).

    Being able to treat functions as data enables a form of metaprogramming not easily achieved in environments where functions vary a lot in their arity.

  2. Secondly, data-last, curried functions encourage code reuse. A simple example is the definition of increment by (partial) application of a curried add function to 1. But of course this principle extends to functions of far greater complexity.

  3. Thirdly, and perhaps most generally, curried unary functions are an incredibly simple unit of abstraction. This last point is difficult to get across, but it goes a bit like this: when working with simple units of abstraction, the cognitive load from reasoning about the abstractions themselves is lowered, leaving more head space to deal with the complexities of the code you're editing. Having experienced both styles of programming quite heavily, I can vouch from my own experience for what a major difference this makes. To summarize: another reason for using unary functions is to optimize for simplicity of abstraction.

  4. EDIT: I added a final reason (composability) a ways down this thread: In defense of unary functions #233 (comment)
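To make point 2 concrete, here is a minimal sketch of the increment-from-add example in plain JavaScript (the names `add`, `increment`, and `map` are illustrative, and the currying is manual since JavaScript has no auto-currying):

```javascript
// A manually curried, data-last add: each call takes one argument.
const add = a => b => a + b;

// Code reuse through partial application: increment is just add(1).
const increment = add(1);

// The same principle with a data-last map over arrays:
const map = f => xs => xs.map(f);
const incrementAll = map(increment);

console.log(increment(41));        // 42
console.log(incrementAll([1, 2])); // [2, 3]
```

Because the data comes last, the partially applied function is immediately reusable wherever a unary function is expected.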

The three reasons listed above don't go into code linearization, but are enough for me to favour this style and provide libraries which encourage this style. This means that the assumption that library authors would have chosen non-curried, data-first function APIs if it wasn't for the code linearization problem is, at least in my case, false.


So even without trying to tackle the code linearization problem, people like me already choose to be in a world of curried, data-last, unary functions. The idea that unary functions are a means to an end, where the end is "code linearization", is incorrect. A functional programmer will use data-last curried functions either way, and the pipe functions and eventual |>-proposal grew from a need to facilitate code linearization within an ecosystem that already favours curried, data-last functions.

Solving the code linearization problem within this context has proven problematic:

  • Variadic function composition functions such as Ramda's pipe and compose, and FP-TS' pipe and flow are impossible to type properly or fully in TypeScript.
  • It's also impossible to provide type information for these functions in Hindley-Milner (the type language typically used to document functional libraries) without inventing syntax.
  • The .pipe method used in Fluture and RxJS provides a solution for code linearization which, while not suffering from the problems listed above, is limited to the data type it's implemented on.
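For illustration, a minimal variadic pipe in the spirit of fp-ts's pipe (value first, then functions) can be sketched as follows — a simplified sketch, not either library's actual source:

```javascript
// A minimal variadic pipe: threads a value through unary functions,
// left to right. Typing this fully in TypeScript requires a separate
// overload per arity, which is the limitation described above.
const pipe = (x, ...fns) => fns.reduce((acc, f) => f(acc), x);

const double = n => n * 2;
const inc = n => n + 1;

console.log(pipe(3, double, inc)); // 7
```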

Now this, in my perspective, is what the need for a binary infix function application operator (|>) stems from. Not to solve code linearization within the greater JavaScript ecosystem (which has other solutions, such as fluent method chaining), but specifically to address the code linearization problem within the growing functional programming in JavaScript "niche".

This means that arguments made from the position that "thanks to the Hack pipeline operator, functional programmers can now finally give up on the inconvenience of using unary functions" completely miss the point. Unary functions are not an inconvenience we deal with for linear code, but rather are at the core of the functional paradigm. With that in mind, I hope you can see that the idea that Hack doesn't make people choose a calling style, where F# does, is also wrong:

  • It's true that the F# variant puts non-functional programmers in a position where they have to choose between ergonomic code linearization through tacit style, or an idiomatic data-first non-curried API (if we ignore fluent chaining as an option); however:
  • The Hack variant puts functional programmers in the same position, forcing the opposite choice between ergonomic code linearization through argument-first style, or an idiomatic (as I've tried to show) curried API.

A small note on the topic of this thread: I am aware of good arguments against implementing a feature in JavaScript that accommodates a code style used only within a specific niche. We don't need to go into that here, as it has been discussed in many other threads. I specifically wanted to address a common line of reasoning that I've seen used across the forum, which I believe to be false and haven't seen properly addressed. I'm looking forward either to changing the perspective of those using this line of reasoning, or to being corrected myself.

@mAAdhaTTah
Collaborator

mAAdhaTTah commented Sep 27, 2021

I've been really busy over the past few days, and will continue to be, so I can't respond to this in depth, but since I'm the primary individual making this argument, I did just want to drop a quick line. I will just add that I spelled out a chunk of this argument in this blog post as well, so just wanted to drop that reference for completeness. Otherwise, I appreciate the perspective and will give it a think & response when I get some time.

@tabatkins
Collaborator

and the pipe functions and eventual |>-proposal grew from a need to facilitate code linearization within an ecosystem that already favours curried, data-last functions.

To the best of my knowledge, this isn't generally true. It definitely could be for some libraries (it appears that your libraries were written this way for this reason), but a number of libraries such as RxJS were written in this style because it was the best tool to achieve their goals within the existing JS syntax.

In particular, those goals were:

  • enable a fluent-programming API for repeated transformations on a value, similar to what libraries like jQuery allow
  • enable tree-shaking for the library; that is, allow the various operations to be import-able on an individual basis, so tooling can discard all the non-imported operations and prepare a minimal code bundle to send over the wire. (RxJS, for example, has a big library of Observable methods inherited from ReactiveX that any given bit of code is probably only using a fraction of.)

Under existing JS syntax, "fluent programming" requires the operations to be written as methods, but that requires them all to be stored on the class prototype, which means they aren't importable separately. "Tree-shaking" requires free functions so they're importable separately, but then using them is really annoying with stacks of nested functions rather than the nice linear code of a "fluent" API.

Using unary-returning functions and a pipe() function gives you the benefits of both styles, with relatively small downsides, so it's definitely understandable why this became popular in several libraries. But any solution that gives the same benefits with similar or fewer downsides would satisfy them just as well.
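As a rough sketch of how those two goals combine, consider a hypothetical container type (Box, map, and filterOr below are invented names for illustration, not RxJS code): operators are individually importable free functions, so bundlers can tree-shake unused ones, while a single .pipe method restores the fluent reading order:

```javascript
// A minimal container with a .pipe method, in the RxJS/Fluture spirit.
class Box {
  constructor(value) { this.value = value; }
  // .pipe threads the container through unary operator functions.
  pipe(...fns) { return fns.reduce((acc, f) => f(acc), this); }
}

// Operators are individually importable free functions (tree-shakable),
// written in the unary-returning style.
const map = f => box => new Box(f(box.value));
const filterOr = (fallback, pred) => box =>
  pred(box.value) ? box : new Box(fallback);

const result = new Box(5).pipe(map(n => n * 2), filterOr(0, n => n > 3));
console.log(result.value); // 10
```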


the library author has to choose this calling convention at the time they write the function, and library users have to know which convention the function is expecting in order to call it correctly. -- #206 (comment)

While this is indeed true for the F# operator, what I'd like to clarify in this thread is that it's also true for the Hack operator, and thus somewhat of a non-argument.

It is true that there is some symmetry, but it's important to look at the situations through the lens of usage numbers, which breaks that symmetry.

Right now, there are two ways of defining and calling functions in "common" JS, that you'll see across the vast majority of written code:

// method style
Obj.prototype.foo = function(a, b) {...}
Obj.prototype.bar = function(c, d) {...}
obj.foo(1, 2).bar(3, 4);
// Can technically be called as a free function
// with Obj.prototype.foo.call(obj, 1, 2), but rare.
// Can be called in F#-pipe with `obj |> x=>x.foo(1, 2) |> x=>x.bar(3, 4)`

// free-function style
function foo(obj, a, b) {...}
function bar(obj, c, d) {...}
bar(foo(obj, 1, 2), 3, 4);
// Can't realistically be called in method-ish style.
// Can be called in F#-pipe with `obj |> x=>foo(x, 1, 2) |> x=>bar(x, 3, 4)`

Unary-returning is a new, third pattern:

function foo(a, b) { return obj=>{...} }
function bar(c, d) { return obj=>{...} }
obj |> foo(1,2) |> bar(3,4)
obj.pipe(foo(1,2), bar(3,4))
// Already looks similar to method-ish style.
// Can technically be called in free-function style 
// with `bar(3,4)(foo(1,2)(obj))`, but rare

Today, unary-returning is a rare pattern. It's used, as I said above, for some libraries that want tree-shaking and a fluent interface, getting the benefits of both methods and free functions. It's also used for some libraries that genuinely enjoy the benefits of HOF programming and have a library of functional combinators to play with. But the vast majority of JS written in the world does not write their functions in that style or call functions written in that style.

Choosing to base a pipeline operator on enabling and encouraging unary-returning functions is definitely possible, and would be the most direct translation for existing codebases using that style today. But we could get the same non-HOF-specific benefits by instead basing it on either of the other patterns: the "bind operator" proposal encouraged writing free functions that used this as the context value, and calling them like obj::foo(1,2)::bar(3,4); the current pipeline proposal encourages writing free functions but lets you invoke them in a method-ish style.

The existing two common defining/invoking styles already cause problems for authors sometimes. Learning isn't too bad, because "method-style" is so ubiquitous across languages that it's something one is virtually guaranteed to encounter and learn anyway. But usability suffers, because code written with the assumption of free functions doesn't work great with methods - arr.map(obj.foo) will often fail where arr.map(x=>obj.foo(x)) works, because you lose the receiver in the first example (the this value in foo() ends up undefined or the global object, rather than obj). If a third pattern becomes common this will only get worse - libraries will be written expecting free functions, or expecting unary-returning, and authors will have to adapt their code (or code from other libraries written in the other style) to match.
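The receiver-loss pitfall described above can be reproduced in a few lines (Counter is a hypothetical class for illustration):

```javascript
// A method that relies on its receiver via `this`.
class Counter {
  constructor(step) { this.step = step; }
  add(n) { return n + this.step; }
}
const counter = new Counter(10);

// Passing the method as a free function detaches it from its receiver:
// class bodies are strict mode, so `this` inside add is undefined here,
// and reading `this.step` throws a TypeError.
let failed = false;
try {
  [1, 2].map(counter.add);
} catch (e) {
  failed = true;
}

// Wrapping in an arrow function preserves the receiver.
const ok = [1, 2].map(x => counter.add(x));
console.log(failed, ok); // true [ 11, 12 ]
```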

(A bind-operator would allow you to write functions in a somewhat familiar style, looking like a method, but would still involve invoking them in a third, new way - they're convenient with the bind operator but a little weird to call "normally", as foo.call(obj, 1, 2). So it still suffers from some of the "third way" problems.)

In languages with auto-currying, unary-returning and free-functions aren't distinct patterns; you write the function the same way, and it's merely a matter of whether the user calls the function with all of its arguments or leaves some off. So in Haskell, F#, etc, you don't have these third-way problems. But JS doesn't have that, and likely never will.

(Heck, langs like Haskell or Common Lisp don't even have the second way - their "methods" are just free functions with type signatures that cause the right version to be invoked on the right type of object. They use function composition or some variety of pipeline to get "fluent"-style syntax.)


So, wrapping this up:

With that in mind, I hope you can see that the idea that Hack doesn't make people choose a calling style, where F# does, is also wrong:

  • It's true that the F# variant puts non-functional programmers in a position where they have to choose between ergonomic code linearization through tacit style, or an idiomatic data-first non-curried API (if we ignore fluent chaining as an option); however:
  • The Hack variant puts functional programmers in the same position, forcing the opposite choice between ergonomic code linearization through argument-first style, or an idiomatic (as I've tried to show) curried API.

This is indeed symmetrical, but the implication that it's symmetrical in effect is false, and that difference is what we're concerned about. The vast majority of code is "non-functional", and/or written in data-first non-curried style, including most particularly the entire Web Platform's API surface. Favoring that pattern over one that's used much less often isn't a neutral choice.

We could support HOFP more. But for it to be good for authors, we need a lot more support, at both the syntax and library level; until then it'll always be a less convenient syntax for JS authors. That additional support is unlikely to ever come in any significant way (much of it has to be built into the language in a pretty fundamental way, and the Web Platform's API can't just be swapped out for a HOF-focused version). One individual change in favor of HOFP won't significantly impact HOFP's usability, but it will make the change less useful to the web that doesn't use HOFP in a major way.

@Avaq
Author

Avaq commented Sep 27, 2021

Thank you for the thoughtful and extensive response @tabatkins 😃.

what I'd like to clarify in this thread is that [forcing users to choose a calling style] is also true for the Hack operator

It is true that there is some symmetry, but it's important to look at the situations thru the lens of usage numbers, which breaks that symmetry. [... T]hat difference is what we're concerned about. The vast majority of code is "non-functional", and/or written in data-first non-curried style, including most particularly the entire Web Platform's API surface. Favoring that pattern over one that's used much less often isn't a neutral choice.

This makes sense, and I suppose counters my argument regarding the symmetry. I guess what bothered me is that my perspective as a member of the minority which does prefer HOFP wasn't being reflected in most of the arguments being made.

Under existing JS syntax, "fluent programming" requires the operations to be written as methods, but that requires them all to be stored on the class prototype, which means they aren't importable separately.

I am aware of this problem, but I never saw the pipeline operator as a proposal which aims to solve it. Instead, the F# pipeline operator solves a very specific problem within the HOFP space, and somehow through what feels like scope creep became a solution to problems that exist in other spaces at the cost of no longer solving the original problem.†

But as you also said:

a number of libraries such as RxJS were written in this style because it was the best tool to achieve [the] goals [of] enabl[ing] a fluent-programming [style, and] enabl[ing] tree-shaking for the library

So maybe this "original problem", which I thought was being addressed by the initial pipeline proposal, isn't really as widespread as I imagined. In this case I guess it comes down to usage numbers again. On the other hand, I have seen a prominent RxJS contributor (author?) mention that Hack isn't useful to them, and assumed that they too were looking forward to this "original problem" being addressed.

The existing two common defining/invoking styles already cause problems for authors sometimes. Learning isn't too bad, because "method-style" is so ubiquitous across languages that it's something one is virtually guaranteed to encounter and learn anyway. But usability suffers, because tooling written with the assumption of free functions doesn't work great with methods - arr.map(obj.foo) will often fail where arr.map(x=>obj.foo(x)) works, because you lose the receiver from the first example (the this value in foo() gets set to null, rather than obj). If a third pattern becomes common this will only become worse - libraries will be written expecting free functions, or expecting unary-returning, and authors have to adapt their code (or code from other libraries written in the other style) to match.

I think this is the strongest argument directly against F# that I've seen. The Hack operator doesn't encourage any new style of function definition, while still allowing functions to be called in a somewhat linearized way (I say "somewhat" because when reading code I find matching each topic reference to each pipe operator to be a very non-linear process). I guess the Hack operator allows the JavaScript community to continue to converge on a single function definition style, while the F# operator would cause further divergence in this space.

On the other hand, convergence further marginalizes the minorities (of which I myself am a part) that continue to favour HOFP. In the beginning, I was hopeful that the F# operator would contribute to the popularization of HOFP in JS, but instead it has turned into a feature which appears to aim for the opposite.†


† I apologize for the tone of these statements. What comes through is an expression of a sort of disappointment with the way that something that looked like it was really great news for devs like me, morphed into something which seems to actually be fairly bad news for devs like me. I think @js-choi captured that quite well in #215 though so no need to process that here 😉.


PS: When your response appeared, I was typing a whole piece about composability, what it is, and how it's another reason for choosing HOFP without using HOFP as a means to solve linearization. Based on your response, I don't think I need this piece any more to make my point, as my argumentation for favouring HOFP has not been questioned. If anyone's interested though, I can still refine it and then post it. 🤷

EDIT: I did end up posting it in the first section of this comment: #233 (comment)

@tabatkins
Collaborator

On the other hand, I have seen a prominent RxJS contributor (author?) mention that Hack isn't useful to them and assumed that they too were looking forward to this "original problem" being addressed.

I'm not sure of exactly which comment you'd be talking about (I just got back from a week's vacation and have been quickly digesting the threads, but there's Just. So. Much. Text.), but at least in personal conversation with Ben Lesh he suggested that, had Hack-style pipelines been present at the time RxJS was being written, they'd definitely have used them in the way I've been suggesting (free functions with "idiomatic JS" function signatures, taking the context object as the first arg). At this point switching over to them would mean a lot of churn, which is understandable, of course; from what I can tell the RxJS team is currently not planning to make any changes in response to the pipe operator.

The Hack operator doesn't encourage any new style of function definition, while still allowing functions to be called in a somewhat linearized way [...]. I guess the Hack operator allows the JavaScript community to continue to converge on a single function definition style, while the F# operator would cause further divergence in this space.

On the other hand, convergence further marginalizes the minorities (of which I myself am a part) that continue to favour HOFP. In the beginning, I was hopeful that the F# operator would contribute to the popularization of HOFP in JS, but instead it has turned into a feature which appears to aim for the opposite.

Yup, correct in all regards.

(I say "somewhat" because when reading code I find matching each topic reference to each pipe operator to be a very non-linear process)

I wonder how much of this is due to reading examples of existing tacit libraries being naively adapted to use the pipeline, aka val |> foo(1,2)(^)? Those are indeed a little non-trivial to read, especially when the RHS is a longer operation or has an inline callback as an argument or something. When used on functions written in the more standard free-function style with the most important argument first, I think val |> foo(^, 1, 2) is plenty readable and linear, since it's identical to how you'd write the code un-pipelined: foo(val, 1, 2).

@js-choi js-choi added the question Further information is requested label Sep 27, 2021
@tabatkins
Collaborator

tabatkins commented Sep 27, 2021

(Edit: @js-choi just accidentally deleted a big comment from themselves about their own experience as a Lisper with functional combinators and lambda, which was immediately above this comment and I was responding to. :( )

Yup, @js-choi's experience in Lisp mirrors my own. The big combinators I used a lot in Lisp were the pair of <> and >< I wrote, which turn a unary list-accepting function into an n-ary function and vice versa (the mnemonic was that it "expanded" an arg into an arglist, or "compressed" an arglist into a single arg), but for everything else I just used lambda.

Yeah, some built-ins on the Function object would likely be unobjectionable if someone spent the time on them. The only one that might get pushback is Function.identity, since you can spell it today as x=>x, but having a pre-built one might still be worthwhile for microperf reasons if nothing else.

@js-choi js-choi added the documentation Improvements or additions to documentation label Sep 28, 2021
@mAAdhaTTah
Collaborator

mAAdhaTTah commented Sep 28, 2021

If I may quickly (edit: ok, maybe not that quickly...) chime in to tie this last part back to the argument I've been making: the differences in usage of these tools are a consequence of the affordances of the language. Because Clojure has lightweight tools for modifying these functions with anonymous functions, function combinators are less popular there.

Functional JavaScript's history reaches all the way back to pre-ES6, when anonymous functions were syntactically quite heavy, so the affordances of the language made those mathematical tools quite useful. A combinator like thrush or flip is syntactically lighter than manually wrapping a function in another function, so functional JavaScript adopted those combinators. With arrow functions, it's easy enough to write (a, b) => myLibFunc(b, a) that the value of flip goes down. Further, if you're composing some functions and would normally reach for one of these combinators to get them to line up with the pipeline you're trying to build, you can now build the pipeline without them:

foo
|> funcThatCreatesAnotherFunc(^)
|> ^(bar) // thrush
|> baz(qux, ^) // flip
|> baz(^, qux) // or maybe this is flip? who knows, doesn't matter, put the args where you want!

Functional combinators are functions in math, but it's not a requirement that they be expressed as functions in the language. If we can accomplish the goals of these mathematical concepts with syntax, we should, especially when the resulting code is more idiomatic for the language at large. The preferred syntax to do that is a question of the affordances of the language. If you have to write function & return, it's preferable to use these combinators. If you can do =>, writing one-off arrow functions gets more appealing. It also means that if we agree Hack pipe is more natural for the mainstream of the language, we have to consider what a functional JavaScript derived from that baseline would look like. Relative to today, it may be fewer function combinators, more arrow functions + pipe.
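For reference, the two combinators named above each have a one-line arrow-function definition, which makes the trade-off against inline arrows easy to see:

```javascript
// Classic combinators expressed as arrow functions.
const flip = f => (a, b) => f(b, a); // swap the first two arguments
const thrush = x => f => f(x);       // apply a value to a function

const subtract = (a, b) => a - b;

console.log(flip(subtract)(2, 10)); // 8, i.e. subtract(10, 2)
console.log(thrush(3)(n => n + 1)); // 4
```

With => this cheap, writing (a, b) => subtract(b, a) inline is barely longer than flip(subtract), which is the shift in affordances being described.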

I guess the Hack operator allows the JavaScript community to continue to converge on a single function definition style, while the F# operator would cause further divergence in this space.

On the other hand, convergence further marginalizes the minorities (of which I myself am a part) that continue to favour HOFP. In the beginning, I was hopeful that the F# operator would contribute to the popularization of HOFP in JS, but instead it has turned into a feature which appears to aim for the opposite.

I think it's defeatist to view this as marginalization. The functional JavaScript community has been incredibly creative with its approach to adapting these functional techniques & tools in a language that is only somewhat hospitable to the style. Ramda; Sanctuary; Fluture; these are all incredible libraries that let you do wonderful things in JavaScript. The introduction of the Hack pipe is an opportunity to rethink functional programming in JavaScript and adapt our tools in new creative ways.

I recognize that this means churn in the functional ecosystem, and I appreciate that a lot of pain will result as the ecosystem adapts to the new syntax and the style that results from it. I also think the result will be a better, more idiomatic functional JavaScript overall, and a resulting ecosystem that can share more of its tools & techniques with mainstream JavaScript. I'm excited to see where that goes.

@js-choi
Collaborator

js-choi commented Sep 28, 2021

GitHub deleted my comment from yesterday, and I’m not sure why. I’m going to try to reproduce it…

Like @mAAdhaTTah, I like your post a lot. I think it cuts to the meat of some important issues.

I certainly agree that algebras based on unary functions are very elegant and have many elegant properties. And programming languages that happen to be based on auto-curried unary functions gain these elegant properties. I certainly did not mean to downplay this elegance when I co-wrote the explainer—any such downplaying would be a deficiency of the explainer.

One thing I’ve been thinking about is that my perspective comes from Lisps (especially Clojure). Like JavaScript, the Lisps usually are non-auto-currying, n-ary FP languages. After all, they’re based on lists.

In these Lisps’ non-curried n-ary functional programming, many of the function combinators in your Gist are simply not used often.

This doesn’t mean that tacit programming or other FP does not occur in these Lisps. Function composition is common. Constant functions are common. Monads are common.

But (in Lisps) we simply don’t use many of the other combinators in your Gist. I can’t remember the last time I’ve used flip in a Lisp, for example…I just use anonymous functions, and it’s clear enough.

Furthermore, many FP combinators do not necessarily act on unary functions. For example, monadic binding (with type MX → MY) is not a unary function. Monadic binding requires two arguments (the input value, with type MX, and the binding function, with type X → MY). The fact that curried languages use monadic binding in a curried style like Haskell’s m x -> (x -> m y) -> m y doesn’t change the fact that monadic binding is fundamentally non-unary…it’s just conveniently covered up by the fact that monadic bind is usually an overloaded binary operator like >>=. In a Lisp, we’d just make a binary function call (m-bind input f) as usual.
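The arity point can be illustrated in JavaScript, where Array.prototype.flatMap plays the role of monadic bind for arrays; mBind and chain below are hypothetical helper names packaging the same binary operation in Lisp style and in curried data-last style respectively:

```javascript
// Plain binary bind, Lisp-style: both arguments at once.
const mBind = (mx, f) => mx.flatMap(f);

// Curried, data-last bind, Haskell/Ramda-style: one argument at a time.
const chain = f => mx => mx.flatMap(f);

const duplicate = x => [x, x];

console.log(mBind([1, 2], duplicate)); // [1, 1, 2, 2]
console.log(chain(duplicate)([1, 2])); // [1, 1, 2, 2]
```

The operation is fundamentally binary; currying only changes how the two arguments are supplied.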

(Speaking of your Gist, I also am hoping to propose Function.compose, Function.identity, and Function.constant soonish. Unlike F# pipes and PFA syntax (which, as you know, have perennially run into strong concerns from outside the champion group), I think compose, identity, and constant do have good chances with TC39 accepting them to Stage 4. I’ll do my best.)
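For context, those three helpers are small enough to sketch in userland today (the signatures below are assumptions for illustration; any eventual Function.compose proposal may differ, e.g. by being variadic):

```javascript
// Userland sketches of the helpers mentioned above.
const compose = (f, g) => x => f(g(x)); // binary right-to-left composition
const identity = x => x;                // spellable today as x => x
const constant = x => () => x;          // ignores its call arguments

const shout = s => s.toUpperCase();
const exclaim = s => s + "!";

console.log(compose(exclaim, shout)("hi")); // HI!
console.log(identity(7));                   // 7
console.log(constant(3)());                 // 3
```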

…Anyways, I appreciate this post. Hopefully my own Lispy perspective brings some color: many of these combinators over unary functions (as elegant as they are) are not required for functional programming, at least from my Lispy experience.

@Avaq
Author

Avaq commented Sep 28, 2021

GitHub deleted my comment from yesterday

Huh. Indeed! I did get a chance to read your original comment ~10 hours ago, but have been too busy with work to respond. Thank you for reposting it. I really appreciate all the time and effort you all are putting into explaining your points of view. I'm tired now but I'll take it all in tomorrow and try to provide feedback.

@Avaq
Author

Avaq commented Sep 29, 2021

libraries such as RxJS were written in this style because it was the best tool to achieve [the] goals [of] enabl[ing] a fluent-programming [style, and] enabl[ing] tree-shaking for the library

[but an] RxJS contributor (author?) mention[ed] that Hack isn't useful to them

I'm not sure of exactly which comment you'd be talking about (I just got back from a week's vacation and have been quickly digesting the threads, but there's Just. So. Much. Text.) -- @tabatkins

Haha, yeah. The amount of passion around this proposal is immense and the text just keeps flowing in. Anyway, I was referring to comments like "IMO, the hack proposal isn't useful enough to justify the additional syntax. #228 (comment)" and "Unfortunately, the direction that the proposal is going isn't particularly useful to the community I serve. #218 (comment)". From these comments I assumed a preference for the F# proposal, but that may have been a misinterpretation.


One thing I’ve been thinking about is that my perspective comes from Lisps (especially Clojure). Like JavaScript, the Lisps usually are non-auto-currying, n-ary FP languages. [...] But (in Lisps) we simply don’t use many of the other combinators in your Gist. -- @js-choi

Yup, @js-choi's experience in Lisp mirrors my own. The big combinator I used a lot in Lisp was the pair of <> and >< [...], but for everything else I just used lambda. -- @tabatkins

My background in functional programming, interestingly, is in JavaScript itself. I always had good intuition for HOFs (even back when I did mainly PHP, first via callables, then with lambdas) and when I discovered functional programming through Ramda it all just clicked and made a lot of sense for me personally to program in the HOFP style: As in, somehow it just aligns well with my way of thinking.

From there I started solving the problems I was encountering with this style of programming in JavaScript. The most painful thing about it was the error messages like: g is not a function; at f; at g; at f; at f. This problem spawned Fluture and got me into Sanctuary.

(Bear with me because I'm going somewhere with this)

Sanctuary is a library that adds a full-blown runtime type system (heavily inspired by the Haskell type system) and monadic branching types (Maybe and Either; also Haskell inspired) to an otherwise "Ramda style" HOFP lib. The library now acts a bit like a stepping stone for developers coming from Haskell to JavaScript, or going from JavaScript to Haskell or PureScript.

Over the years, things that are idiomatic in Haskell - where there is syntax support for do-notation, pattern matching, list comprehensions, etc. - have had counterparts implemented in Fluture/Sanctuary. But because adding these through language features has not been an option, we've basically used functions for everything. For example, Sanctuary implements case analysis (or "destructuring") functions for every data type as an answer to pattern matching (array for deconstructing an Array, either for deconstructing an Either, etc, even boolean for deconstructing Booleans).
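(A rough sketch of what such a case-analysis function looks like - not Sanctuary's actual implementation, and the Left/Right encoding here is my own - matching the shape of Sanctuary's either :: (a -> c) -> (b -> c) -> Either a b -> c:)

```javascript
// A minimal Either encoding, for illustration only.
const Left = x => ({ tag: 'Left', value: x })
const Right = x => ({ tag: 'Right', value: x })

// either deconstructs an Either by applying one of two handler functions.
const either = onLeft => onRight => e =>
  e.tag === 'Left' ? onLeft (e.value) : onRight (e.value)

either (err => `error: ${err}`) (n => `ok: ${n}`) (Right (42)) // → 'ok: 42'
either (err => `error: ${err}`) (n => `ok: ${n}`) (Left ('boom')) // → 'error: boom'
```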

Where I wanted to go with all this is that, through lack of language support, Sanctuary leans heavily on functions to implement language features. Being able to use function combinators reliably within this landscape, and to rely on functions always having the same shape (one input, one output, and no side effects), is a big factor in the success of a HOFP code base and became part of the idiom.

In summary, the Sanctuary "function for everything" approach has brought about an "idiomatic HOFP in JS" style where function combinators are very common ground. Interestingly, although we've strived to "make JavaScript feel like Haskell" with this lib, when I actually program in Haskell I find myself missing Sanctuary's destructuring functions, list generator functions, and other Haskell-language-features-implemented-as-functions. It turns out that when you have those, and you have function pipelining, you really don't need anything else, and switching from thinking about functions to some arbitrary language feature to do the job can even be jarring.

This also shows how much rests, from my perspective, on the F# pipe operator feature. It's something which would be used at the core of this style of programming. Of course, I know that the group consisting of those who program in this style is probably way too small to cater a language feature to. But I appreciate having been able to share my perspective and provide a window into a JavaScript sub-community that the committee possibly didn't know much about.


many FP combinators do not necessarily act on unary functions. For example, monadic binding (with type MX → MY) is not a unary function [...]. The fact that curried languages use monadic binding in a curried style like Haskell’s m x -> (x -> m y) -> m y doesn’t change the fact that monadic binding is fundamentally non-unary

While it's true that the binding operator itself can be non-unary, I was talking about unary functions being used as the operands:

 m    x  -> (x -> m    y) -> m    y  -- starting point
(m    x) -> (x -> m    y) -> m    y  -- added brackets for preserving grouping in next step
(i -> x) -> (x -> i -> y) -> i -> y  -- specialized in Function type by replacing occurrences of `m` constructor by function constructor `i -> _`

This is what I meant by "unary functions can themselves act as Monads": that there exists a valid monadic bind implementation for unary functions. That said, I just did some testing and was unable to exclude the possibility that an equally valid bind implementation could be created for functions of other arities. In Haskell, non-unary functions are just unary functions that take tuple inputs, so the bind operation for unary functions automatically also works on non-unary functions. In JavaScript, non-unary functions would need a specific bind implementation for each arity. Anyway, to get back to the point:
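(A sketch of that specialization in JavaScript - all names here are hypothetical. This is the bind of the last line above, for unary functions sharing a common input i:)

```javascript
// bind :: (i -> x) -> (x -> i -> y) -> i -> y
const bind = mx => f => i => f (mx (i)) (i)

// Example: two steps that both read from the same shared input (a config).
const getHost = config => config.host
const formatUrl = host => config => `${config.protocol}://${host}`

const getUrl = bind (getHost) (formatUrl)
getUrl ({ protocol: 'https', host: 'example.com' }) // → 'https://example.com'
```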

many FP combinators do not necessarily act on unary functions.

many of these combinators over unary functions (as elegant as they are) are not required for functional programming

These are valid arguments against the reasons I gave for favouring unary functions. Since you're addressing my reasons, I do think I should add my fourth reason after all. The fourth reason is summarized as "unary functions compose". Sadly, I have misplaced the wall of text I wrote regarding composition. The gist of it is as follows though: Since functions only have one output, if you want to compose them together, you need something that only takes one input. I also made a case (in the text I lost) for why I think composability is important. I'd love to expand upon it but it's getting late again, so perhaps another day.


I'm going through comments in the order they were (originally, @js-choi ;)) posted. I still haven't gotten to @mAAdhaTTah comment, which I regret because they're the person I first addressed with this thread. I want to reply to their comment too and I will, but for now I'm off to bed. Thank you for reading. :)

@Avaq
Copy link
Author

Avaq commented Oct 2, 2021

On composability as a reason for favouring unary functions

I'd love to expand upon [composability] but it's getting late again, so perhaps another day. -- me, a few days ago

In my view, something "composes", in the general sense, when it can connect together to form a greater unit which in turn composes in the same way as its parts.

Click here to reveal how I view and explain composition. Feel free to skip this part if you're familiar.

Composition is an operation typically denoted using the ∘ symbol. It takes two units as input, and returns a single new unit of the same type.

A way I like to explain (function) composition sometimes is through the analogy of connecting different adapter cables together. For example, you can connect a USB-C to USB-2 adapter to a USB-2 to Ethernet adapter to create a new USB-C to Ethernet adapter.

If we wanted to capture this operation with types, we'd need an Adapter type with two generic types describing the two sides of the cable. We composed Adapter<USBC, USB2> with Adapter<USB2, Ethernet> to get Adapter<USBC, Ethernet>. Crucially, this new adapter can be composed in exactly the same way with yet another adapter, for example one from micro-USB to USB-C.

If we captured this in types, we did Adapter<MicroUSB, USBC> ∘ Adapter<USBC, Ethernet> = Adapter<MicroUSB, Ethernet>. From the perspective of this composition operation, it didn't matter that the Adapter<USBC, Ethernet> operand was itself composed of smaller parts. Also, it doesn't matter which of the three cables I put together first: composing in the other order produces an Adapter<MicroUSB, Ethernet> just as well. This last principle is associativity and is fundamental to the idea of composition.

This adapter analogy maps nicely onto unary functions. If we replaced Adapter<I, O> in our types with Function<I, O>, you can already see how it works. The I becomes the function input and the O the output. I can also draw a parallel to non-unary functions by imagining a different type of adapter: one with a single input but two outputs.

After composing such an adapter with another, you reach a point where it cannot be composed again in the same way you were composing adapters earlier. You'd need a different composition operator that somehow maps two outputs to two inputs.

When designing an abstraction of any sort, allowing consumers of that abstraction to compose two units of abstraction to form a greater one is a very desirable property, in my opinion. It allows the user of the abstraction to compose and decompose their program in any way they see fit, giving them a very comfortable level of control over the organization of their code. It also allows the program to evolve in a very elegant way, where any completed program can itself become a unit within an even larger program through composition. Think of a React app, where your root level <App /> component can be used as part of another component just like that. Or if you've ever used Parser Combinators to write a parser, you can just take your completed parser and use it as part of an even greater parser.

This brings me to unary functions. Unary functions have this same desirable property that they can be composed together. If you have a function with a specific output type, and another with the same type as its input type, then voila, you can create a function with the input type of the one, and the output type of the other, which can in turn be composed in exactly the same way.
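(A minimal sketch of this in code, with helper and function names of my own choosing, showing both the matching of output type to input type and the associativity described earlier:)

```javascript
const compose = f => g => x => f (g (x))

const length = s => s.length      // Function<string, number>
const double = n => n * 2         // Function<number, number>
const show = n => `n = ${n}`      // Function<number, string>

// Associativity: it doesn't matter which pair we compose first.
const a = compose (show) (compose (double) (length))
const b = compose (compose (show) (double)) (length)
a ('hello') // → 'n = 10'
b ('hello') // → 'n = 10'
```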

Functions of greater arity don't share this property. Functions are inherently limited to having a single output, and so for them to be composed with other functions, the other functions must only take a single input.

Since unary functions compose, and unary functions can act as binary functions by simply returning another unary function, why would I ever give users of my abstraction a non-unary function? The only thing it achieves (besides the performance benefits, for now) is that it limits these users in their ability to define their program as a composition of the units that I gave them.

Responding to more comments

Right with that out of my system, I'll get back to the comments I wanted to respond to.

(Speaking of your Gist, I also am hoping to propose Function.compose, Function.identity, and Function.constant soonish. [...] I think compose, identity, and constant do have good chances with TC39 accepting them to Stage 4. I’ll do my best.) -- @js-choi

EDIT: The following response came out of misunderstanding of the scope of the proposal. See #233 (comment) for an updated response

Outdated response based on misunderstanding

Adding these combinators to the core JavaScript API, while serving as a friendly nod to the FP in JS community, and perhaps even as inspiration to try FP for a very small amount of people who might stumble onto them, wouldn't in my opinion necessarily present much benefit to the FP in JS community overall.

Firstly, these combinators are very trivial to define in userland: there is no lack of language features limiting anyone from doing so.

But more importantly, based on my observations of the TC39 process, consensus seems to often be reached via concessions. In this case, I'd be concerned that when these combinators are formally proposed, they'll be muddied by this process, and end up with "features" (or flaws, if you ask me) like not being unary, not just taking unary-function inputs, having special cases for async functions, etc. Formally adding combinators to the language with any of these flaws could once again work to the detriment of HOFP proponents.

So while I appreciate the idea, I'm very wary of the potential losses coming out of it, especially when compared against the potential gains - it essentially looks like a risky move to me, just as proposing the F# pipeline operator turned out to have been a risky move.


A combinator like thrush or flip is syntactically lighter than wrapping a function manually in a function, so [pre- arrow function] functional JavaScript adopted those combinators. -- @mAAdhaTTah

I'm sure that this could have contributed to the adoption of function combinators in the early days. But I hope that I managed to convey, with much of the writing above, that communities exist (of which I am a member) where function combinators are still used and appreciated even after the introduction of arrow functions. For us, it's often easier to recognize patterns like ap (pair) (inc) versus x => [x, x + 1].

Full ap (pair) (f) example
const ap = f => g => x => f (x) (g (x)) // also known as the substitution (or S) combinator
const pair = x => y => Array (x, y)
const inc = x => x + 1

// All below functions are equivalent
const f1 = ap (pair) (inc)
const f2 = x => pair (x) (inc (x))
const f3 = x => pair (x) (x + 1)
const f4 = x => Array (x, x + 1)
const f5 = x => [x, x + 1]

The combinatorial version ap (pair) (inc) uses no language features other than unary functions, and yet takes the same number of characters as the syntax-heavy variant x => [x, x + 1] (even after I added spaces between function calls for readability). This is an example of how, from the perspective of a functional programmer embracing function combinators, language syntax doesn't add much, even post- arrow functions*.

However, I feel this last example is relevant to your next remark:

Functional combinators are functions in math, but it's not a requirement that they be expressed in functions in the language. If we can accomplish the goals of these mathematical concepts with syntax, we should, especially when the resulting code is more idiomatic for the language at large.

I fully acknowledge that the style of functional programming I've been demonstrating brings with it its own set of idioms which differ in many ways from common JavaScript idioms. I can't really provide a counter argument to this, and I understand why the committee felt that it's more appropriate to continue with Hack. I really only wanted to make you aware that the idioms that I've demonstrated do exist within some sub-community in the JavaScript ecosystem. Just for the sake of argument, and so that you'd think twice before reasoning from the perspective that curried, data-last functions exist within the ecosystem just to facilitate code linearization.

* Of course arrow functions themselves are hugely beneficial to FP, because we can only take the combinatorial approach so far before it becomes "too church-encoded to read" and because it allows us to define curried functions very succinctly. I don't mean to say that they aren't a useful language feature, but rather that they don't completely displace combinators.


convergence [on a calling style] further marginalizes the minorities that continue to favour HOFP. -- me

I think it's defeatist to view this as marginalization. [...] The introduction of the Hack pipe is an opportunity to rethink functional programming in JavaScript and adapt our tools in new creative ways. I recognize that this means churn in the functional ecosystem -- @mAAdhaTTah

While I appreciate the optimism, what you're suggesting, that the HOFP community "adapts" to the Hack pipe operator (presumably by converging on the same call style used throughout the greater JavaScript ecosystem), is more than just "churn". As I've tried to demonstrate, it would be a move away from the fundamental paradigm of higher order functional programming. A move that I can say with confidence many of the members of the community I serve would not want to make. Some within the community who feel less strongly about the benefits of HOFP might adapt, yes. But that's exactly what I mean when I say marginalization: while the community at large might converge, the HOFP community will experience a divide.

It is this principle that I've tried to make clear through this thread. That the Hack operator has the potential to cause a community divide just as much as the F# operator does. The impact of this divide is of course much smaller, because the impacted community is much smaller. But reasoning from the perspective that Hack creates no divide whatsoever, but just some "churn" in the HOFP community, feels like downplaying.

It's this feeling of being subjected to downplay that motivated me to create this thread in the first place. I would much rather be told "sorry, your community is too small to take into account" (as @tabatkins has done) than to be told "don't worry, we have a solution that makes everyone happy - you'll see.."!

@mAAdhaTTah
Copy link
Collaborator

@Avaq I've got lots more thoughts, but if you're still around, I've got one direct question: if you prefer const f1 = ap (pair) (inc) over const f5 = x => [x, x + 1], why would you use |> at all vs continuing with your current tools?

@Avaq
Copy link
Author

Avaq commented Oct 2, 2021

@mAAdhaTTah because of a limitation that is inherent to using parentheses in function calling style (f (x)), which makes solving the code linearization problem within this paradigm problematic. compose (f) (compose (g) (compose (h) (i))) (x) suffers from the nesting problem that |> tries to solve: x |> f |> g |> h |> i. The only limitation of using functions for everything is that we are subjected to this nesting problem. We need an infix operator that allows us to drop the parentheses to linearize our code*. I also touched upon this in my first post in this thread, under Solving the code linearization problem within this context has proven problematic.

* This could be an infix pipe operator, or an infix compose operator. Either would let us drop the parentheses and remove nesting.
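(To illustrate, a sketch with helper and function names of my own; the variadic pipe is a userland workaround, not the operator itself:)

```javascript
// The nesting problem: with only parentheses, chained compositions nest.
const compose = f => g => x => f (g (x))
const square = x => x * x
const double = x => x * 2
const inc = x => x + 1

compose (inc) (compose (double) (square)) (3) // → 19

// A variadic userland pipe linearizes the same chain without new syntax,
// at the cost of being a function call rather than an infix operator.
const pipe = (...fns) => x => fns.reduce ((acc, f) => f (acc), x)
pipe (square, double, inc) (3) // → 19
```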


EDIT: ...oh, unless you meant |> ^ (the Hack variant). In which case the answer is: I wouldn't (use Hack vs my current tools).

@js-choi
Copy link
Collaborator

js-choi commented Oct 3, 2021

Adding these combinators to the core JavaScript API, while serving as a friendly nod to the FP in JS community, and perhaps even as inspiration to try FP for a very small amount of people who might stumble onto them, wouldn't in my opinion necessarily present much benefit to the FP in JS community overall.

Firstly, these combinators are very trivial to define in userland: there is no lack of language features limiting anyone from doing so.

But more importantly, based on my observations of the TC39 process, consensus seems to often be reached via concessions. In this case, I'd be concerned that when these combinators are formally proposed, they'll be muddied by this process, and end up with "features" (or flaws, if you ask me) like not being unary, not just taking unary-function inputs, having special cases for async functions, etc. Formally adding combinators to the language with any of these flaws could once again work to the detriment of HOFP proponents.

So while I appreciate the idea, I'm very wary of the potential losses coming out of it, especially when compared against the potential gains - it essentially looks like a risky move to me, just as proposing the F# pipeline operator turned out to have been a risky move.

For what it’s worth, I’ve already added a proposal for unary Function helpers at Stage 0 to the agenda for the October TC39 plenary meeting. The explainer is still unfinished, but feel free to take a look at it and file issues.

Although standardizing Function helpers might indeed serve as a “friendly nod” to the FP-in-JS community, that would not be its primary goal. The primary goal would simply be to standardize some useful, common helper functions that get downloaded from NPM a lot.

Yes, compose, constant, identity, and pipe can be, and generally are, reimplemented as one-liners in userland. But standardization of these functions is still nice for clarity (e.g., function f (mapFn = identity) is arguably clearer than function f (mapFn = x => x)). Standardization would also help ergonomics (e.g., not needing to import a library just to compose some functions in a REPL). It may also be nice, to a lesser extent, for optimization. See tc39/proposal-function-helpers#1.

They’re included in the standard libraries of even n-ary non-auto-curried languages like Clojure for a reason, despite their being easily implementable in userland: because people do reach for them and would use them.

Indeed, I am hopeful that standardized compose and pipe functions would obviate some of the need for a standardized F#-pipe infix operator.

Yes, an infix operator would also be nice. But TC39 considers syntax to be very expensive, and it is much more likely to standardize compose() and pipe() than the F# pipe operator (which I still do plan to support after finishing with Hack pipes). These are first steps.

As for your last paragraph’s concerns about TC39 muddying compose etc. if they get standardized, please do feel free to read the new proposal’s explainer, and file an issue if you see any problems with its proposed behaviors.

@Avaq
Copy link
Author

Avaq commented Oct 3, 2021

Speaking of your Gist, I also am hoping to propose Function.compose, Function.identity, and Function.constant

Adding these combinators to the core JavaScript API, while serving as a friendly nod to the FP in JS community, and perhaps even as inspiration to try FP for a very small amount of people who might stumble onto them, wouldn't in my opinion necessarily present much benefit to the FP in JS community overall.

I’ve already added a proposal for unary Function helpers at Stage 0 to the agenda for the October TC39 plenary meeting. The explainer is still unfinished, but feel free to take a look at it and file issues.

I've read the explainer and realize now that the scope of the proposal is not so much about adding function combinators to JavaScript core, and solving the problem of being unable to do combinatory style programming out of the box, but rather to add some of the common functional utilities, including pipe, that can be found in various libraries, and solving among others the code linearization problem.

The concerns that I raised were mostly about the "combinatorial logic" scope, and not the "functional utilities" scope. Consider my concerns alleviated. I hope I didn't come across too negatively. I'm actually quite happy to see these things potentially making their way into the language. So thank you :). I do have some other ideas and suggestions that I'll leave under the new proposal.

@mAAdhaTTah
Copy link
Collaborator

@Avaq To start, I want to say I appreciate you sharing your thoughts here. There's definitely an aspect to this that I had not considered and have been pushing F# adherents to provide, but given the discussion's overall reliance on more trivial examples, it was unclear to me why the syntax would be harmful to functional programming. This has been incredibly enlightening, so thank you for that.

I'll also say upfront that one of the things I'm getting from this, given your use of "HOFP", is that we're kind of addressing a "subset of a subset"; namely, that HOFP itself is a subset of FP in JS more generally, which itself is one of several approaches to programming in JavaScript one can take, based on the problem being solved & the stylistic preferences of the author. If this is wrong, or a misinterpretation, it'll severely undermine the rest of this post, so I want to acknowledge that up front as a potential weakness as well.

All of that said, you're probably right, there will be a split within the FP-in-JS community between those who take lighter/more syntactical approaches to FP (e.g. preference for x => [x, x + 1]) and those who still prefer the use of functional combinators as a solution to those problems (e.g. preference for ap(pair)(inc)). But I think what we're really at odds about are the goals of the operator in relation to functional programming. What I want, having done FP for a while and found it difficult to pair with non-functional code, is to help mainstream FP programming style & its techniques, to make them available to a wider audience. Looking at the ap(pair)(inc) example, the arrow function version is significantly easier for non-functional programmers to grasp, so heavily combinator-based code exacerbates the divide between functional writ large and mainstream JavaScript.

Part of what I expect to happen in a world with Hack pipe is that much of functional JavaScript, the part that "feels less strongly about the benefits of HOFP", can & will move its coding style to look more like mainstream JavaScript, and that, in turn, mainstream JavaScript can & will adopt some functional techniques in a more idiomatic fashion. As a result of this thread, I'd reframe my argument thus: it's not that there will be no divide between functional & mainstream JavaScript, but that the dividing line will shift to incorporate more functional techniques in mainstream JavaScript and thus contribute to the mainstreaming of functional programming in JavaScript more generally. That dividing line will shift into the community from which you're speaking, causing a divide between "mainstream functional" and "HOFP", the latter of which will be marginalized further.

This is what I mean by a difference in goals. I, personally, want to see functional programming and its techniques become more widespread. Insofar as HOFP has its own distinct idioms (ap(pair)(inc)), it is prevented from becoming more mainstream. I see Hack pipe as a step towards that goal because, based on my now-modified argument, it moves the dividing line between mainstream & functional JavaScript deeper into functional JavaScript land.

By contrast, my read is that enabling your current idioms more fluently is your goal for the F# operator. It is thus a feature of F# pipe that said dividing line doesn't shift because it solves a problem within, rather than between, the two communities. I might even go a step further: because F# pipe largely doesn't solve problems had by non-functional programmers, the introduction of F# pipe would actually exacerbate that divide, further siloing functional JavaScript from the mainstream.

Frankly, I think this is bad for JavaScript as a language, as a community, as an ecosystem. I think functional JavaScript has a lot to offer mainstream JavaScript, and the further down the HOFP/curry/combinator rabbit hole you go, the more difficult it becomes to then adapt those tools in non-functional code. I think the React example (embedding <App /> in a whole other app) is particularly interesting because of how much React adopts from, but doesn't completely buy into, functional JavaScript. There was a lot of consternation about React Hooks and their lack of functional purity & reliance on effectively a mutable global behind the scenes, but they're inspired by algebraic effects, very much an FP concept that was then adapted for JavaScript in a way that felt natural to the language & the framework. I would like to see more things like this, and it's my belief that we'll see more of that with Hack pipe.

So yeah, there will still be a divide, I was wrong about that, and again, I thank you for sharing that here. I will try to avoid minimizing the resulting pain, and hope that this provides some insight into why I think this is beneficial for mainstream JavaScript as well as some subset of functional JavaScript, even if your community is likely to be adversely affected.

@arendjr
Copy link

arendjr commented Oct 4, 2021

@mAAdhaTTah Thanks for pointing me to this comment. Your reasoning seems well-intentioned, but as I argued in #225, I do believe the proposal will face serious pushback from the mainstream JavaScript crowd as well. As such, you may even be risking creating a divide on both ends of the spectrum. I suppose it is possible there is a large-enough mainstream group in between the two sides to justify your course of action, but as someone who has never seen a Hack proponent "in the wild"... I'm still skeptical.


@andezzat
Copy link

andezzat commented Nov 18, 2021

Can we just get a pipe, of any kind, tyvm :)
source: a coder who's written about 134915 implementations of pipe & asyncPipe and I'm sick of it!
