
All the Maths #1

Open · mikeal opened this issue Aug 8, 2014 · 63 comments

@mikeal
Member

mikeal commented Aug 8, 2014

From a few conversations with @mikolalysenko I learned that languages like R and libraries like scipy, which are known for amazing math, are just binding to the same old Fortran libraries (and in a few cases C). That said, I don't think a binding layer will work for JavaScript, especially if we want to use them in the browser, but I do think we could use emscripten to transpile them into JS, hopefully asm.js.

I got as far as writing most of a script to pull down calgo, which is laid out incredibly horribly, extract all the algorithms, and generate package.json files which we could eventually check in to git.

My attempts at getting emscripten to work ended at getting dragonegg to find the right llvm :(

Anyway, my instinct is that the best strategy is to:

  • Pull down all these ancient libraries into a repo and check them in to git (they aren't in any source control as is).
  • Write scripts which can automate creating a package.json and individual npm package for each algorithm.
  • Write benchmarks to get the VM engineers competing for performance, hopefully ending in superior performance to the original Fortran libraries.

Thoughts?

I know that @groundwater is interested in this as I imagine @substack and @maxogden are as well. Additionally, I saw @rwaldron throw out some links on twitter which I think were related to crazy math in JS.

@mikeal
Member Author

mikeal commented Aug 8, 2014

Oh, forgot to link to the other two big algorithm collections: lapack and arpack.

@bnoordhuis
Member

I'm unclear about the goal. Is it to provide BLAS and/or LAPACK/LINPACK bindings for people to use in their applications? Encourage the V8 people to work on numerical performance? Both?

There's a strawman for native SIMD support and I believe Intel is committed to bringing that to V8.

(Spidermonkey nightly already implements the strawman but I don't think it has made its way into a release yet. Or if it did, it's probably behind a flag.)

There's also River Trail (strawman), another Intel effort that aims to bring parallel numerical computing to JS.

Mozilla calls it PJS and I believe they have declared an intent to implement it but I don't know what the status is. This meta-bug is the tracking bug, if you're interested.

@mikeal
Member Author

mikeal commented Aug 11, 2014

@bnoordhuis I think all the enhancements that Mozilla and Intel have been working on will be great and I'm looking forward to them, but R and scipy don't have them either, and they are seeing adoption in an area that JS isn't at the moment. The only thing they have that we don't is access to these ancient libraries. If we were able to transpile them we might see some of that adoption; and if the results are slow, the VM engineers will likely make them faster, or move a little faster on shipping these other features and on improving emscripten (or these libraries directly) to use them.

@mikolalysenko

The obstacles to getting high performance numerical code working in JS are relatively clear:

  1. Lack of SIMD data types, which prevents using high performance CPU features
  2. Lack of value types or pass-by-value semantics, which severely restricts optimizations based on cache and memory bandwidth considerations
  3. Limited multithreading and shared memory support, which restricts the effective memory bandwidth available in JS
  4. No viable pathway for either porting or linking to legacy code bases

The first three of these issues are really language design problems, and not things that we are going to solve at the node level. It is up to the ES committee to get these real performance impacting issues fixed, and the best we can do is put pressure on them to make it a priority over syntactic twiddling.

Still, even if the ES committee never sorts these problems out, I would like to point out that at least with regard to the first 3 points JS is certainly no worse than Python, which suffers from exactly the same drawbacks. And yet, people use Python extensively for numerical computing, even though an equally strong case could be made that the semantics of the language do not make it suitable for the task.

Accepting the first 3 items as a given, we are left with what I believe is the real problem here:

The main blocking issue to getting numerical computing working in JavaScript is interoperability with C and Fortran code bases.

Interoperability is really a broader problem that affects many applications (for example crypto could benefit a lot here as well), but it is felt in numerical computing especially acutely due to the size, sophistication and fundamental importance of legacy numerical code.

In my dream of how things could work, I would love to see a more modular/npm-centric solution to this problem. Something where we could just npm install lapack and then require it and call things like DGESVD directly -- without having to install or set up a bunch of external native code, using apt or whatever. The resulting bundles ought to work both in node.js and in the browser, and while we are at it on native environments there could be a fallback to the original Fortran where possible.
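
Purely as a sketch of the ergonomics (the call signature below is made up, and this is not the API of the existing npm lapack module mentioned later in this thread):

// hypothetical: what an npm-installable LAPACK wrapper might feel like
var lapack = require('lapack');                  // assumes such a package exists

// 2x3 matrix in column-major order, as LAPACK routines expect
var A = new Float64Array([1, 4, 2, 5, 3, 6]);

// dgesvd is LAPACK's double-precision singular value decomposition
var svd = lapack.dgesvd({ m: 2, n: 3, a: A });   // invented friendly signature
console.log(svd.S);    // singular values
console.log(svd.U);    // left singular vectors
console.log(svd.VT);   // right singular vectors, transposed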

@trevnorris

I think ruling out native bindings to existing libraries sells Node's possibilities short, especially in terms of performance. TBH I don't care about making this browser compatible, because even with all the proposed optimizations (which will take a while to come out) it'll still be much slower than using native methods.

V8 has done some great work to make calling into C++ from JS really cheap. Here's an example that just sums an array of uint32_t.

// run.js
var runMe = require('./build/Release/addon').runMe;
var smalloc = require('smalloc');
var stypes = smalloc.Types;
var ITER = 1e5;
var SIZE = 1024;
var b = smalloc.alloc(SIZE, {}, stypes.Uint32);

for (var i = 0; i < SIZE; i++)
  b[i] = i;

function simpleSum(arr, len) {
  var ret = 0;
  for (var i = 0; i < len; i++)
    ret += arr[i];
  return ret;
}

var t = process.hrtime();
for (var i = 0; i < ITER; i++) {
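  // uncomment one of the two calls below to time it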
  //runMe(b);
  //simpleSum(b, SIZE);
}
t = process.hrtime(t);
console.log(((t[0] * 1e9 + t[1]) / ITER).toFixed(1) + ' ns/op');
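
// addon.cc (filename assumed) -- the C++ addon side of the benchmark above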
#include <v8.h>
#include <node.h>

using v8::FunctionCallbackInfo;
using v8::Handle;
using v8::Local;
using v8::Object;
using v8::Value;

void RunMe(const FunctionCallbackInfo<Value>& args) {
  Local<Object> obj = args[0].As<Object>();
  uint32_t* arr =
      static_cast<uint32_t*>(obj->GetIndexedPropertiesExternalArrayData());
  size_t len = obj->GetIndexedPropertiesExternalArrayDataLength();
  uint32_t ret = 0;

  for (size_t i = 0; i < len; i++)
    ret += arr[i];

  args.GetReturnValue().Set(ret);
}

void init(Handle<Object> exports) {
  NODE_SET_METHOD(exports, "runMe", RunMe);
}

NODE_MODULE(addon, init)

So, simpleSum() runs in 851 ns while runMe() runs in 90 ns.

Now, the math libraries themselves should be buildable without much pain, and, TBH, if computationally complex operations are being done on my server I'll care far more about saving the CPU cycles than about taking a minute to build a library.

@AndreasMadsen

As a data scientist in training (who uses Python for that), with a huge love for JavaScript, I can name three reasons why I doubt I will ever get to use JavaScript for actual numerical computing.

  • It is not possible to create a Uint8Array or Buffer with more than 1024^3 elements. If you think that is a huge number, just imagine that you have 1024 pictures with a 1024 x 1024 resolution. That is really not that many pictures, and not that high a resolution.
  • Even if that is solved, there is still the 64bit integer problem. After 4 * 1024^3 elements, accessing items beyond that index becomes unreliable. I have seen some suggestions on how to implement 64bit integers, but they all seem too obscure to use for simple indexing.
  • There is no operator overloading. For numerical computing to catch on there have to be people who want to use it, and those are typically people with a more mathematical background who expect A * B to mean a matrix multiplication. You can implement it with objects and functions, but it quickly becomes quite ugly to look at: mul(mul(inv(mul(t(X), X)), t(X)), y) (a numerically unstable solution to OLS). As you can see, the distance between the arguments and the operator quickly becomes too long, as opposed to np.inv(X.T * X) * X.T * y. I have seen method chaining suggested instead, but that is not extendable, and I think the mathematical syntax is lost, making operation ordering difficult to keep track of: X.T.mul(X).inv().mul(X.T).mul(y).
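
For concreteness, a toy sketch of that chained style with a hypothetical Matrix class (the inverse is hard-coded for 2x2 matrices and nothing here is numerically careful):

function Matrix(rows) { this.rows = rows; }

Object.defineProperty(Matrix.prototype, 'T', {   // transpose
  get: function () {
    var r = this.rows;
    return new Matrix(r[0].map(function (_, j) {
      return r.map(function (row) { return row[j]; });
    }));
  }
});

Matrix.prototype.mul = function (other) {        // matrix product
  var a = this.rows, b = other.rows;
  return new Matrix(a.map(function (row) {
    return b[0].map(function (_, j) {
      return row.reduce(function (s, v, k) { return s + v * b[k][j]; }, 0);
    });
  }));
};

Matrix.prototype.inv = function () {             // 2x2 only -- enough for this sketch
  var m = this.rows, d = m[0][0] * m[1][1] - m[0][1] * m[1][0];
  return new Matrix([[ m[1][1] / d, -m[0][1] / d],
                     [-m[1][0] / d,  m[0][0] / d]]);
};

var X = new Matrix([[1, 1], [1, 2], [1, 3]]);    // design matrix with intercept column
var y = new Matrix([[1], [2], [2]]);
var beta = X.T.mul(X).inv().mul(X.T).mul(y);     // the chained OLS expression above
console.log(beta.rows);                          // [[~0.667], [0.5]]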

@trevnorris

@AndreasMadsen

It is not possible to create a Uint8Array or Buffer with more than 1024^3 elements.

After 4 * 1024^3 elements, accessing items beyond that index becomes unreliable.

For technical clarity, it's actually 1024^3 - 1 bytes. That aside, it is easily possible to attach more than that to an object but not have it accessible by array index. Slightly more complicated, it is possible to access up to 2^52 bytes by numeric index but at a performance hit.
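
One way to picture that (a sketch only -- this is not the smalloc API, and the chunk size and names are made up): split a large logical array across several typed-array chunks and pay an index calculation on every access.

var CHUNK = 1 << 20;   // 1 MiB chunks to keep the demo cheap; in practice you
                       // would size this close to the per-array limit

function BigByteArray(length) {
  this.length = length;
  this.chunks = [];
  for (var off = 0; off < length; off += CHUNK)
    this.chunks.push(new Uint8Array(Math.min(CHUNK, length - off)));
}

BigByteArray.prototype.get = function (i) {
  return this.chunks[Math.floor(i / CHUNK)][i % CHUNK];
};

BigByteArray.prototype.set = function (i, v) {
  this.chunks[Math.floor(i / CHUNK)][i % CHUNK] = v;
};

var big = new BigByteArray(3 * CHUNK + 5);
big.set(2 * CHUNK + 7, 42);
console.log(big.get(2 * CHUNK + 7));  // 42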

There is no operator overloading

True, and this could be a challenge. Technically someone could create their own parser: instead of operating with native JS operators, you would pass in a string. Ugly, yes, but it could get the job done.
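
A toy sketch of that string idea, handling only a single binary operator between two named matrices (no precedence, no parentheses, and the helpers are made up), just to show the shape it could take:

function matmul(a, b) {   // plain nested-array matrix product, for the demo only
  return a.map(function (row) {
    return b[0].map(function (_, j) {
      return row.reduce(function (s, v, k) { return s + v * b[k][j]; }, 0);
    });
  });
}

var ops = { '*': matmul };

function evalExpr(src, env) {
  var m = /^\s*(\w+)\s*(\*)\s*(\w+)\s*$/.exec(src);
  if (!m) throw new Error('unsupported expression: ' + src);
  return ops[m[2]](env[m[1]], env[m[3]]);
}

var A = [[1, 2], [3, 4]];
var B = [[5, 6], [7, 8]];
console.log(evalExpr('A * B', { A: A, B: B }));  // [ [ 19, 22 ], [ 43, 50 ] ]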

One reason why I implemented the smalloc module for v0.12 was for scientific computing. It's easy to integrate into a little C++ and call existing C/C++ math libraries.

@Qard
Member

Qard commented Oct 9, 2014

I like the idea of extending the Buffer concept to implement a BigNum type, which could be further extended to implement things like Vector and Matrix types. If I recall correctly, it might even be possible to override operators at the C++ level to have the prettier syntax for working with those more complex types.

@trevnorris

@Qard

it might even be possible to override operators at the C++ level

Do you mean JS operators?

@Qard
Member

Qard commented Oct 9, 2014

Yeah, I thought I remembered seeing functions to define operator behaviour at the C++ level. I could be wrong. I've never tried it myself.

@trevnorris

@Qard Not that I remember. @bnoordhuis You know of some such black magic?

@bnoordhuis
Member

I don't think there's any such magic. About the only thing you can do in C++ that you can't do from JS is make objects callable as functions (like how RegExp used to be.)

@Qard
Member

Qard commented Oct 9, 2014

Indeed. You appear to be right. After some quick grepping of the source, I'm finding no such convenience. I must've been remembering some other VM. :(

@domenic

domenic commented Oct 10, 2014

It is not possible to create a Uint8Array or Buffer with more than 1024^3 elements. If you think that is a huge number, just imagine that you have 1024 pictures with a 1024 x 1024 resolution. That is really not that many pictures, and not that high a resolution.

In ES6 it is possible to create a typed array with up to 2^53 - 1 elements. If this is not implemented in V8 (or perhaps, not implemented in the version of V8 that ships with Node) then it is worth filing a bug to make sure it gets fixed.

Even if that is solved, there is still the 64bit integer problem. After 4 * 1024^3 elements, accessing items beyond that index becomes unreliable.

You should be able to access up to 2^53 - 1. Not quite 2^64 - 1, but certainly a lot more than 4 * 1024^3.

There is no operator overloading. For numerical computing to catch on there have to be people who want to use it.

This is where things get tougher, and require tricky ESfuture-track work. The current thought on all things in this design space is found in https://github.com/nikomatsakis/typed-objects-explainer/; operator overloading is the least-developed of these proposals, unfortunately. But the good news, at least, is that everyone on TC39 agrees this must be done. It's just a matter of finding the right people and time to design an operator overloading extension to the JavaScript language, and making sure it integrates with all the related topics like value types and typed objects and user-defined literals.

@domenic

domenic commented Oct 10, 2014

Lack of SIMD data types, which prevents using high performance CPU features

This is an accepted ESfuture-track proposal with good detail already, shipping in SpiderMonkey and with a patch open for V8 by Intel.

Lack of value types or pass-by-value semantics, which severely restricts optimizations based on cache and memory bandwidth considerations

(See above about value types)

Limited multithreading and shared memory support, which restricts the effective memory bandwidth available in JS

This gets tricky. We'd really like to avoid shared-memory multithreading and that whole concurrency model, with data races etc. But there is definitely room for improving the existing web worker and transferable infrastructure into something with more room for performance. I don't remember if this is exactly right, but I think at one point I was convinced you could allow a typed array (or similar) to be shared among multiple workers as long as only one of them had write access.
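
For what it's worth, the transferable part of that already exists; here is a sketch of the current worker + transferable flow (the worker filename is made up, and the buffer is moved rather than shared, so only one side owns it at a time):

// sum-worker.js (hypothetical worker script)
self.onmessage = function (e) {
  var view = new Float64Array(e.data);
  var sum = 0;
  for (var i = 0; i < view.length; i++) sum += view[i];
  self.postMessage({ sum: sum, buf: e.data }, [e.data]);  // hand the buffer back
};

// main page
var worker = new Worker('sum-worker.js');
var buf = new ArrayBuffer(8 * 1e6);            // one million doubles
var view = new Float64Array(buf);
for (var i = 0; i < view.length; i++) view[i] = i;

worker.onmessage = function (e) {
  console.log('sum =', e.data.sum);
  buf = e.data.buf;                            // ownership returns to this side
};
worker.postMessage(buf, [buf]);                // transfer: a zero-copy move
// buf.byteLength is 0 here until the worker transfers it back.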

No viable pathway for either porting or linking to legacy code bases

Emscripten seems perfect for this?

@mikeal
Member Author

mikeal commented Oct 10, 2014

Emscripten seems perfect for this?

I tried so hard to get emscripten running to port ancient FORTRAN code but failed to get it running on my Mac :(

@MylesBorins

@mikeal I'd love to try hacking on this at oakland JS one week.

Something to do proper vector math with overloaded operators would be swell. Perhaps a browserify transform can take care of the sugar for that.

@mikeal
Member Author

mikeal commented Oct 26, 2014

@thealphanerd that would be awesome :)

@mhart

mhart commented Oct 30, 2014

Germane for the SIMD point: https://hacks.mozilla.org/2014/10/introducing-simd-js/

@smikes

smikes commented Nov 15, 2014

Also interested in this.

I would like to explore several alternatives, including

  1. Fortran -> LLVM -> emscripten
  2. a direct F90 -> javascript (+SIMD +OPENCL) converter/interpreter
  3. and (most boring but probably easiest) non-browser support via pre-packaged binaries and C api

Also been meaning to explore Jupyter (aka IPython notebooks) and generally making JS more sci/math friendly.

@mikeal
Member Author

mikeal commented Nov 15, 2014

@smikes great ideas Sam but I'm curious why you think it would be easiest to do C bindings. I'm told emscripten actually can convert the Fortran code but I don't know of anyone that has written a v8 binding to a fortran library :)

@smikes

smikes commented Nov 15, 2014

Well, for starters, LAPACK already ships a set of C headers: http://www.netlib.org/lapack/#_standard_c_language_apis_for_lapack

And ARPACK has a C++ api, http://www.ime.unicamp.br/~chico/arpack++/

And when I was writing quantum chemistry codes (which was very long ago) we generally found it easy to call a Fortran function from C. So if the fortran library can be compiled to a native binary, then we can write C/C++ code that bridges JS -> C++ -> F90

@gaborcsardi

I think you can forget about ARPACK; it is not important. For example, R did not have it either until very recently, and even now only as a 3rd-party package. ARPACK is also outdated and unmaintained.

LAPACK, however, is definitely needed, but it has C versions as well (translated from Fortran).

@mikeal
Member Author

mikeal commented Nov 15, 2014

Does anyone know why R uses the Fortran LAPACK instead of the C version? I can't imagine it was easier for them to bind to.

@gaborcsardi

Tips:

  • C LAPACK is kind of hacky; it is created via f2c, which is not actively maintained. You also need to postprocess the generated C source.
  • Not all LAPACK versions are translated to C AFAIK, so they would need to do this themselves. Not hard, but also not very robust.
  • LAPACK comes with header files for C, so once compiled, it is easy to call it from C.
  • They want to be able to link to various LAPACK implementations, e.g. MKL, and these are in Fortran, anyway.
  • R is compiled by GCC/gfortran or clang + gfortran on all supported platforms (well, maybe Sun's compilers are supported, but who cares about Solaris, anyway). In particular, on Windows they use MinGW, not Visual Studio. So having to compile Fortran code is not a problem. (VS does not have a Fortran compiler.)

In the library I develop, we use (part of) LAPACK, and on Windows we use the f2c-translated (and postprocessed) sources, for ARPACK as well, actually. This is to allow people to build with Visual Studio.

@gaborcsardi

I just mentioned the C LAPACK because it is an option if emscripten or your tool of choice does not support Fortran.

@smikes

smikes commented Nov 16, 2014

I am working on building a docker image that will contain

  • gcc (includes gfortran)
  • llvm
  • dragonegg
  • emscripten

My idea is that if we have a known good Fortran -> JS toolchain installed somewhere, then we can clone it and experiment with it. Docker is suitable for this because we don't care about native code when the final compilation target is (inherently cross-platform) javascript.

@smikes

smikes commented Nov 16, 2014

Started a repository for the Docker image project - https://github.com/smikes/femscripten -- Dockerfile checked in and some sample code.

Docker image upload in progress as of now (Sun 16 Nov 2014 17:38:38 UTC) to docker https://registry.hub.docker.com/u/smikes/femscripten/

@smikes

smikes commented Nov 16, 2014

The docker image for FORTRAN->JS is now working and I have tested with a subroutine from LAPACK -- a very simple one, to be sure, but in principle it should work. Come over to https://github.com/smikes/femscripten to check it out.

Docker images are not yet available for download but you can build your own with the Dockerfile provided in the github repo.

@Albert-IV
Member

@smikes That's awesome!

@smikes

smikes commented Nov 19, 2014

Thanks! I was not aware of that. That project even wound up with R-in-the-browser running on an iPad (see picture).

That was a proof-of-concept; the input interface was via dialog boxes and there was (afaict) no support for graphical output. I understand R already has some JS-generating output tools (eg, canvas - http://rforge.net/canvas/ ), so a moderately useful R-in-browser should:

  1. support code editing
  2. have an interaction console with editing/history
  3. generate and display graphics

Something a lot like an IPython notebook, but fully browser-based (or is it Jupyter now?).

@gaborcsardi

As I see it, there are even fundamental problems with it. The most important one is no dlopen() support, so no R packages can be loaded. Almost everything is in packages: statistics, LAPACK (which is also loaded dynamically), regular expressions, graphics devices, plotting, etc.

The things you wrote are kind of easy after that. The graphics display is maybe more challenging, but R has SVG output, and graphics devices can be implemented (more or less) as plugins by implementing very few graphics primitives, so it should not be hard.

@gaborcsardi

Just to comment on the main thread as well: a lot of numerical libraries are already available in JS. Some links:
https://www.npmjs.org/package/lapack
https://www.npmjs.org/package/gsl
https://www.npmjs.org/package/armadillo
https://www.npmjs.org/package/eigenjs
https://www.npmjs.org/package/rstats

Some of these are only partially implemented, and some are not pure JS but require C/C++ code, I guess. I also don't know whether they are robust, or even just usable.

@smikes

smikes commented Nov 19, 2014

The only one of those I've checked out is lapack, which is incomplete.

It implements a foreign-function interface to a handful of functions:

 sgeqrf_ dgeqrf_ sorgqr_ sgesvd_ sgetrf_ dgetrf_ sgesv_

But it's true, there's a lot of sci/math javascript out there that's not well publicized. (For example, there's a pure-JS in-browser 3d molecule viewer widget.)

Maybe the node-forward/discussions wiki would be a good place to keep track of these? I can't commit to maintaining that list, though, so I'm not going to start it. ;-)

@mikolalysenko

Whoa! Just checked back in here and awesome progress @smikes !!!

I think the next big step is to get the emscripten output into something a bit more manageable/modular. (Hopefully so it is possible to load up multiple libraries at the same time, installed via CommonJS).

In an ideal world, these libraries would be installable via npm and expose a CommonJS interface. By default in a node environment they would expose a wrapper over the native lapack for your system, but if it is not present or linking fails, then they would have a pure JS fallback (which would also work in a browser).
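
The loading side of that fallback could be the usual try/require pattern (paths and module names below are hypothetical):

// index.js of a hypothetical lapack package
var lapack;
try {
  // native binding against the system LAPACK, if the build succeeded
  lapack = require('./build/Release/lapack_native');
} catch (e) {
  // pure-JS (e.g. emscripten) fallback; also what a browser bundle would use
  lapack = require('./lapack_asm.js');
}
module.exports = lapack;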

How large are the resulting code objects from femscripten out of curiosity?

@smikes

smikes commented Nov 19, 2014

I think the next big step is to get the emscripten output into something a bit more manageable/modular

I agree that this would be awesome. We are still pretty far from that, though.

How large are the resulting code objects from femscripten out of curiosity?

Huge, enormous -- 350k for unoptimized, unminified, down to 150k for -O3 and minified.

root@ab100f610d45:/mnt/test/examples/lapack# ls -lSa lapack*js
-rw-r--r-- 1 1000 staff 363518 Nov 19 08:58 lapack-unoptimized.js
-rw-r--r-- 1 1000 staff 212924 Nov 19  2014 lapack-unoptimized-min.js
-rw-r--r-- 1 1000 staff 359819 Nov 19 08:58 lapack-O1.js
-rw-r--r-- 1 1000 staff 210265 Nov 19  2014 lapack-O1-min.js
-rw-r--r-- 1 1000 staff 160275 Nov 19 08:58 lapack-O2.js
-rw-r--r-- 1 1000 staff 159219 Nov 19  2014 lapack-O2-min.js
-rw-r--r-- 1 1000 staff 159085 Nov 19 08:59 lapack-O3.js
-rw-r--r-- 1 1000 staff 158035 Nov 19  2014 lapack-O3-min.js

However, most of that data is emscripten overhead for setting up the virtual processor and memory and all that. The actual idamax function is a lot smaller (extracted from 'unoptimized') -- 5k unminified, <4k minified. ( https://gist.github.com/smikes/a016acb988e2832fa2b6 )

But even that's not an algorithm suitable for calling from plain javascript; it still presumes the existence of the emscripten virtual machine.

What I'd like to do is get a higher-level interpretation of the fortran source -- a representation of the AST perhaps -- and try generating javascript from that. Even the LLVM bitcodes are too low level. (And I'd prefer not to have do the translation manually.)

gfortran provides the option -fdump-fortran-original to dump a parse tree of the function. I will see if I can get anywhere with that. ;-)

@mikeal
Member Author

mikeal commented Nov 20, 2014

150K isn't that big, we're just more sensitive to this kind of thing in the JS community :)

@darrenderidder

Great initiative everyone, interested to see where this goes. Just my .02 but a quick look shows that lapack builds produce static libs, so I'm not clear what the advantages of a fortran -> c -> emscripten transpile strategy would be. I'd have assumed userland modules wrapping lapacks native C api would be the way to integrate it. Because JS is constrained to IEEE 754 double precision floating point numbers with well-known implications, I'd also assume some number type abstraction (BigDecimal.js etc) would be needed, with functions replacing the typical arithmetic operators.

@smikes

smikes commented Nov 21, 2014

@darrenderidder

There are several related but ultimately different goals which I'd like to support:

Making FORTRAN libraries available to node programs on the server

This is actually my area of greatest interest. Right now one thing that's painful with node is having the appropriate toolchain installed to build binary apps (python, gyp, appropriate compiler).

Rather than adding to that for FORTRAN codes, I'd like to see if I can set up a docker image that provides a FORTRAN-to-native cross-compiler, and make that available as a node/docker package. It seems kind of crazy that it would be simpler to provide people with a whole virtual machine than to get them to install a FORTRAN compiler, but I guess that's life in the 21st century. ;-)

Running formerly FORTRAN-only or otherwise native-only code in the browser

This is where emscripten is useful; building R or Octave or things like that and running them in the browser.

Making javascript a friendly environment for new math/science code development

I would like to see LAPACK algorithms translated into idiomatic javascript, ideally without a lot of manual intervention. I would like to make it easy to take advantage of OpenCL and SIMD efforts.

The lack of operator overloading and intrinsic bignum, bigdecimal, and complex types is a problem here. Maybe a macro system such as sweet.js could be used to help with this.

@tkelman

tkelman commented Dec 3, 2014

Making FORTRAN libraries available to node programs on the server

Write FFI bindings. If you're using gfortran, or Intel's compiler on *nix, then it's fairly simple to call a Fortran library through the C ABI.
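
A sketch of that, assuming the node-ffi and ref packages and a gfortran-built system libblas (so the symbol is ddot_, with a trailing underscore, and every argument is passed by reference). The library name and setup here are assumptions, not a tested recipe:

var ffi = require('ffi');
var ref = require('ref');

var intPtr = ref.refType(ref.types.int);
var doublePtr = ref.refType(ref.types.double);

var blas = ffi.Library('libblas', {
  // DDOT(N, X, INCX, Y, INCY): dot product of two double-precision vectors
  'ddot_': ['double', [intPtr, doublePtr, intPtr, doublePtr, intPtr]]
});

function doubles(arr) {                // pack a JS array into a C double[] buffer
  var buf = new Buffer(arr.length * 8);
  for (var i = 0; i < arr.length; i++) buf.writeDoubleLE(arr[i], i * 8);
  return buf;
}

var n = ref.alloc('int', 3);
var inc = ref.alloc('int', 1);
console.log(blas.ddot_(n, doubles([1, 2, 3]), inc, doubles([4, 5, 6]), inc));  // 32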

Running formerly FORTRAN-only or otherwise native-only code in the browser

Doing Fortran-to-Emscripten translation with Dragonegg is risky from a maintenance perspective. While you may be able to get things to work with gcc 4.6 and llvm 3.3 (the next thing you'll need to do for non-trivial Fortran code is Emscripten-compile the libgfortran runtime library), it's unlikely that this will continue to work in the future for newer versions of gcc or llvm. See http://lists.cs.uiuc.edu/pipermail/llvmdev/2014-July/074784.html for example, Dragonegg hasn't been a very high priority for LLVM developers since clang matured enough to stand on its own. [edit: Dragonegg is now officially unmaintained and dead http://reviews.llvm.org/rL236077]

I personally don't see the point of trying to do numerically intensive scientific computing in the browser when things like http://cloud.sagemath.com, http://tmpnb.org, and www.juliabox.org exist. And just so everyone's clear, BLAS and LAPACK are primarily API specifications - the reference implementations from Netlib are rarely used for serious work. Anyone who is remotely performance sensitive uses an optimized implementation such as OpenBLAS or Intel's MKL, which use quite a bit of CPU-family-dependent SIMD assembly, cache size tuning, etc.

Making javascript a friendly environment for new math/science code development

There are a few people who disagree with me and are working on this, you might want to have a look at https://github.com/amd/furious.js and possibly contact the author for his input here.

As I see it there are language problems which you're going to have a hard time changing any time soon, and a lack of appropriate libraries which takes a lot of effort from people who really know what they're doing to overcome. I do a lot of scientific computing and numerically intensive work, and Atwood's law notwithstanding I don't meet a lot of people in the field who ever say "I wish I could write this in javascript."

Javascript (or specifically asm.js) as a compilation target for the browser is interesting though, regardless of the maintenance difficulties in getting and keeping an LLVM backend for Fortran working robustly. There are a few people in the Julia community who've looked at using Emscripten as a backend target for Julia code, I'd have to go searching to check the latest status on that [edit: https://github.com/JuliaLang/julia/issues/9430]. And others have looked at embedding Julia in Node for server-side use to make it easier to call one language from the other. I think we have some more work to get ourselves using the same version of libuv (instead of our fork) for this to work more smoothly.

@smikes

smikes commented Dec 3, 2014

Moving from the general case to the specific, there are some things currently written in C++ and FORTRAN that I would like to try running in the browser. It may be that a browser-only implementation is no better than round-tripping to the server, but that's a question that I can only answer by messing around.

Specifically the codes that I want to run include

  1. inchi, a C / C++ library for converting a molecule representation between a connectivity graph to a unique name
  2. openMM, a C++ toolkit for molecular dynamics ( https://github.com/pandegroup/openmm )
  3. MMFF, a standard molecular dynamics force field, written in FORTRAN ( https://en.wikipedia.org/wiki/Merck_Molecular_Force_Field )

@tkelman

tkelman commented Dec 3, 2014

For C and C++ you should be fine with clang to emscripten. If the Fortran code you're interested in is all Fortran 77 and self-contained, you can try f2c instead of gfortran and dragonegg, which might be simpler. That won't get you to "on par with R and scipy," but it might work for your specific use case.

@stephenhandley

On the topic of operator overloading in JS: http://www.2ality.com/2011/12/fake-operator-overloading.html

@eush77

eush77 commented Dec 9, 2014

IMO you don't need operator overloading in JavaScript. It should be in a different language. Consider a subset of MatLab which could compile to JavaScript and have first-class matrix operations, slices or whatever.

Operator overloading is rarely useful for regular use cases, and bringing more complexity to a single language because of some edge cases doesn't make any sense.

@madbence

madbence commented Dec 9, 2014

I think the language needs infix notation, not operator overloading.

a + b     // the + symbol means addition... or maybe something completely different?
a `add` b // equivalent to add(a, b)

@derekm

derekm commented Dec 11, 2014

@mikeal said:

but R and scipy don't have them and are seeing adoption in one area that JS isn't at the moment

R does generate SSE/SIMD instructions depending on the implementation of specific libraries or routines. But, primarily, R achieves parallel vectorization by means of CPU multiprocessing, and does so inherently any time R's native array processing syntax is used. Python also has a threading model that nicely supports non-I/O bound numerical calculations.

JS will need a decent threading model to support parallel vectorization at high- and low-levels.

Maybe some WebWorker + messaging-for-shared-state high-level abstractions could be provided for achieving the heavy lifting of easy parallelism that is necessary for numerical computation.

@gaborcsardi

@derekm

R does generate SSE/SIMD instructions depending on the implementation of specific libraries or routines. But, primarily, R achieves parallel vectorization by means of CPU multiprocessing, and does so inherently any time R's native array processing syntax is used.

I am not sure what exactly you mean here, but to me all of R's array processing seems very sequential. See e.g. double binary operators here: https://github.com/wch/r-source/blob/170a2a4a030e84a2437aee1a8d15c310a3b22527/src/main/arithmetic.c#L904 and the macros here: https://github.com/wch/r-source/blob/fbf5cdf29d923395b537a9893f46af1aa75e38f3/src/include/R_ext/Itermacros.h#L32

Am I missing something? How is this inherently parallel?

@tkelman

tkelman commented Dec 12, 2014

@eush77

Operator overloading is rarely useful for regular use cases, and bringing more complexity to a single language because of some edge cases doesn't make any sense.

Overloading is very useful for numerical work. It's common to want to define a custom type that behaves like a number syntactically but with special behavior - arbitrary-precision arithmetic, polynomials, dual numbers for automatic differentiation, probability distributions, placeholder variables for modeling optimization problems, etc. But it's not very pleasant to do when overloading is tacked-on to a language that is otherwise not designed for it. Using JavaScript as a compilation target for a better-suited language seems like a better approach when it comes to numerical computing in the browser. It would likely save some work to pick an LLVM-based language to start from - Julia, Rust, or maybe even Swift would be reasonable choices. I've heard GHCJS works very well which would be another option, somewhat lacking in the array libraries or ease-of-use categories though.

@derekm

Python also has a threading model that nicely supports non-I/O bound numerical calculations.

Huh? Python's threading model for numerical calculations is "do threading (and all of the actual work) in C."

@gaborcsardi

all of R's array processing seems very sequential

Any time R calls out to BLAS, you get multithreading (and SIMD) by using OpenBLAS or MKL. You can run into issues with excessive fork-join, but I understand packages like plyr are designed to decompose problems at a higher level to mitigate this.

@gaborcsardi

@tkelman:

Any time R calls out to BLAS, you get multithreading (and SIMD) by using OpenBLAS or MKL.

Right. But R does not use BLAS for array operations. AFAIK, these are the only operations it uses BLAS/LAPACK for: https://github.com/wch/r-source/blob/d75f39d532819ccc8251f93b8ab10d5b83aac89a/src/main/names.c#L926 (the do_lapack stuff). This is SVD, a linear solver, plus some decompositions, and nothing else.

but I understand packages like plyr are designed to decompose problems at a higher level to mitigate this.

Yes, some packages work around sequential R, but you need to do extra work to use them. It's not that you just load the package and then array operations are parallel.

IMO there is nothing inherently parallel in R or its packages. (One could actually override the default array operations to make them SIMD parallel, but I don't know of any package that does this. It is too much work, I guess.)

@tkelman

tkelman commented Dec 12, 2014

But R does not use BLAS for array operations. AFAIK, these are the only operations it uses BLAS/LAPACK for

Ah, good to know, you know much more about the internals of R than I do. I'm a little surprised to not see gemv or gemm (matrix-vector or matrix-matrix multiplication, for those not familiar with the Fortran acronyms http://www.netlib.org/blas/blasqr.pdf) there.

One could actually override the default array operations to make them SIMD parallel, but I don't know of any package that does this.

Some packages use Rcpp to write extensions in C++ where you can do whatever you want in your own code, right? But maybe it's considered bad form to have your R package substantially change default behaviors.

@gaborcsardi

Ah, good to know, you know much more about the internals of R than I do. I'm a little surprised to not see gemv or gemm (matrix-vector or matrix-matrix multiplication,

You are actually right; the ones above are only those called directly from R code. I missed a couple more; they are here:

~/works/r-source/src/main (trunk)$ grep F77_CALL *
array.c:        F77_CALL(dgemm)(transa, transb, &nrx, &ncy, &ncx, &one,
array.c:    F77_CALL(zgemm)(transa, transb, &nrx, &ncy, &ncx, &one,
array.c:    F77_CALL(dsyrk)(uplo, trans, &nc, &nr, &one, x, &nr, &zero, z, &nc);
array.c:    F77_CALL(dgemm)(transa, transb, &ncx, &ncy, &nrx, &one,
array.c:    F77_CALL(zgemm)(transa, transb, &ncx, &ncy, &nrx, &one,
array.c:    F77_CALL(dsyrk)(uplo, trans, &nr, &nc, &one, x, &nr, &zero, z, &nr);
array.c:    F77_CALL(dgemm)(transa, transb, &nrx, &nry, &ncx, &one,
array.c:    F77_CALL(zgemm)(transa, transb, &nrx, &nry, &ncx, &one,
array.c:    F77_CALL(dtrsm)("L", upper ? "U" : "L", trans ? "T" : "N", "N",
main.c:    F77_CALL(intpr)("dummy", &i, &i, &i);

So there is indeed gemm at least (if your matrices do not contain NA). Btw. since most of BLAS/LAPACK is included in R, you can call BLAS/LAPACK from packages as well, and some packages do that, e.g. there are calls from the base stats package, but this is again just a handful of functions.

So the situation is slightly better than what I said above, but not much, and array operations are certainly sequential.

Some packages use Rcpp to write extensions in C++ where you can do whatever you want in your own code, right?

Right. With some limits. (Btw. you don't need Rcpp for C/C++ code in R package, base R has an API for it. Rcpp just makes it easier.)

@derekm

derekm commented Dec 12, 2014

Okay... I only recently began prototyping stuff in R; it is hard not to notice the differences between vector operations and for loops, either in the literature or in your own code.

The literature is a little misleading in claiming that things in R have vectorizations, or are vector operations, or that R is a vectorized array-processing language. When they say vectorization they must mean syntax only, not operation or processing; their claims have nothing to do with vector processors.

Performance differences must come down solely to interpreted R on for-loop syntax vs. compiled C loops on the so-called "vector" syntax.


@gaborcsardi

Performance differences must come down solely to interpreted R on for-loop syntax vs. compiled C loops on the so-called "vector" syntax.

I believe so. Plus there is the potential that they will be parallel some time in the future.

@Planeshifter

Great to see people working to make JavaScript better for numerical computing!

I do not see any compelling argument why R, a language which I am quite familiar with as an aspiring statistician, would be better suited for numerical work than JavaScript. However, the biggest disadvantage of JS at the moment is the lack of appropriate, well maintained libraries.

For example, it would be great to have a well-implemented linear-algebra package. I do not know of any such package currently. There are a couple node modules binding to C or C++ libraries (such as the mentioned eigenjs or armadillo, which I have been working on), but they are all unfinished. And the pure JS implementations often lack a lot of functionality, e.g. matrix decompositions.

Concerning operator-overloading, one could use sweet.js macros to implement it, at the cost of having to compile one's code.

Besides node.js, it would be nice to have access to such libraries in the browser, too. For example, I recently wanted to create a d3 visualization plotting some simulation of logistic regression results. The ability to easily carry out the model fitting inside the browser would have been of great help in this case.

So porting some of the ancient math libraries to JS via emscripten sounds like a good idea to me.
So far, I have played around a bit with it and tried to port parts of the GNU Scientific Library (GSL) to JS via emscripten. While the process is trivial for functions accepting or returning standard types such as floats or doubles, the documentation is lacking on how to properly convert functions accepting or returning custom C structs (the Embind and WebIDL facilities are only tailored to C++ code).

For example, consider the following artificial example:

#include <math.h>

struct Vector3d{
  double x;
  double y;
  double z;
};

double length(struct Vector3d vec){
  double vec_sum = vec.x*vec.x + vec.y*vec.y + vec.z*vec.z;
  return sqrt(vec_sum);
}

struct Vector3d createVec(double x, double y, double z){
  struct Vector3d ret;
  ret.x = x;
  ret.y = y;
  ret.z = z;
  return ret;
}

I am very new to emscripten, so I have no idea how I could port such code so as to make it callable from JS, in the sense that I want to end up with a constructor function that can create 3d vectors and a length function to calculate their length.

Has anyone figured out how to do this?

@smikes

smikes commented Dec 15, 2014

From the very small amount of time I've spent playing around with emscripten, it seems that it will compile a complete program + runtime environment into a javascript "program" that can be run in a browser. It does not have a facility for separately compiling one C/C++ function idiomatically into javascript.

One of my goals with femscripten (see above) is to make it possible (in the special case of reasonably pure functions) to produce javascript functions from C/C++/FORTRAN source, without having to link in the entire runtime+memory model that emscripten uses in order to emulate the complete program environment.

@Planeshifter

The nice thing is that emscripten does provide facilities to expose functions directly to JS. I created a git repo with the gsl code and my shell script which exports a large number of functions for use in JavaScript. Here is the link:
https://github.com/Planeshifter/gsl-js
It is also immediately possible to require the created *.js file in node, as in var gsl = require("./output.js"). This will give you access to all the exported gsl functions listed in https://github.com/Planeshifter/gsl-js/blob/master/exported_functions

I have not investigated the performance hit incurred because of the overhead of having the emscripten runtime in the back, but the claim on their website is that it should be reasonably fast.
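
For anyone trying this, calling one of the exported functions would look roughly like the following, assuming the build keeps emscripten's cwrap helper on the module object and that this particular GSL function is in the exported list (check exported_functions for the exact symbols):

var gsl = require('./output.js');

// cwrap(name, returnType, argTypes) wraps a compiled C symbol in a plain JS function
var besselJ0 = gsl.cwrap('gsl_sf_bessel_J0', 'number', ['number']);

console.log(besselJ0(5.0));   // about -0.1776, the Bessel J0 function at x = 5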

@smikes

smikes commented Dec 15, 2014

Oh, awesome. I hadn't found that yet. I will have to try that with the FORTRAN frontend, it may save me a lot of work. Thanks for pointing it out.

@Planeshifter

Quick update: I asked on the emscripten repo about how to deal with structs, and meanwhile figured out how to export the C example I posted above via Embind: emscripten-core/emscripten#3083. So I do not think that there are any fundamental barriers preventing us from exposing entire C libraries to JS via emscripten.

@Planeshifter

I played around a bit and ended up transpiling all of the complex number functionality of the GSL library into a node package. The GitHub repo with some documentation is here: https://github.com/Planeshifter/gsl-complex-js. Installation via npm: npm install gsl-complex.
All functions are there, and using emscripten to export them ended up being straightforward (see the bottom of the complex.cpp file for the Embind bindings I had to create). I only had to create getter and setter functions for the real and imaginary part of a complex number in the original C code.
