migrate to class #252

Open · wants to merge 13 commits into master
Conversation

@sirenkovladd (Contributor)

migrate to class syntax (Chain, Request, Response)
Bun.js support
update usage of deprecated attribute

@Uzlopak (Contributor)

Uzlopak commented Aug 31, 2023

Filenames should be lowercase and kebab-cased.

Please provide a benchmark so that we can compare potential performance gains or losses. AFAIK we use light-my-request for fastify.inject, and some people use inject in their production code.

Is there a reason we cannot use prototypal notation?

@sirenkovladd (Contributor, Author)

Okay, I will change the filenames, but note that you currently also have configValidator.js and parseURL.js.

@sirenkovladd (Contributor, Author)

❯ npm run benchmark

> light-my-request@5.10.0 benchmark
> node test/benchmark.js

Request x 269,265 ops/sec ±3.62% (78 runs sampled)
Custom Request: 
Request With Cookies x 205,368 ops/sec ±2.22% (84 runs sampled)
Request With Cookies n payload x 169,358 ops/sec ±4.85% (77 runs sampled)
ParseUrl x 702,748 ops/sec ±9.31% (60 runs sampled)
ParseUrl and query x 99,315 ops/sec ±4.00% (75 runs sampled)
Fastest is: ParseUrl

❯ npm run benchmark       

> light-my-request@5.10.0 benchmark
> node test/benchmark.js

Request x 238,129 ops/sec ±5.85% (71 runs sampled)
Custom Request: 
Request With Cookies x 169,174 ops/sec ±4.09% (78 runs sampled)
Request With Cookies n payload x 189,661 ops/sec ±2.57% (82 runs sampled)
ParseUrl x 1,263,631 ops/sec ±1.82% (89 runs sampled)
ParseUrl and query x 99,157 ops/sec ±3.26% (81 runs sampled)
Fastest is: ParseUrl

@Uzlopak (Contributor)

Uzlopak commented Aug 31, 2023

Which of these benchmarks is default branch and which one is this PR?

@sirenkovladd (Contributor, Author)

is there a reason, we can not use prototypical notation?

Bun.js implements ServerResponse as a class, which does not allow calling http.ServerResponse.call(this, ...).
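A minimal, self-contained sketch of the incompatibility (illustrative names, not Bun internals): an ES class constructor throws when invoked without `new`, so the old prototypal child pattern cannot call it, while `class ... extends` works.

```javascript
// Parent implemented with class syntax (like Bun's ServerResponse):
class ClassParent {
  constructor () { this.ok = true }
}

// Old prototypal child pattern, as used by light-my-request on master:
function ProtoChild () {
  // Class constructors must be invoked with `new`; this call throws.
  ClassParent.call(this)
}
Object.setPrototypeOf(ProtoChild.prototype, ClassParent.prototype)

let failed = false
try {
  new ProtoChild()
} catch (err) {
  failed = err instanceof TypeError
}
console.log(failed) // true: the prototypal pattern cannot call a class constructor

// Migrating the child to `class ... extends` works:
class ClassChild extends ClassParent {}
console.log(new ClassChild().ok) // true
```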

@Uzlopak (Contributor)

Uzlopak commented Aug 31, 2023

Good argument. Please rename the files to lowercase kebab-case... ;)

@sirenkovladd (Contributor, Author)

Which of these benchmarks is default branch and which one is this PR?

Both are PR

Below is the default branch

❯ npm run benchmark

> light-my-request@5.10.0 benchmark
> node test/benchmark.js

Request x 286,519 ops/sec ±2.13% (88 runs sampled)
Custom Request x 19,788 ops/sec ±2.32% (84 runs sampled)
Request With Cookies x 204,017 ops/sec ±2.81% (79 runs sampled)
Request With Cookies n payload x 200,765 ops/sec ±2.78% (79 runs sampled)
ParseUrl x 1,001,307 ops/sec ±4.34% (78 runs sampled)
ParseUrl and query x 115,094 ops/sec ±3.48% (83 runs sampled)
Fastest is: ParseUrl

@Uzlopak (Contributor)

Uzlopak commented Aug 31, 2023

It seems that the Custom Request benchmark is broken. Can you have a look at it?

date: res.headers.date,
connection: 'keep-alive',
Contributor:

why are date and connection gone?

Contributor (Author):

Previously, these options were read from _header, which is not documented on ServerResponse (and Bun does not expose this attribute either).

Contributor (Author):

Now we read headers via .getHeader(name).

Contributor:

This is a breaking change and would probably delay approval of this PR. Can you implement it in a non-breaking way?

Contributor (Author):

I don't think so, because Node set them in a hidden way and Bun doesn't add these headers at all.

Contributor (Author):

I can wait, no problem. :)

Contributor:

It could also result in rejection of your PR, as Bun is incompatible with Node.js despite @Jarred-Sumner's claim that it is a drop-in replacement.

Please reconsider.

Contributor (Author):

So I think I can fix these headers in Bun, but you are also using undocumented options — don't you think that is not quite the right way? I think users shouldn't read or interact with them at all.

Contributor:

We are devs; we should always be able to agree on a reasonable solution.

But for an informed decision, it is not enough to write "undocumented options". Can you please elaborate on which options we are using that are undocumented?

On the other hand, we basically have an API contract to fulfill. If you remove headers, we will break expectations.

That is why I wrote that it would be good if you could implement it in a non-breaking way. We will soon have fastify v5, where we can introduce breaking changes, so we are in the perfect time window to discuss whether those headers should be added or not. If not, we can remove them in a few months when we release fastify v5.

But you should not be sad if removing the additional headers is not accepted.

Contributor (Author):

I totally agree with you.
When I said "undocumented options" I meant ServerResponse._header:
https://github.com/fastify/light-my-request/blob/v5.10.0/lib/response.js#L177C28-L177C35

https://nodejs.org/api/http.html#class-httpoutgoingmessage (no mention of _header)

@sirenkovladd (Contributor, Author)

PR

❯ npm run benchmark 

> light-my-request@5.10.0 benchmark
> node test/benchmark.js

Request x 237,109 ops/sec ±4.16% (75 runs sampled)
Custom Request x 14,722 ops/sec ±6.88% (76 runs sampled)
Request With Cookies x 169,585 ops/sec ±4.24% (79 runs sampled)
Request With Cookies n payload x 159,198 ops/sec ±4.37% (79 runs sampled)
ParseUrl x 1,109,862 ops/sec ±4.22% (80 runs sampled)
ParseUrl and query x 113,048 ops/sec ±3.30% (84 runs sampled)
Fastest is: ParseUrl

@github-actions

Node: 16
master:

Request x 99,483 ops/sec ±0.68% (92 runs sampled)
Custom Request x 16,117 ops/sec ±1.18% (89 runs sampled)
Request With Cookies x 85,190 ops/sec ±0.50% (93 runs sampled)
Request With Cookies n payload x 78,458 ops/sec ±0.38% (91 runs sampled)
ParseUrl x 144,678 ops/sec ±0.28% (98 runs sampled)
ParseUrl and query x 113,094 ops/sec ±0.16% (97 runs sampled)
Fastest is: ParseUrl

master:

Request x 96,878 ops/sec ±0.61% (91 runs sampled)
Custom Request x 15,772 ops/sec ±1.52% (88 runs sampled)
Request With Cookies x 85,016 ops/sec ±0.45% (95 runs sampled)
Request With Cookies n payload x 79,598 ops/sec ±0.47% (91 runs sampled)
ParseUrl x 143,810 ops/sec ±0.25% (95 runs sampled)
ParseUrl and query x 109,757 ops/sec ±0.86% (95 runs sampled)
Fastest is: ParseUrl

Node: 18
master:

Request x 217,305 ops/sec ±1.82% (86 runs sampled)
Custom Request x 13,217 ops/sec ±2.60% (83 runs sampled)
Request With Cookies x 171,212 ops/sec ±1.29% (86 runs sampled)
Request With Cookies n payload x 155,925 ops/sec ±2.72% (79 runs sampled)
ParseUrl x 876,693 ops/sec ±1.39% (85 runs sampled)
ParseUrl and query x 88,747 ops/sec ±1.85% (81 runs sampled)
Fastest is: ParseUrl

master:

Request x 224,909 ops/sec ±2.56% (78 runs sampled)
Custom Request x 12,123 ops/sec ±2.13% (82 runs sampled)
Request With Cookies x 161,811 ops/sec ±2.18% (83 runs sampled)
Request With Cookies n payload x 150,949 ops/sec ±2.08% (83 runs sampled)
ParseUrl x 834,933 ops/sec ±1.32% (85 runs sampled)
ParseUrl and query x 91,471 ops/sec ±1.97% (84 runs sampled)
Fastest is: ParseUrl

Node: 20
master:

Request x 256,783 ops/sec ±1.25% (88 runs sampled)
Custom Request x 16,408 ops/sec ±1.69% (87 runs sampled)
Request With Cookies x 189,893 ops/sec ±0.75% (89 runs sampled)
Request With Cookies n payload x 178,208 ops/sec ±0.83% (90 runs sampled)
ParseUrl x 998,247 ops/sec ±0.89% (91 runs sampled)
ParseUrl and query x 95,531 ops/sec ±0.66% (92 runs sampled)
Fastest is: ParseUrl

master:

Request x 246,866 ops/sec ±1.39% (91 runs sampled)
Custom Request x 16,323 ops/sec ±1.49% (84 runs sampled)
Request With Cookies x 189,552 ops/sec ±0.81% (89 runs sampled)
Request With Cookies n payload x 171,669 ops/sec ±0.78% (91 runs sampled)
ParseUrl x 968,401 ops/sec ±0.56% (91 runs sampled)
ParseUrl and query x 94,373 ops/sec ±0.43% (88 runs sampled)
Fastest is: ParseUrl

@Jarred-Sumner

I really appreciate the effort to make more libraries work in Bun, but IMO we should strongly default to fixing things in Bun instead of in libraries whenever possible. It's our fault when it doesn't work, not the library's. We can make some tweaks to keep using classes internally in Bun without breaking libraries. I also think it's important for changes like this to have no negative performance impact, and that's hard to tell from the benchmarks posted.

@mcollina (Member)

I'm somewhat skeptical of this change. Why does Bun need classes? All of Node.js is implemented using prototypical inheritance anyway.

@Jarred-Sumner

Why does Bun need classes?

  • we use class private properties to make it harder to leak internals to userland code; Symbols can work for this too, but not as well
  • when classes are implemented in native code, the JS object has to be constructed through the class constructor or brand checks won't work
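Both points above can be sketched in plain JavaScript (illustrative names, not Bun internals; the `#field in obj` brand check is ES2022, Node ≥ 16.9):

```javascript
// 1. Private fields keep internals out of userland reach:
class NativeResponse {
  #internalState = { sent: false } // invisible to userland code
  // Brand check (ES2022): true only for objects the class constructor built.
  static isResponse (obj) { return #internalState in obj }
  send () { this.#internalState.sent = true }
}

// 2. A prototypal "subclass" never runs the class constructor, so the
// private field is never installed and the brand check fails:
function FakeResponse () {}
Object.setPrototypeOf(FakeResponse.prototype, NativeResponse.prototype)

console.log(NativeResponse.isResponse(new NativeResponse())) // true
console.log(NativeResponse.isResponse(new FakeResponse()))   // false

// A real `extends` subclass calls super() and passes the brand check:
class RealResponse extends NativeResponse {}
console.log(NativeResponse.isResponse(new RealResponse()))   // true
```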

@sirenkovladd (Contributor, Author)

Why are you skeptical? Is there any reason to use prototypes instead of extending a class? Classes are easier to read and look healthier.
Also, as mentioned above, they allow the parent class to use private properties.

@mcollina (Member)

@Jarred-Sumner I wish we could do that for Node.js without breaking a good chunk of npm.

My 2 cents is that this is something you want to fix on the Bun side, because this definitely won't be the only module with this problem.

@sirenkovladd (Contributor, Author)

sirenkovladd commented Aug 31, 2023

Oh, of course.
This is just one of the reasons for this PR, not the only one; I would just like to improve this library by not using deprecated syntax.
Here is an issue I created a day ago: oven-sh/bun#4415

@mcollina (Member)

Prototypical inheritance has not been deprecated anywhere in the Node.js codebase.

@jsumners (Member)

Nor has it been "deprecated" from the language. It is the language. The class keyword is sugar.
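The "sugar" point can be made concrete with a short sketch (illustrative names): the two notations produce objects that dispatch methods through the prototype chain in the same way, with only small observable differences.

```javascript
// Prototypal notation:
function ProtoCounter (start) { this.count = start }
ProtoCounter.prototype.inc = function () { return ++this.count }

// Class notation — sugar over the same mechanism:
class ClassCounter {
  constructor (start) { this.count = start }
  inc () { return ++this.count }
}

const a = new ProtoCounter(0)
const b = new ClassCounter(0)
console.log(a.inc(), b.inc()) // 1 1

// Both dispatch `inc` through the prototype chain:
console.log(Object.getPrototypeOf(a).inc === ProtoCounter.prototype.inc) // true
console.log(Object.getPrototypeOf(b).inc === ClassCounter.prototype.inc) // true

// Main observable differences: class constructors require `new`,
// and class methods are non-enumerable.
console.log(Object.keys(ProtoCounter.prototype)) // ['inc']
console.log(Object.keys(ClassCounter.prototype)) // []
```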

@kibertoad (Member)

I would argue that classes are syntax sugar for a reason, though: they are easier to read and modify than prototype-inheritance-based code. If there is no perf degradation, or a negligible one, I would be +1 on this change, as maintainability is important.

Since light-my-request is not even a runtime dependency, nano-optimizations don't matter as much.

@sirenkovladd (Contributor, Author)

main

┌─────────┬──────────────────────────────────┬─────────────┬────────────────────┬───────────┬─────────┐
│ (index) │            Task Name             │   ops/sec   │ Average Time (ns)  │  Margin   │ Samples │
├─────────┼──────────────────────────────────┼─────────────┼────────────────────┼───────────┼─────────┤
│    0    │            'Request'             │  '329,450'  │ 3035.358920737361  │ '±1.85%'  │ 164726  │
│    1    │         'Custom Request'         │  '23,323'   │ 42874.42253575769  │ '±3.09%'  │  11662  │
│    2    │      'Request With Cookies'      │  '203,543'  │ 4912.965682345484  │ '±1.81%'  │ 101772  │
│    3    │ 'Request With Cookies n payload' │  '209,126'  │ 4781.787882428501  │ '±2.12%'  │ 104564  │
│    4    │            'ParseUrl'            │ '1,000,530' │ 999.4697091393407  │ '±0.50%'  │ 500266  │
│    5    │       'ParseUrl and query'       │  '115,917'  │ 8626.819576562057  │ '±0.45%'  │  57959  │
│    6    │     'read request body JSON'     │  '80,225'   │ 12464.847208492907 │ '±1.41%'  │  40113  │
│    7    │    'read request body buffer'    │  '118,589'  │ 8432.443780856425  │ '±1.58%'  │  59295  │
│    8    │   'read request body readable'   │  '59,789'   │ 16725.446681331126 │ '±3.17%'  │  29895  │
│    9    │       'Response write end'       │  '32,769'   │ 30515.789490012376 │ '±11.38%' │  16385  │
│   10    │     'Response writeHead end'     │  '31,237'   │ 32013.13887242967  │ '±24.60%' │  15619  │
│   11    │          'base inject'           │  '18,700'   │ 53474.86442841651  │ '±14.01%' │  9367   │
└─────────┴──────────────────────────────────┴─────────────┴────────────────────┴───────────┴─────────┘

pr

┌─────────┬──────────────────────────────────┬─────────────┬────────────────────┬───────────┬─────────┐
│ (index) │            Task Name             │   ops/sec   │ Average Time (ns)  │  Margin   │ Samples │
├─────────┼──────────────────────────────────┼─────────────┼────────────────────┼───────────┼─────────┤
│    0    │            'Request'             │  '288,357'  │ 3467.918693395486  │ '±1.80%'  │ 144179  │
│    1    │         'Custom Request'         │  '18,529'   │ 53968.52044884218  │ '±3.18%'  │  9265   │
│    2    │      'Request With Cookies'      │  '209,731'  │ 4768.001047761391  │ '±2.16%'  │ 104866  │
│    3    │ 'Request With Cookies n payload' │  '209,283'  │ 4778.204037601417  │ '±1.84%'  │ 104642  │
│    4    │            'ParseUrl'            │ '1,163,778' │ 859.2698052078658  │ '±0.46%'  │ 581890  │
│    5    │       'ParseUrl and query'       │  '125,576'  │  7963.26355244089  │ '±0.34%'  │  62789  │
│    6    │     'read request body JSON'     │  '82,015'   │ 12192.827821471206 │ '±1.48%'  │  41008  │
│    7    │    'read request body buffer'    │  '107,069'  │ 9339.767187469155  │ '±1.14%'  │  53535  │
│    8    │   'read request body readable'   │  '56,603'   │ 17666.900750931713 │ '±2.63%'  │  28302  │
│    9    │       'Response write end'       │  '42,223'   │ 23683.483707787367 │ '±9.79%'  │  21112  │
│   10    │     'Response writeHead end'     │  '45,347'   │ 22051.888650213063 │ '±10.48%' │  22674  │
│   11    │          'base inject'           │  '18,343'   │ 54516.203876144486 │ '±16.42%' │  9172   │
└─────────┴──────────────────────────────────┴─────────────┴────────────────────┴───────────┴─────────┘

@sirenkovladd (Contributor, Author)

Some asynchronous tests have become faster

@kibertoad (Member)

kibertoad commented Sep 13, 2023

@sirenkovladd Request and Custom Request seem slower, though. Is that consistent? Is it clear why it's happening?

@sirenkovladd (Contributor, Author)

Custom Request — I'm not sure, but I believe the constructor from the options was not called on master.
Request — the difference is 3035 ns vs. 3467 ns, which does not seem critical.

@kibertoad (Member)

@sirenkovladd Can you regenerate the benchmark after the fix?

@sirenkovladd (Contributor, Author)

This benchmark was already taken after the fix; I just hadn't pushed before writing the text.

@sirenkovladd (Contributor, Author)

sirenkovladd commented Sep 13, 2023

But if Custom Request performance is important, I can go back to the old code or keep only a single constructor call.

@kibertoad (Member)

If this were production runtime code, I would say that a 2-3% perf difference is significant. For test code, admittedly, less so. I'm neutral on this PR; I would say that class syntax is generally easier to maintain, but there is a slight perf decrease.

@Uzlopak (Contributor)

Uzlopak commented Sep 13, 2023

Yeah, a ~3% perf decrease is normal for Node when converting from prototype to class notation. That is also why transpiling TS classes down to ES5 makes them faster on Node. :D

For me it is a hard -1

@sirenkovladd (Contributor, Author)

I'm sure it's just benchmark inaccuracy.

I reran the benchmark with no changes:

┌─────────┬──────────────────────────────────┬─────────────┬────────────────────┬───────────┬─────────┐
│ (index) │            Task Name             │   ops/sec   │ Average Time (ns)  │  Margin   │ Samples │
├─────────┼──────────────────────────────────┼─────────────┼────────────────────┼───────────┼─────────┤
│    0    │            'Request'             │  '342,368'  │ 2920.8293503720124 │ '±1.77%'  │ 171185  │
│    1    │         'Custom Request'         │  '18,553'   │ 53896.89576610405  │ '±3.37%'  │  9277   │
│    2    │      'Request With Cookies'      │  '213,738'  │ 4678.606858860959  │ '±1.81%'  │ 106870  │
│    3    │ 'Request With Cookies n payload' │  '200,590'  │ 4985.282412429522  │ '±1.68%'  │ 100296  │
│    4    │            'ParseUrl'            │ '1,104,978' │ 904.9952496104453  │ '±0.44%'  │ 552490  │
│    5    │       'ParseUrl and query'       │  '120,311'  │ 8311.760144715565  │ '±0.36%'  │  60156  │
│    6    │     'read request body JSON'     │  '79,679'   │ 12550.312998783158 │ '±1.24%'  │  39840  │
│    7    │    'read request body buffer'    │  '102,721'  │  9735.03884363322  │ '±1.37%'  │  51361  │
│    8    │   'read request body readable'   │  '53,861'   │  18566.0905689239  │ '±3.02%'  │  26931  │
│    9    │       'Response write end'       │  '44,957'   │ 22243.16395936188  │ '±9.42%'  │  22479  │
│   10    │     'Response writeHead end'     │  '40,903'   │ 24447.907944297942 │ '±46.73%' │  22873  │
│   11    │          'base inject'           │  '19,501'   │ 51276.991620037865 │ '±18.75%' │  9751   │
└─────────┴──────────────────────────────────┴─────────────┴────────────────────┴───────────┴─────────┘

@sirenkovladd (Contributor, Author)

This is the PR branch.

@sirenkovladd (Contributor, Author)

I made the following change:

const suite = new Bench({
  time: 10000,
})

master

> node --max-old-space-size=4096 benchmark/benchmark.js
┌─────────┬──────────────────────────────────┬─────────────┬────────────────────┬───────────┬──────────┐
│ (index) │            Task Name             │   ops/sec   │ Average Time (ns)  │  Margin   │ Samples  │
├─────────┼──────────────────────────────────┼─────────────┼────────────────────┼───────────┼──────────┤
│    0    │            'Request'             │  '320,091'  │ 3124.108362733398  │ '±0.46%'  │ 3200914  │
│    1    │         'Custom Request'         │  '22,775'   │  43907.7389810478  │ '±0.80%'  │  227751  │
│    2    │      'Request With Cookies'      │  '222,849'  │ 4487.3431490487055 │ '±0.41%'  │ 2228491  │
│    3    │ 'Request With Cookies n payload' │  '202,403'  │ 4940.621643005308  │ '±0.39%'  │ 2024037  │
│    4    │            'ParseUrl'            │ '1,091,681' │ 916.0184890339996  │ '±0.06%'  │ 10916811 │
│    5    │       'ParseUrl and query'       │  '121,373'  │  8239.01830652675  │ '±0.10%'  │ 1213737  │
│    6    │     'read request body JSON'     │  '89,870'   │ 11127.18101895581  │ '±0.38%'  │  898701  │
│    7    │    'read request body buffer'    │  '110,225'  │ 9072.312025407547  │ '±0.42%'  │ 1102255  │
│    8    │   'read request body readable'   │  '57,066'   │ 17523.288659007623 │ '±0.78%'  │  570670  │
│    9    │       'Response write end'       │  '27,524'   │  36330.7235054785  │ '±18.16%' │  296958  │
│   10    │     'Response writeHead end'     │  '21,740'   │ 45996.87746989476  │ '±41.13%' │  217541  │
│   11    │          'base inject'           │  '17,163'   │ 58262.287273222195 │ '±14.28%' │  171638  │
└─────────┴──────────────────────────────────┴─────────────┴────────────────────┴───────────┴──────────┘

pr

> node --max-old-space-size=4096 benchmark/benchmark.js
┌─────────┬──────────────────────────────────┬─────────────┬────────────────────┬───────────┬──────────┐
│ (index) │            Task Name             │   ops/sec   │ Average Time (ns)  │  Margin   │ Samples  │
├─────────┼──────────────────────────────────┼─────────────┼────────────────────┼───────────┼──────────┤
│    0    │            'Request'             │  '353,072'  │ 2832.2771499784844 │ '±0.45%'  │ 3530728  │
│    1    │         'Custom Request'         │  '20,797'   │ 48081.80431725047  │ '±0.94%'  │  207979  │
│    2    │      'Request With Cookies'      │  '241,096'  │ 4147.717536681935  │ '±0.35%'  │ 2410965  │
│    3    │ 'Request With Cookies n payload' │  '214,692'  │ 4657.815505150625  │ '±0.42%'  │ 2146930  │
│    4    │            'ParseUrl'            │ '1,217,841' │ 821.1249368529127  │ '±0.06%'  │ 12178415 │
│    5    │       'ParseUrl and query'       │  '123,697'  │ 8084.252633591479  │ '±0.09%'  │ 1236973  │
│    6    │     'read request body JSON'     │  '89,383'   │ 11187.745497382934 │ '±0.33%'  │  893836  │
│    7    │    'read request body buffer'    │  '107,618'  │  9292.07767648609  │ '±0.35%'  │ 1076186  │
│    8    │   'read request body readable'   │  '65,674'   │ 15226.678770467786 │ '±0.73%'  │  656749  │
│    9    │       'Response write end'       │  '37,666'   │ 26548.970979704438 │ '±23.18%' │  376871  │
│   10    │     'Response writeHead end'     │  '27,325'   │ 36596.434051626464 │ '±46.23%' │  273251  │
│   11    │          'base inject'           │  '19,322'   │ 51753.245753345036 │ '±28.56%' │  193301  │
└─────────┴──────────────────────────────────┴─────────────┴────────────────────┴───────────┴──────────┘

@sirenkovladd (Contributor, Author)

That looks like good news to me.

@Uzlopak (Contributor)

Uzlopak commented Sep 13, 2023

And which Node version did you use? Why did you increase the max old space size?

@sirenkovladd (Contributor, Author)

node v20.6.0
I increased it because I got a heap out-of-memory error.

@sirenkovladd (Contributor, Author)

sirenkovladd commented Sep 13, 2023

So I can say that everything will be faster except Custom Request, but that can be corrected.

┌─────────┬───────────────────────────┬──────────┬────────────────────┬──────────┬─────────┐
│ (index) │         Task Name         │ ops/sec  │ Average Time (ns)  │  Margin  │ Samples │
├─────────┼───────────────────────────┼──────────┼────────────────────┼──────────┼─────────┤
│    0    │    'Custom Request PR'    │ '19,372' │ 51619.85320490767  │ '±0.71%' │ 290587  │
│    1    │ 'Custom Request PR light' │ '35,256' │ 28363.738704998384 │ '±0.89%' │ 528845  │
│    2    │   'Custom Request main'   │ '23,795' │ 42024.161631877156 │ '±0.78%' │ 356938  │
└─────────┴───────────────────────────┴──────────┴────────────────────┴──────────┴─────────┘

where "PR light" is:

function getCustomRequestLight (CustomRequest) {
  class _CustomLMRRequest {
    constructor (...opt) {
      // Run only the base Request constructor and copy its own
      // properties onto this instance (mirrors master's behavior).
      Object.assign(this, new Request(...opt))
    }
  }
  // Inherit from the user's CustomRequest...
  Object.setPrototypeOf(_CustomLMRRequest.prototype, CustomRequest.prototype)
  // ...while keeping the base Request prototype methods available.
  Object.getOwnPropertyNames(Request.prototype)
    .filter(prop => prop !== 'constructor')
    .forEach(prop => { _CustomLMRRequest.prototype[prop] = Request.prototype[prop] })
  return _CustomLMRRequest
}

The difference between PR and PR light:
PR — calls the Request constructor and the constructor from the options
PR light — calls only the Request constructor (like CustomRequest on master)

@sirenkovladd (Contributor, Author)

sirenkovladd commented Sep 13, 2023

I think it's a bug that CustomRequest doesn't call the constructor from options.Request, and I vote for the Custom Request PR variant.

@sirenkovladd (Contributor, Author)

@Uzlopak @kibertoad

@Uzlopak (Contributor) left a review comment:

fastify unit tests break with this PR

@mcollina (Member)

I feel that the risks of this PR are really not worth the reward. I don't think we should migrate to classes.

@Uzlopak (Contributor)

Uzlopak commented Oct 26, 2023

I think it has value, but it should be done with care.

@sirenkovladd (Contributor, Author)

I can create a new PR with smaller changes if that helps it merge faster.

@Uzlopak (Contributor)

Uzlopak commented Oct 26, 2023

I think so, but maybe wait until we have merged the fastify integration testing. Then we will see whether it breaks fastify.

@sirenkovladd (Contributor, Author)

I think my PR actually broke one test in fastify (I tested it): https://github.com/fastify/fastify/blob/main/test/childLoggerFactory.test.js#L61C1-L91

That test expects an error to be thrown when calling inject while also providing a callback function.

In this PR I changed that behavior: when you provide a callback, all errors are delivered to it, and the inject function does not throw anything to the environment.

With the Promise API, everything works fine.
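The behavioral difference can be sketched like this (illustrative code with a hypothetical `isValid` helper, not the actual light-my-request implementation):

```javascript
// Old behavior: validation errors throw synchronously even when a
// callback is supplied — this is what the fastify test relies on.
function injectOld (dispatch, opts, callback) {
  if (!isValid(opts)) throw new Error('invalid options') // escapes to caller
  callback(null, { ok: true })
}

// PR behavior: with a callback, every error is routed into the callback
// instead of being thrown.
function injectNew (dispatch, opts, callback) {
  if (typeof callback === 'function') {
    if (!isValid(opts)) return callback(new Error('invalid options'))
    return callback(null, { ok: true })
  }
  // The Promise path behaves the same in both versions.
  return isValid(opts)
    ? Promise.resolve({ ok: true })
    : Promise.reject(new Error('invalid options'))
}

// Hypothetical validator for the sketch: requires a string url.
function isValid (opts) { return Boolean(opts) && typeof opts.url === 'string' }

let threw = false
try { injectOld(null, {}, () => {}) } catch { threw = true }
console.log(threw) // true — error thrown despite the callback

let cbErr = null
injectNew(null, {}, (err) => { cbErr = err })
console.log(cbErr instanceof Error) // true — error delivered via the callback
```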

sirenkovladd mentioned this pull request on Oct 28, 2023.
@sirenkovladd (Contributor, Author)

I made a follow-up PR (same change, but with the doInject function slightly edited to fix CI): #271

6 participants