[Bug]: Network service crashed, restarting service. #31675

Closed
3 tasks done
DrNio13 opened this issue Nov 2, 2021 · 11 comments · Fixed by #33204
Assignees: nornagon
Labels: bug 🪲, has-repro-gist (Issue can be reproduced with code at https://gist.github.com/), platform/all, status/confirmed (A maintainer reproduced the bug or agreed with the feature)

Comments

DrNio13 commented Nov 2, 2021

Preflight Checklist

Electron Version

15.0.2

What operating system are you using?

Ubuntu

Operating System Version

Linux rrouwprlc0068 5.4.0-89-generic #100-Ubuntu SMP Fri Sep 24 14:50:10 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux, Ubuntu 20.04.3 LTS, GNOME Version 3.36.8, OS Type 64-bit

What arch are you using?

x64

Last Known Working Electron version

No response

Expected Behavior

The network service should not crash, and the error [15:1102/154337.143944:ERROR:network_service_instance_impl.cc(333)] Network service crashed, restarting service. should not appear in the logs.

Actual Behavior

Sporadically, when I run Electron inside a Docker container, I get the following error:

[15:1102/154337.143944:ERROR:network_service_instance_impl.cc(333)] Network service crashed, restarting service.

This causes partial rendering of the front-end application that is loaded with win.loadURL from localhost.

The issue seems very similar to https://bbs.archlinux.org/viewtopic.php?id=268123, but none of the suggestions there works consistently.

Also, when closing the Electron window, I get an error:

Error: ERR_FAILED (-2) loading 'https://localhost:{port}/{path_to}/index.html'
    at rejectAndCleanup (node:electron/js2c/browser_init:165:7486)
    at Object.stopLoadingListener (node:electron/js2c/browser_init:165:7861)
    at Object.emit (node:events:394:28) {
  errno: -2,
  code: 'ERR_FAILED',
  url: 'https://localhost:{port}/{path_to}/index.html'
}

This error is caught inside main.js when executing await win.loadURL(url).

Also, in the browser window console, there is an error:

ERROR Error: Uncaught (in promise): ChunkLoadError: Loading chunk 108 failed.
    (timeout: https://localhost:{port}/{path_to}/108.da4993913b4a27951976.js)
    ChunkLoadError: Loading chunk 108 failed.

Any ideas on the issue?

Testcase Gist URL

No response

Additional Information

Chromedriver v95.0.0

DrNio13 changed the title from "[Bug]:" to "[Bug]: Network service crashed, restarting service." Nov 2, 2021
zcbenz (Member) commented Nov 3, 2021

Do you see the same error if you run the Chromium browser in Docker?
https://download-chromium.appspot.com/?platform=Linux_x64&type=snapshots

DrNio13 (Author) commented Nov 3, 2021

Probably not. Could it be related to the SignalR/WebSocket connection?

The error occurs only in a certain situation: when Electron loads an Angular app with win.loadURL while the backend services are already running, the "Network service crashed" error occurs somewhere around the attempt to establish a SignalR connection.

  [2021-11-03 07:24:09.524Z][UI][info] OPTIONS request to https://localhost:5001/notify/negotiate?negotiateVersion=1
  [2021-11-03 07:24:09.528Z][UI][info] GET request to https://localhost:5001/api/foo/bar
  [2021-11-03 07:24:09.533Z][UI][info] localhost net::OK
  [2021-11-03 07:24:09.533Z][UI][info] localhost net::OK
  [15:1103/072409.750221:ERROR:network_service_instance_impl.cc(333)] Network service crashed, restarting service.

Browser logs

vendor.js:14942 ERROR Error: Uncaught (in promise): ChunkLoadError: Loading chunk foo_module_ts failed.
    (timeout: https://localhost:9990/app/src_app_shomescreen_module_ts.js)
    ChunkLoadError: Loading chunk src_app_homescreen_module_ts failed.
    (timeout: https://localhost:9990/app/src_app_homescreen_module_ts.js)
        at Object.__webpack_require__.f.j (runtime.js:262)
        at runtime.js:126
        at Array.reduce (<anonymous>)
        at Function.__webpack_require__.e (runtime.js:125)
        at loadChildren (main.js:44)
        at RouterConfigLoader.loadModuleFactory (vendor.js:47724)
        at RouterConfigLoader.load (vendor.js:47698)
        at MergeMapSubscriber.project (vendor.js:46830)
        at MergeMapSubscriber._tryNext (vendor.js:128343)
        at MergeMapSubscriber._next (vendor.js:128333)
        at resolvePromise (polyfills.js:1222)
        at resolvePromise (polyfills.js:1176)
        at polyfills.js:1288
        at ZoneDelegate.invokeTask (polyfills.js:415)
        at Object.onInvokeTask (vendor.js:37122)
        at ZoneDelegate.invokeTask (polyfills.js:414)
        at Zone.runTask (polyfills.js:187)
        at drainMicroTaskQueue (polyfills.js:591)
        at invokeTask (polyfills.js:500)
        at ZoneTask.invoke (polyfills.js:485)

DrNio13 (Author) commented Nov 3, 2021

I managed to find what triggers the issue, though I'm not sure about the root cause.

Here is part of the main.ts file, based on https://www.electronjs.org/docs/tutorial/quick-start:

import { app, BrowserWindow } from 'electron';
import * as path from 'path';
// log, config, and the onBeforeSendHeaders filter are defined elsewhere in the project.

async function createBrowserWindow() {
    const win = new BrowserWindow({
        width: 2000,
        height: 1000,
        webPreferences: {
            nodeIntegration: true,
            preload: path.join(__dirname, 'preload.js')
        }
    });

    win.webContents.session.webRequest.onBeforeSendHeaders(filter, (details, callback) => {
      // content here
    });

    win.webContents.session.webRequest.onHeadersReceived((details, callback) => {
        callback(details);
    });

    // win.webContents.session.setCertificateVerifyProc((request, callback) => {
    //     log.info('setCertificateVerifyProc hook', request.hostname, request.verificationResult);
    //     callback(-3);
    // });

    win.webContents.on('did-fail-load', (event, code, desc, url, isMainFrame) => {
        // content here
    });

    try {
        await win.loadURL(config.uiUrl);
    } catch (e) {
        log.error('Error loading the app', e);
    }
}

if (config.ignoreCertificateErrors) {
    app.commandLine.appendSwitch('ignore-certificate-errors', 'true');
    process.env["NODE_TLS_REJECT_UNAUTHORIZED"] = '0';
}

app.disableHardwareAcceleration();

app.whenReady().then(async () => {
    await manageCertificates();
    log.info(`App startup`);
    createBrowserWindow();
})

Apparently the issue disappears when commenting out the setCertificateVerifyProc hook. Any ideas why?

I also tried calling callback(0), but the issue still occurred sporadically.

A couple more questions/thoughts:

Is omitting this hook from main.ts the same as calling setCertificateVerifyProc(null)?
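
For reference, a minimal sketch of what I mean (per the Electron session docs, passing null reverts the session to Chromium's default certificate verification):

// Sketch only: removes any custom verify proc so the default verifier is used again.
win.webContents.session.setCertificateVerifyProc(null);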

Also, from the logs I can see that at least two requests were executed before the setCertificateVerifyProc hook ran. I would expect the hook to run before any HTTP requests. Is this a bug?

According to the logs, the hook seems to be called twice. Is this a bug?


[2021-11-03 13:58:10.647Z] OPTIONS request to https://localhost:5001/notify/negotiate?negotiateVersion=1
[2021-11-03 13:58:10.650Z] GET request to https://localhost:5001/api/foo/bar
[2021-11-03 13:58:10.655Z] setCertificateVerifyProc hook localhost net::OK
[2021-11-03 13:58:10.656Z] setCertificateVerifyProc hook localhost net::OK
[15:1103/135810.901578:ERROR:network_service_instance_impl.cc(333)] Network service crashed, restarting service.

Thanks for the support :)

zcbenz added the has-repro-comment and platform/linux labels Nov 4, 2021
lauw70 commented Feb 21, 2022

I can confirm that this sporadic issue is triggered by setting the setCertificateVerifyProc hook.

We are running into this issue on both Mac and Windows on v16.0.2 and v17.0.1

lauw70 commented Mar 8, 2022

@zcbenz
@DrNio13

I made two Fiddles:

  1. To reproduce the bug: https://gist.github.com/07754946329c5fe56cd972088986ad59
  2. A "hack" we found that fixes the problem (edit: most of the time): https://gist.github.com/20f2c7027146fbfb9e3a7406aa75a745

The difference is in the setCertificateVerifyProc handler:

+ const seenHostnames = new Set();
  mainWindow.webContents.session.setCertificateVerifyProc((request, callback) => {
    console.log('verify', request.hostname);
-  callback(-3);
+  if (!seenHostnames.has(request.hostname)) {
+      seenHostnames.add(request.hostname);
+      callback(-3);
+  }
  });

Note: I'm not sure whether this hack introduces a memory leak, since we're not calling callback(-3) every time as the handler would expect.
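
For clarity, the assembled handler from the diff above looks like this (a sketch; as noted, seenHostnames is never cleared, and repeated hostnames never get their callback invoked):

const seenHostnames = new Set();
mainWindow.webContents.session.setCertificateVerifyProc((request, callback) => {
  console.log('verify', request.hostname);
  // Only answer the first verification request per hostname; later requests
  // for the same hostname are intentionally left unanswered (the "hack").
  if (!seenHostnames.has(request.hostname)) {
    seenHostnames.add(request.hostname);
    callback(-3); // -3 = use Chromium's own verification result
  }
});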

My hypothesis on why this bug happens

If you load a page (loadURL()) that subsequently loads numerous resources from different hostnames, the handler is called multiple times for the same hostname, causing a read/write race condition or an asynchronous write to the cache that stores the verification results per hostname. This causes the network service to crash (see the logs below).

Log from Fiddle 1

Note how the page stops loading once the network service crashes. We would expect many more domains to be verified, since this particular URL loads resources from many other domains.

Electron v17.0.1 started.
verify ans.app
verify fonts.googleapis.com
verify fonts.googleapis.com
verify fonts.googleapis.com
verify assets.ans.app
verify assets.ans.app
[88330:0308/150447.513377:ERROR:network_service_instance_impl.cc(975)] Network service crashed, restarting service.

Logs from Fiddle 2

Electron v17.0.1 started.
verify ans.app
verify fonts.googleapis.com
verify fonts.googleapis.com
verify fonts.googleapis.com
verify assets.ans.app
verify www.recaptcha.net
verify www.gstatic.com
verify www.googletagmanager.com
verify static.cloudflareinsights.com
verify fonts.gstatic.com
verify code.sorryapp.com
verify www.google-analytics.com
verify js-agent.newrelic.com
verify bam.eu01.nr-data.net
verify ro-api.sorryapp.com

lauw70 commented Mar 8, 2022

I managed to reproduce the bug on these versions:

  • v14.2.16
  • v15.4.0
  • v16.0.10
  • v17.0.1
  • v17.1.1

using the first Fiddle.

Sometimes you have to run it a couple of times for the bug to show.

lauw70 commented Mar 8, 2022

@nornagon This seems to be related to a patch you made a year ago: #28358

nornagon (Member) commented Mar 8, 2022

Crash dump:

Thread 6 (crashed)
 0  Electron Framework!net::CertVerifyResult::operator=(net::CertVerifyResult const&) [atomic : 1050 + 0x0]
    rax = 0xffffffff00000000   rdx = 0x000070000e7387e8
    rcx = 0xaaaaaaaaaaaaaaaa   rbx = 0x000070000e7387e8
    rsi = 0x0000003c00a0fc10   rdi = 0x000070000e7387e8
    rbp = 0x000070000e7387d0   rsp = 0x000070000e7387b0
     r8 = 0x0000003c00a0fc10    r9 = 0x0000000000000000
    r10 = 0x0000000120312490   r11 = 0x00000001243a2d70
    r12 = 0x000070000e7387e8   r13 = 0x0000003c00a0fc10
    r14 = 0x0000003c00a0fc10   r15 = 0x002f384f0803030a
    rip = 0x00000001203173b9
    Found by: given as instruction pointer in context
 1  Electron Framework!net::CachingCertVerifier::AddResultToCache(unsigned int, net::CertVerifier::RequestParams const&, base::Time, net::CertVerifyResult const&, int) [caching_cert_verifier.cc : 170 + 0x8]
    rbp = 0x000070000e7388c0   rsp = 0x000070000e7387e0
    rip = 0x000000012031259d
    Found by: previous frame's frame pointer
 2  Electron Framework!net::CachingCertVerifier::OnRequestFinished(unsigned int, net::CertVerifier::RequestParams const&, base::Time, base::OnceCallback<void (int)>, net::CertVerifyResult*, int) [caching_cert_verifier.cc : 128 + 0xb]
    rbp = 0x000070000e7388f0   rsp = 0x000070000e7388d0
    rip = 0x00000001203124ac
    Found by: previous frame's frame pointer
 3  Electron Framework!base::internal::Invoker<base::internal::BindState<void (net::CachingCertVerifier::*)(unsigned int, net::CertVerifier::RequestParams const&, base::Time, base::OnceCallback<void (int)>, net::CertVerifyResult*, int), base::internal::UnretainedWrapper<net::CachingCertVerifier>, unsigned int, net::CertVerifier::RequestParams, base::Time, base::OnceCallback<void (int)>, base::internal::UnretainedWrapper<net::CertVerifyResult> >, void (int)>::RunOnce(base::internal::BindStateBase*, int) [bind_internal.h : 535 + 0xc]
    rbp = 0x000070000e738920   rsp = 0x000070000e738900
    rip = 0x00000001203128e7
    Found by: previous frame's frame pointer
 4  Electron Framework!network::RemoteCertVerifier::OnRemoteResponse(net::CertVerifier::RequestParams const&, net::CertVerifyResult*, int, base::OnceCallback<void (int)>, int, net::CertVerifyResult const&) [network_context.cc : 0 + 0x3]
    rbp = 0x000070000e738960   rsp = 0x000070000e738930
    rip = 0x00000001211a9687
    Found by: previous frame's frame pointer
 5  Electron Framework!base::internal::Invoker<base::internal::BindState<void (network::RemoteCertVerifier::*)(net::CertVerifier::RequestParams const&, net::CertVerifyResult*, int, base::OnceCallback<void (int)>, int, net::CertVerifyResult const&), base::internal::UnretainedWrapper<network::RemoteCertVerifier>, net::CertVerifier::RequestParams, base::internal::UnretainedWrapper<net::CertVerifyResult>, int, base::OnceCallback<void (int)> >, void (int, net::CertVerifyResult const&)>::RunOnce(base::internal::BindStateBase*, int, net::CertVerifyResult const&) [bind_internal.h : 535 + 0xd]
    rbp = 0x000070000e738990   rsp = 0x000070000e738970
    rip = 0x00000001211a9706
    Found by: previous frame's frame pointer
 6  Electron Framework!network::mojom::CertVerifierClient_Verify_ForwardToCallback::Accept(mojo::Message*) [callback.h : 142 + 0x6]
    rbp = 0x000070000e738a50   rsp = 0x000070000e7389a0
    rip = 0x000000011d48ba45
    Found by: previous frame's frame pointer
 7  Electron Framework!mojo::InterfaceEndpointClient::HandleIncomingMessageThunk::Accept(mojo::Message*) [interface_endpoint_client.cc : 895 + 0xd]
    rbp = 0x000070000e738ac0   rsp = 0x000070000e738a60
    rip = 0x000000012029a599
    Found by: previous frame's frame pointer
 8  Electron Framework!mojo::MessageDispatcher::Accept(mojo::Message*) [message_dispatcher.cc : 43 + 0x9]
    rbp = 0x000070000e738b10   rsp = 0x000070000e738ad0
    rip = 0x000000012029e615
    Found by: previous frame's frame pointer
 9  Electron Framework!mojo::InterfaceEndpointClient::HandleIncomingMessage(mojo::Message*) [interface_endpoint_client.cc : 657 + 0x5]
    rbp = 0x000070000e738c70   rsp = 0x000070000e738b20
    rip = 0x000000012029ba5d
    Found by: previous frame's frame pointer
10  Electron Framework!mojo::internal::MultiplexRouter::Accept(mojo::Message*) [multiplex_router.cc : 1104 + 0xb]
    rbp = 0x000070000e738ee0   rsp = 0x000070000e738c80
    rip = 0x00000001202a8adf
    Found by: previous frame's frame pointer
11  Electron Framework!mojo::MessageDispatcher::Accept(mojo::Message*) [message_dispatcher.cc : 43 + 0x9]
    rbp = 0x000070000e738f30   rsp = 0x000070000e738ef0
    rip = 0x000000012029e615
    Found by: previous frame's frame pointer
12  Electron Framework!mojo::Connector::OnWatcherHandleReady(unsigned int) [connector.cc : 556 + 0xd]
    rbp = 0x000070000e7391b0   rsp = 0x000070000e738f40
    rip = 0x0000000120295b07
    Found by: previous frame's frame pointer
13  Electron Framework!mojo::SimpleWatcher::Context::CallNotify(MojoTrapEvent const*) [callback.h : 241 + 0x6]
    rbp = 0x000070000e7392a0   rsp = 0x000070000e7391c0
    rip = 0x00000001202ba1d6
    Found by: previous frame's frame pointer
14  Electron Framework!mojo::core::WatcherDispatcher::InvokeWatchCallback(unsigned long, unsigned int, mojo::core::HandleSignalsState const&, unsigned int) [watcher_dispatcher.cc : 93 + 0x4]
    rbp = 0x000070000e7392f0   rsp = 0x000070000e7392b0
    rip = 0x000000011e0e90bb
    Found by: previous frame's frame pointer
15  Electron Framework!mojo::core::Watch::InvokeCallback(unsigned int, mojo::core::HandleSignalsState const&, unsigned int) [watch.cc : 78 + 0xe]
    rbp = 0x000070000e739330   rsp = 0x000070000e739300
    rip = 0x000000011e0e84a0
    Found by: previous frame's frame pointer
16  Electron Framework!mojo::core::RequestContext::~RequestContext() [request_context.cc : 72 + 0xd]
    rbp = 0x000070000e7394e0   rsp = 0x000070000e739340
    rip = 0x000000011e0e33b5
    Found by: previous frame's frame pointer
17  Electron Framework!mojo::core::NodeChannel::OnChannelMessage(void const*, unsigned long, std::__1::vector<mojo::PlatformHandle, std::__1::allocator<mojo::PlatformHandle> >) [node_channel.cc : 834 + 0x5]
    rbp = 0x000070000e739800   rsp = 0x000070000e7394f0
    rip = 0x000000011e0d9f97
    Found by: previous frame's frame pointer
18  Electron Framework!mojo::core::Channel::TryDispatchMessage(base::span<char const, 18446744073709551615ul>, unsigned long*) [channel.cc : 933 + 0x10]
    rbp = 0x000070000e739a00   rsp = 0x000070000e739810
    rip = 0x000000011e0c8eec
    Found by: previous frame's frame pointer
19  Electron Framework!non-virtual thunk to mojo::core::(anonymous namespace)::ChannelMac::OnMachMessageReceived(unsigned int) [channel_mac.cc : 654 + 0x11]
    rbp = 0x000070000e739bf0   rsp = 0x000070000e739a10
    rip = 0x000000011e0ee04f
    Found by: previous frame's frame pointer
20  Electron Framework!base::MessagePumpKqueue::Run(base::MessagePump::Delegate*) [message_pump_kqueue.cc : 507 + 0x9]
    rbp = 0x000070000e739de0   rsp = 0x000070000e739c00
    rip = 0x000000011ffda696
    Found by: previous frame's frame pointer
21  Electron Framework!base::sequence_manager::internal::ThreadControllerWithMessagePumpImpl::Run(bool, base::TimeDelta) [thread_controller_with_message_pump_impl.cc : 468 + 0x6]
    rbp = 0x000070000e739e20   rsp = 0x000070000e739df0
    rip = 0x000000011ff717ea
    Found by: previous frame's frame pointer
22  Electron Framework!base::RunLoop::Run(base::Location const&) [run_loop.cc : 140 + 0x13]
    rbp = 0x000070000e739ed0   rsp = 0x000070000e739e30
    rip = 0x000000011ff3b711
    Found by: previous frame's frame pointer
23  Electron Framework!base::Thread::Run(base::RunLoop*) [thread.cc : 334 + 0x2a]
    rbp = 0x000070000e739f10   rsp = 0x000070000e739ee0
    rip = 0x000000011ff8d778
    Found by: previous frame's frame pointer
24  Electron Framework!base::Thread::ThreadMain() [thread.cc : 405 + 0xd]
    rbp = 0x000070000e739f80   rsp = 0x000070000e739f20
    rip = 0x000000011ff8d90d
    Found by: previous frame's frame pointer
25  Electron Framework!base::(anonymous namespace)::ThreadFunc(void*) [platform_thread_posix.cc : 99 + 0x9]
    rbp = 0x000070000e739fb0   rsp = 0x000070000e739f90
    rip = 0x000000011ffaa2e5
    Found by: previous frame's frame pointer
26  libsystem_pthread.dylib!_pthread_start + 0x7d
    rbp = 0x000070000e739fd0   rsp = 0x000070000e739fc0
    rip = 0x00007ff8058e84f4
    Found by: previous frame's frame pointer
27  libsystem_pthread.dylib!thread_start + 0xf
    rbx = 0x0000000000000000   rbp = 0x000070000e739ff0
    rsp = 0x000070000e739fe0   rip = 0x00007ff8058e400f
    Found by: call frame info

That's crashing here, so verify_result must be bad by the time AddResultToCache is called.

Looks like what's happening is that the SSLClientSocketImpl is getting destroyed while the CertificateVerifyProc is in-flight due to a SPDY upgrade:

2   Electron Framework                  0x0000000121542ce8 net::SSLClientSocketImpl::~SSLClientSocketImpl() + 136
3   Electron Framework                  0x0000000121542f1e net::SSLClientSocketImpl::~SSLClientSocketImpl() + 14
4   Electron Framework                  0x00000001215498a9 net::SSLConnectJob::~SSLConnectJob() + 233
5   Electron Framework                  0x000000012154996e net::SSLConnectJob::~SSLConnectJob() + 14
6   Electron Framework                  0x00000001215527ea net::TransportClientSocketPool::RemoveConnectJob(net::ConnectJob*, net::TransportClientSocketPool::Group*) + 218
7   Electron Framework                  0x0000000121551bfd net::TransportClientSocketPool::CancelRequest(net::ClientSocketPool::GroupId const&, net::ClientSocketHandle*, bool) + 1053
8   Electron Framework                  0x000000012152b9bb net::ClientSocketHandle::ResetInternal(bool, bool) + 315
9   Electron Framework                  0x000000012152be04 net::ClientSocketHandle::ResetAndCloseSocket() + 52
10  Electron Framework                  0x00000001214ac07c net::HttpStreamFactory::Job::OnSpdySessionAvailable(base::WeakPtr<net::SpdySession>) + 108
11  Electron Framework                  0x0000000121590612 net::SpdySessionPool::UpdatePendingRequests(net::SpdySessionKey const&) + 802
12  Electron Framework                  0x000000012159249a base::internal::Invoker<base::internal::BindState<void (net::SpdySessionPool::*)(net::SpdySessionKey const&), base::WeakPtr<net::SpdySessionPool>, net::SpdySessionKey>, void ()>::RunOnce(base::internal::BindStateBase*) + 138

We should be able to detect this and tear down the in-flight verify request gracefully when the underlying socket is destroyed to avoid crashing.

nornagon closed this as completed Mar 8, 2022
nornagon added the platform/all, status/confirmed, and has-repro-gist labels and removed the platform/linux and has-repro-comment labels Mar 8, 2022
nornagon self-assigned this Mar 8, 2022
nornagon reopened this Mar 8, 2022
nornagon (Member) commented Mar 8, 2022

Looks like the way to get notified about request cancellation is by setting out_req to something here:

Destruction of that object signals cancellation: https://source.chromium.org/chromium/chromium/src/+/main:net/cert/cert_verifier.h;l=83;drc=459dba1945dd88c8dd98c6fbfd617f84e62eba99

CoalescingCertVerifier's logic is a bit complicated, but it does use this signal, so it serves as an upstream example.

nornagon (Member) commented Mar 9, 2022

@lauw70 thanks for the detailed repro steps. I was able to track down the cause of the issue, and I believe #33204 fixes it.

lauw70 commented Mar 9, 2022

@nornagon thank you so much for putting in the time. Currently this is the last blocking bug for our next production release. You can't imagine how much this helps us out.

I'm almost afraid to ask, but how much time does it usually take before a PR like this hits a stable release?
