
Video Glitches with Fetch Loader #3747

Closed · 4 of 5 tasks · softworkz opened this issue Apr 8, 2021 · 6 comments · Fixed by #3755

softworkz commented Apr 8, 2021

What version of Hls.js are you using?

1.0.0

What browser and OS (including versions) are you using?

Chrome 89, Win 10

Test stream:

HLS Test Stream as ZIP:
https://github.com/softworkz/hlstestfiles/releases/tag/2.0

Configuration:

Without Fetch Loader

{
  "debug": true,
  "enableWorker": true,
  "lowLatencyMode": true,
  "backBufferLength": 90
}

With Fetch Loader

{
  "debug": true,
  "enableWorker": true,
  "lowLatencyMode": true,
  "backBufferLength": 90,
  "progressive": true
}
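
For anyone reproducing this, a minimal harness along the following lines should work. This is only a sketch, assuming hls.js 1.0.0, a page with a <video id="video"> element, and a locally hosted copy of the test stream linked above; STREAM_URL and the element id are placeholders.

// Minimal reproduction sketch (assumed setup, not part of the original report).
import Hls from 'hls.js';

// Toggle between the default XHR loader path and the fetch loader path.
const useFetchLoader = true;

const config = {
  debug: true,
  enableWorker: true,
  lowLatencyMode: true,
  backBufferLength: 90,
  // progressive: true enables the experimental fetch-based progressive loading path.
  ...(useFetchLoader ? { progressive: true } : {}),
};

const video = document.getElementById('video') as HTMLVideoElement;
// Placeholder URL; point it at a locally hosted copy of the test stream.
const STREAM_URL = 'https://example.com/test/master.m3u8';

if (Hls.isSupported()) {
  const hls = new Hls(config);
  hls.attachMedia(video);
  hls.loadSource(STREAM_URL);
}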

Checklist

Steps to reproduce

  1. Play with XHR Loader: Video plays flawlessly
  2. Play with Fetch Loader: Video shows lots of visual "glitches"

Examples

(Two screenshots showing the visual glitches/artifacts during playback.)

History

The last version where it worked is 1.0.0 alpha1.

From then on, it got steadily worse:

  • 1.0.0 alpha2: Playback speed issues: plays double or half speed
  • 1.0.0 alpha5 and beta 1: video/audio off-sync, audio sequences (few seconds) are being repeated
  • 1.0.0 RC1 up to final release: Visual glitches/artifacts as shown in the screenshots above

Please let me know if you need more information!

robwalch commented Apr 8, 2021

progressive has been marked experimental. Support for this feature will be limited. I recommend against using it unless you can contribute fixes and help maintain the feature.

softworkz (Contributor, Author) commented:

> progressive has been marked experimental. Support for this feature will be limited. I recommend against using it unless you can contribute fixes and help maintain the feature.

That's exactly what I'd hoped to avoid ;-)

I suppose you're currently the only one who knows how it works in detail?

robwalch commented Apr 8, 2021

> I suppose you're currently the only one who knows how it works in detail?

@johnBartos did a lot of the initial work. The changes over the last year and the decision to mark it experimental came with the shift from LHLS to LL-HLS (away from Chunked Transfer in favor of the updated HLS spec) and the passing of the torch to me.

> That's exactly what I'd hoped to avoid ;-)

Well, if you can be bothered to take a look, here are some pointers:

The fetch loader's highWaterMark determines how many bytes to load before pushing them through the transmux pipeline. It is set to 128kb. It's mainly there for tuning performance, but it can also be a factor in issues similar to the one you are seeing, where AVC samples might be configured in a way that makes pushing a partial segment or GoP problematic. I don't have more specifics, other than that there may be cases where the transmuxer should wait for more data or samples before appending but does not (it could be something else entirely, but it's usually something like that). Disabling progressive eliminates that issue, since what is pushed through is determined by the stream rather than by hls.js or by whatever bytes happened to come through the fetch loader (which, other than setting highWaterMark, we don't have control over currently). There might be some lessons to be learned from dash.js or other projects that excel at this. I don't know, as I cannot dedicate the time and effort required to support this with everything else on the table.

const MIN_CHUNK_SIZE = Math.pow(2, 17); // 128kb
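
For intuition, here is a minimal sketch of that buffering pattern: accumulate bytes from a fetch ReadableStream reader and only hand them downstream once at least highWaterMark bytes are available. This is illustrative only, not the actual hls.js fetch loader; loadProgressively, onProgress, and concat are made-up names.

// Simplified sketch of highWaterMark-style chunk buffering over fetch.
// NOT the hls.js fetch loader; the names here are hypothetical.
async function loadProgressively(
  url: string,
  highWaterMark: number, // e.g. Math.pow(2, 17) = 128kb
  onProgress: (chunk: Uint8Array) => void
): Promise<void> {
  const response = await fetch(url);
  const reader = response.body!.getReader();
  let buffered: Uint8Array[] = [];
  let bufferedBytes = 0;

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffered.push(value);
    bufferedBytes += value.byteLength;
    // Only push data downstream once enough bytes have accumulated,
    // so the transmux pipeline is not fed arbitrarily small chunks.
    if (bufferedBytes >= highWaterMark) {
      onProgress(concat(buffered, bufferedBytes));
      buffered = [];
      bufferedBytes = 0;
    }
  }
  // Flush whatever remains at the end of the stream.
  if (bufferedBytes > 0) {
    onProgress(concat(buffered, bufferedBytes));
  }
}

// Joins buffered chunks into one contiguous Uint8Array.
function concat(chunks: Uint8Array[], totalBytes: number): Uint8Array {
  const out = new Uint8Array(totalBytes);
  let offset = 0;
  for (const chunk of chunks) {
    out.set(chunk, offset);
    offset += chunk.byteLength;
  }
  return out;
}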

The "progressive" streamed partials go through

  • progressCallback: FragmentLoadProgressCallback
    • _handleFragmentLoadProgress
    • transmuxer. push
    • transmux
    • _handleTransmuxComplete
    • this repeats until loading is done and these steps finish up:
    • _handleFragmentLoadComplete
    • _handleTransmuxerFlush

The transmuxer is where it gets tricky since the path changes with enableWorker. You want to disable the worker when debugging so you can step through everything in the main event loop and get console logs. The content-type (ts, aac, mp3, m4s) also determines which demuxer/remuxer pair is used on the incoming media.
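
For debugging, a configuration along these lines (the same options as the repro config above, but with the worker disabled) keeps the whole transmux path on the main thread where breakpoints and console logs are easy to follow. A sketch:

// Debug configuration sketch: identical to the repro config, but enableWorker is
// false so the demuxer/remuxer code runs on the main thread and can be stepped through.
import Hls from 'hls.js';

const debugConfig = {
  debug: true,
  enableWorker: false, // keep transmuxing in the main event loop for breakpoints and logs
  lowLatencyMode: true,
  backBufferLength: 90,
  progressive: true, // keep the experimental fetch-loader path under test
};

const hls = new Hls(debugConfig);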

softworkz (Contributor, Author) commented:
Thanks a ton for the comprehensive reply. This is very helpful and time-saving.

Basically, I'll need a combination of post-alpha1 and the current version. alpha1 didn't swallow any video data, but it starts by playing video for 1-2 seconds, then hangs for 1-2 seconds, and then plays normally. The release version doesn't do this, but it "swallows" some video data. Maybe it can be approached from the change history instead; I'll have to see.

Thanks again!

robwalch commented Apr 8, 2021

@softworkz np, and sorry for the regressions there. Let me know if you have questions about diffs in any of those areas. One big change that happened late in the beta/rc period was that I noticed the pipeline was still splitting segments into two appends when progressive was disabled (push always left a little something for flush 🤮). This was really important to fix for the default playback path, and the progressive path may have been impacted as a result. The history on tsdemuxer/mp4-remuxer or mp4-passthrough (depending on whether you're looking at ts or fmp4 streams) will highlight some of that work.
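
To make the "push leaves something for flush" idea concrete, here is a rough sketch using a hypothetical, heavily simplified transmuxer interface (not the real hls.js Transmuxer API): when progressive is disabled the whole segment is pushed in one call, and ideally flush() should then produce nothing extra, so the segment results in a single append instead of two.

// Hypothetical, simplified transmuxer interface for illustration only.
interface SimpleTransmuxer {
  push(segmentData: Uint8Array): Uint8Array | null; // remuxed data ready so far
  flush(): Uint8Array | null;                       // whatever remains at end of segment
}

// With progressive disabled, the whole segment arrives at once, so ideally
// push() yields one complete remuxed payload and flush() yields nothing extra.
function appendsForWholeSegment(tx: SimpleTransmuxer, segment: Uint8Array): number {
  let appends = 0;
  if (tx.push(segment)) appends++; // first (and ideally only) append
  if (tx.flush()) appends++;       // a second append here is the behavior described above
  return appends;
}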

Isolating the issue to a commit and hosting the sample stream would go a long way to making this easier for me to troubleshoot.

softworkz (Contributor, Author) commented:
OK, great! I'll try to get to that point and let you know how far I get.
