
slow inference #1016

Open
1 of 5 tasks
luyao-cv opened this issue Nov 10, 2023 · 1 comment
Labels
enhancement (New feature or request)

Comments

@luyao-cv

Is your feature request related to a problem? Please describe.

Inference runs through four models in total:

  1. PPG encoding with the Whisper PPG model. The model is fairly large and this stage dominates, taking about 40 s.
  2. HuBERT encoding, about 10 s.
  3. Pitch encoding, about 10 s.
  4. svc_infer, about 10 s.

Steps 1-2-3 run serially to save GPU memory (see the sketch below).
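Roughly, the reported layout looks like the following sketch. The stage callables passed into `run_pipeline` are hypothetical placeholders for the four models listed above (they are not this repository's API); only the one-model-at-a-time, clear-the-cache-between-stages pattern and the per-stage timing are illustrated.

```python
# A minimal sketch of the serial layout described above, assuming the four
# stage functions are supplied by the caller (hypothetical placeholders, not
# real APIs of this repository).
import time

import torch


def timed(name, fn, *args):
    """Run one stage and print its wall-clock time."""
    t0 = time.perf_counter()
    out = fn(*args)
    print(f"{name}: {time.perf_counter() - t0:.1f}s")
    return out


def run_pipeline(audio, encode_ppg, encode_hubert, encode_pitch, svc_infer):
    # Stages 1-3 run one after another so only one large model is resident
    # in GPU memory at a time; empty_cache() releases cached blocks in between.
    ppg = timed("ppg (whisper)", encode_ppg, audio)        # ~40 s reported
    torch.cuda.empty_cache()
    units = timed("hubert", encode_hubert, audio)          # ~10 s reported
    torch.cuda.empty_cache()
    f0 = timed("pitch", encode_pitch, audio)               # ~10 s reported
    torch.cuda.empty_cache()
    return timed("svc_infer", svc_infer, ppg, units, f0)   # ~10 s reported
```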

Describe alternatives you've considered

1

Additional context

1

Code of Conduct

  • I agree to follow this project's Code of Conduct

Are you willing to resolve this issue by submitting a Pull Request?

  • Yes, I have the time, and I know how to start.
  • Yes, I have the time, but I don't know how to start. I would need guidance.
  • No, I don't have the time, although I believe I could do it if I had the time...
  • No, I don't have the time and I wouldn't even know how to start.
@luyao-cv added the enhancement (New feature or request) label on Nov 10, 2023
@luyao-cv changed the title from "推理速度很慢,有什么解决的办法吗" ("Inference is very slow, is there any way to fix it?") to "slow inference" on Nov 10, 2023
@34j
Collaborator

34j commented Nov 19, 2023

It is true that pitch inference and ContentVec inference can be parallelized, but in practice this may have little effect because dio is used in realtime inference. Also, since this repository does not use Whisper, I think you are opening this issue in the wrong repo.
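For reference, here is a minimal sketch of that kind of parallelization, assuming the content encoder is available as a plain callable. `extract_f0_dio` and `encode_parallel` are illustrative names rather than this repository's API; only `pyworld.dio`/`pyworld.stonemask` and `concurrent.futures` are real calls.

```python
# A sketch (not the repository's actual code) of overlapping CPU-bound dio
# pitch extraction with the GPU-bound content-encoder forward pass.
from concurrent.futures import ThreadPoolExecutor

import numpy as np
import pyworld as pw


def extract_f0_dio(audio: np.ndarray, sr: int) -> np.ndarray:
    """Pitch extraction with dio + stonemask, as used in realtime inference."""
    x = audio.astype(np.float64)
    f0, t = pw.dio(x, sr)
    return pw.stonemask(x, f0, t, sr)


def encode_parallel(audio: np.ndarray, sr: int, content_encoder):
    # dio runs in native code on the CPU while the content encoder spends most
    # of its time inside CUDA kernels, so two threads are enough to overlap
    # the two stages without multiprocessing.
    with ThreadPoolExecutor(max_workers=2) as pool:
        f0_future = pool.submit(extract_f0_dio, audio, sr)
        units_future = pool.submit(content_encoder, audio)
        return f0_future.result(), units_future.result()
```

A thread pool is sufficient here because dio does its work in native code (releasing the GIL) while the content encoder mostly waits on CUDA; as noted above, the overall gain is limited when one of the two stages dominates.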
