
feat(train): support ipu #264

Draft · wants to merge 22 commits into main
Conversation

@34j (Collaborator) commented Apr 9, 2023

No description provided.

@codecov-commenter commented Apr 9, 2023

Codecov Report

Merging #264 (d343cb1) into main (427712f) will increase coverage by 0.13%.
The diff coverage is 17.28%.


@@            Coverage Diff             @@
##             main     #264      +/-   ##
==========================================
+ Coverage   20.45%   20.59%   +0.13%     
==========================================
  Files          38       38              
  Lines        3222     3210      -12     
  Branches      414      418       +4     
==========================================
+ Hits          659      661       +2     
+ Misses       2546     2532      -14     
  Partials       17       17              
Impacted Files Coverage Δ
src/so_vits_svc_fork/f0.py 20.51% <0.00%> (+0.67%) ⬆️
src/so_vits_svc_fork/train.py 20.29% <3.12%> (-3.27%) ⬇️
src/so_vits_svc_fork/modules/commons.py 20.00% <17.85%> (+0.46%) ⬆️
src/so_vits_svc_fork/modules/synthesizers.py 17.92% <25.00%> (-0.45%) ⬇️
src/so_vits_svc_fork/utils.py 21.52% <33.33%> (+1.16%) ⬆️
src/so_vits_svc_fork/modules/mel_processing.py 27.27% <100.00%> (+7.53%) ⬆️


@34j (Collaborator, Author) commented Apr 9, 2023

  File "/usr/local/lib/python3.8/dist-packages/lightning/pytorch/loops/optimization/manual.py", line 109, in advance
    training_step_output = call._call_strategy_hook(trainer, "training_step", *kwargs.values())
  File "/usr/local/lib/python3.8/dist-packages/lightning/pytorch/trainer/call.py", line 288, in _call_strategy_hook
    output = fn(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/lightning/pytorch/strategies/ipu.py", line 268, in training_step
    return self._step(RunningStage.TRAINING, *args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/lightning/pytorch/strategies/ipu.py", line 264, in _step
    return poptorch_model(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/poptorch/_poplar_executor.py", line 1151, in __call__
    self._compile(in_tensors)
  File "/usr/local/lib/python3.8/dist-packages/poptorch/_impl.py", line 358, in wrapper
    return func(self, *args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/poptorch/_poplar_executor.py", line 911, in _compile
    self._executable = self._compileWithDispatch(in_tensors_trace_view)
  File "/usr/local/lib/python3.8/dist-packages/poptorch/_impl.py", line 164, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/poptorch/_poplar_executor.py", line 787, in _compileWithDispatch
    result = self._model(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/lightning/pytorch/overrides/base.py", line 90, in forward
    output = self._forward_module.training_step(*inputs, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/so_vits_svc_fork/train.py", line 338, in training_step
    y_hat_mel = mel_spectrogram_torch(y_hat.squeeze(1), self.hparams)
  File "/usr/local/lib/python3.8/dist-packages/so_vits_svc_fork/modules/mel_processing.py", line 34, in mel_spectrogram_torch
    return torchaudio.transforms.MelSpectrogram(
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torchaudio/transforms/_transforms.py", line 642, in forward
    specgram = self.spectrogram(waveform)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torchaudio/transforms/_transforms.py", line 108, in forward
    return F.spectrogram(
  File "/usr/local/lib/python3.8/dist-packages/torchaudio/functional/functional.py", line 121, in spectrogram
    spec_f = torch.stft(
  File "/usr/local/lib/python3.8/dist-packages/torch/functional.py", line 632, in stft
    return _VF.stft(input, n_fft, hop_length, win_length, window,  # type: ignore[attr-defined]
poptorch.poptorch_core.Error: In poptorch/source/dispatch_tracer/Tensor.cpp:68: 'poptorch_cpp_error': Unsupported tensor input type from pytorch: ComplexFloat

As poptorch provides no way to move tensors to the CPU mid-graph (to compute the STFT there), IPU support seems impossible 🤬
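For context, the failure boils down to `torchaudio.transforms.MelSpectrogram` calling `torch.stft`, which produces a complex64 ("ComplexFloat") tensor, and that is the dtype PopTorch's dispatch tracer rejects during IPU graph compilation. A minimal sketch reproducing just the offending dtype on CPU (the FFT/hop sizes here are illustrative, not this repo's actual config):

```python
import torch

# torchaudio's Spectrogram ultimately calls torch.stft, whose output
# is a complex tensor - the "ComplexFloat" type poptorch cannot compile.
waveform = torch.randn(1, 16000)  # dummy 1-second mono signal at 16 kHz
spec = torch.stft(
    waveform,
    n_fft=1024,
    hop_length=256,
    window=torch.hann_window(1024),
    return_complex=True,
)
print(spec.dtype)  # torch.complex64
```

Since the complex tensor is created inside the compiled training step, there is no point at which it could be rerouted to the CPU.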
