Fine-tuning error. #160

Open
rafaepires opened this issue Feb 8, 2024 · 3 comments

Comments

@rafaepires

Hey guys,

I'm trying to do fine-tuning using FinGPT.
According to the article https://medium.datadriveninvestor.com/introducing-fingpt-forecaster-the-future-of-robo-advisory-services-50add34e3d3c

it takes two steps:
1- Prepare the data (prepare_data.ipynb)
2- Run the training process (train_lora.py)

Executing the first step (prepare_data.ipynb) generates two folders, one containing the .csv files and the other containing the .json files, as in the screenshot below.

[Screenshot from 2024-02-08 16-38-10 showing the generated folders]

The problem is in the second step:
When executing the train_lora.py file I get the following error: AttributeError: 'NoneType' object has no attribute 'strip', as in the screenshot below.

[Screenshot from 2024-02-08 16-42-37 showing the error traceback]

I don't know what else to do... please, could someone help me?
Many thanks!!

@ynjiun

ynjiun commented Feb 13, 2024

It seems the 'answer' is empty. You might want to check the data-preparation step to see if it generates an 'answer' at all. Did you set up your GPT-4 API key correctly? Otherwise, no ground-truth answer will be generated.
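As a quick sanity check, here is a minimal sketch (assuming each prepared .json file holds a list of records with 'prompt' and 'answer' keys; adjust the path and key names to whatever prepare_data.ipynb actually wrote for you) that counts how many records are missing an answer:

```python
import json
from pathlib import Path

# Hypothetical output folder from prepare_data.ipynb; point this at your own .json folder.
json_dir = Path("fingpt-forecaster-data/json")

total, missing = 0, 0
for json_file in sorted(json_dir.glob("*.json")):
    with open(json_file) as f:
        records = json.load(f)  # assumed to be a list of dicts
    for record in records:
        total += 1
        answer = record.get("answer")  # "answer" is an assumed key name
        if answer is None or not str(answer).strip():
            missing += 1
            print(f"{json_file.name}: empty answer for prompt {str(record.get('prompt'))[:60]}")

print(f"{missing} of {total} records have no usable answer")
```

If most answers are missing, the GPT-4 calls in prepare_data.ipynb probably never succeeded, which is typically caused by the API key not being set (usually the OPENAI_API_KEY environment variable, though the notebook may read it from somewhere else).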

@Weiyao-Li
Member

This seems similar to a null-pointer-exception problem, where the object itself does not exist. You should start from that and look for the "empty" object; it is possible that it is never generated at all.
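One way to confirm this and keep training from crashing is to filter out records whose fields were never generated before they reach the code that calls .strip(). This is only a sketch (the 'prompt'/'answer' key names and the raw_samples list are assumptions, not the actual FinGPT-Forecaster code):

```python
# Hypothetical records as they might come out of the data-preparation step.
raw_samples = [
    {"prompt": "Forecast AAPL for next week.", "answer": "Positive outlook because ..."},
    {"prompt": "Forecast MSFT for next week.", "answer": None},  # GPT-4 never returned an answer
]

def clean_sample(sample):
    """Return a stripped copy of the sample, or None if a field is missing."""
    prompt = sample.get("prompt")
    answer = sample.get("answer")
    if prompt is None or answer is None:
        # Calling .strip() on one of these is exactly what raises
        # AttributeError: 'NoneType' object has no attribute 'strip'
        return None
    return {"prompt": prompt.strip(), "answer": answer.strip()}

cleaned = [c for c in map(clean_sample, raw_samples) if c is not None]
print(f"Kept {len(cleaned)} of {len(raw_samples)} samples")
```

Even with a guard like this, it is better to fix the root cause in the data-preparation step; otherwise you end up training on fewer (or zero) examples.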
