
Applying diffs failing silently #1128

Closed
TheoMcCabe opened this issue Apr 24, 2024 · 7 comments · Fixed by #1138
Assignees
Labels
bug Something isn't working triage Interesting but stale issue. Will be closed if inactive for 3 more days after label added.

Comments

@TheoMcCabe
Collaborator

TheoMcCabe commented Apr 24, 2024

Expected Behavior

I would expect GPT engineer to either successfully apply all diffs sent by the AI, or to fail in a way that tells you which diffs were applied and which failed, so that you can manually salvage the failed parts by copy and paste.

Current Behavior

The current behaviour seems to be that it applies the sections of the diff that it can and silently throws the rest of the code away. From a user's perspective it looks like everything has gone well, but in reality it has only applied a portion of the diff.

This is really bad from a usability perspective. For one, a partially applied diff is obviously never going to be working code, so applying it is pointless. Also, knowing that this is the behaviour of gpte means I need to manually check every single output to verify it has applied the whole diff, which is a complete waste of time for diffs that do apply successfully.

Not applying any of the diffs at all would actually be a better outcome for me, as at least I would have a consistent workflow of copy and pasting. However, a more sensible solution is to apply the diffs it can, and if it can't apply a diff for a file, leave that file completely untouched and instead provide an error output that is convenient for the user to copy and paste manually into the file.
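To make the proposal concrete, the per-file, all-or-nothing behaviour could be sketched roughly like this. This is purely illustrative: `apply_hunk` and the `(old, new)` hunk representation are made up for the example and are not gpt-engineer's actual internals.

```python
def apply_hunk(content: str, hunk: tuple) -> str:
    """Apply one simplified hunk, modeled here as an (old_text, new_text) pair."""
    old, new = hunk
    if old not in content:
        raise ValueError(f"hunk context not found: {old!r}")
    return content.replace(old, new, 1)


def apply_diffs_atomically(files: dict, diffs: dict) -> tuple:
    """Apply every hunk for a file, or leave that file entirely untouched.

    Returns the updated files plus a map of file -> hunks that were skipped,
    so the caller can surface them to the user for manual patching.
    """
    updated = dict(files)
    failed = {}
    for path, hunks in diffs.items():
        try:
            content = updated[path]
            for hunk in hunks:
                content = apply_hunk(content, hunk)
        except (KeyError, ValueError):
            failed[path] = hunks      # discard partial work for this file
        else:
            updated[path] = content   # commit only if every hunk applied
    return updated, failed
```

The point of the `try/else` shape is that a file is only written back once all of its hunks succeed, so a half-applied diff can never silently land on disk.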

Failure Logs

I can't upload failure logs, as the code I'm working on is sensitive.

@TheoMcCabe TheoMcCabe added bug Something isn't working triage Interesting but stale issue. Will be closed if inactive for 3 more days after label added. labels Apr 24, 2024
@similato87
Collaborator

Hello @TheoMcCabe ,

Thank you for bringing up this issue regarding the diff application process. I apologize for the inconvenience you've experienced in your project.

You are correct in your understanding of our current strategy for applying diffs:

  1. Validation and Correction: We first validate and correct the diffs based on format. If a diff fails validation, we attempt an automated self-heal using our LLMs.
  2. Discard Unrecoverable Diffs: If the self-heal process cannot handle the error, we discard these diffs.
  3. Apply Valid Diffs: All corrected diffs are then applied.

Outputs indicating which diffs have been discarded are available in the console and logs. This mechanism is designed to provide a smooth experience for users at all levels and allows for multiple attempts to gradually refine a complex code base.
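As a rough sketch, that three-step flow looks something like the following. The `validate`, `self_heal`, and `apply` callables here are hypothetical stand-ins for gpt-engineer's actual validation, LLM self-heal, and patching code, not its real API.

```python
def process_diffs(raw_diffs, validate, self_heal, apply):
    """Validate, self-heal, then apply diffs; return (applied, discarded)."""
    applied, discarded = [], []
    for diff in raw_diffs:
        # Step 1: validate; if invalid, make one self-heal attempt.
        candidate = diff if validate(diff) else self_heal(diff)
        if candidate is not None and validate(candidate):
            apply(candidate)              # Step 3: apply valid diffs
            applied.append(candidate)
        else:
            # Step 2: unrecoverable; keep the original around for reporting.
            discarded.append(diff)
    return applied, discarded
```

Returning the discarded list explicitly (rather than just logging inside the loop) is what makes it possible to print a clear summary at the end of a run.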

Additionally, we provide users the option to review and manually decide on applying diffs. You can see the planned changes and make an informed decision at this stage of the process:
View Code Here

Would you suggest an interactive approach for applying diffs? For example, showing each validated and corrected diff and allowing users to choose whether to apply them sequentially by user input?

@TheoMcCabe
Collaborator Author

TheoMcCabe commented Apr 26, 2024

Interactively applying diffs I'm not bothered about. What I find very difficult from a user's perspective is that it is very hard to know when diffs have failed and haven't been applied. You say that "Outputs indicating which diffs have been discarded are available in the console and logs." I disagree: it is very unclear which diffs have not been applied, and this is the problem.

When some of my diffs aren't applied, the output in the console makes it look like everything has worked fine. This is the bit that needs improving.

My recommendation is that the last thing sent to the user in the console should be the diffs that were not successfully applied. These need to be output to the console in a really easy-to-read, copy-and-paste-friendly format. It should use colouring and wording to clearly show that these diffs could not be applied and so should be applied manually by the user.
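Something along these lines, for example. This is a hypothetical sketch of the suggested output, using plain ANSI escape codes for colour; the helper name and wording are made up, not existing gpt-engineer code.

```python
RED = "\033[31m"
BOLD = "\033[1m"
RESET = "\033[0m"


def format_failed_diffs(failed: dict) -> str:
    """Render unapplied diffs as a red, copy/paste-friendly block.

    `failed` maps file path -> raw diff text that could not be applied.
    Returns an empty string when everything applied cleanly.
    """
    if not failed:
        return ""
    lines = [f"{BOLD}{RED}The following diffs could NOT be applied "
             f"and must be patched in manually:{RESET}"]
    for path, diff_text in failed.items():
        lines.append(f"{RED}--- {path} ---{RESET}")
        lines.append(diff_text)  # raw diff left uncoloured for easy pasting
    return "\n".join(lines)
```

Printing this as the very last block of a run, after everything else, would make a partial failure impossible to miss.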

@Emasoft

Emasoft commented Apr 28, 2024

Do the same diffs work if applied with python-unidiff (https://github.com/matiasb/python-unidiff/)?
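For reference, a quick parse check with python-unidiff could look like this. It is a hedged sketch: `is_valid_unified_diff` is a made-up helper name (not a unidiff API), and the third-party `unidiff` package must be installed separately (`pip install unidiff`).

```python
try:
    from unidiff import PatchSet, UnidiffParseError
    HAVE_UNIDIFF = True
except ImportError:
    HAVE_UNIDIFF = False


def is_valid_unified_diff(diff_text: str) -> bool:
    """True if python-unidiff parses the text into at least one patched file.

    A parse error (e.g. a hunk shorter than its declared line counts) or an
    empty result (no recognizable file headers) both count as invalid.
    """
    try:
        return len(PatchSet.from_string(diff_text)) > 0
    except UnidiffParseError:
        return False
```

If AI-generated diffs fail this check, the problem is the diff text itself rather than the code that applies it.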

Not applying any of the diffs at all would actually be a better outcome for me, as at least i would have a consistent workflow of copy and pasting...

Regarding the use of copy/paste instead of diffs: not having to use diffs would be ideal, but there are limitations of the AI models that make it very difficult to get the full original code, with just the AI's changes applied, as output.

If you want to understand those limitations, I suggest these two excellent articles by the Sweep devs:

https://github.com/sweepai/sweep/blob/main/docs/pages/blogs/gpt-4-modification.mdx

and the follow-up:

https://github.com/sweepai/sweep/blob/main/docs/pages/blogs/refactor-python.mdx

@ATheorell
Collaborator

Hi,

yes, they should work with python-unidiff, if the AI produces diffs that are correct enough to be normalized into exact unified diffs. The general problem is not applying diffs, but that the AI sometimes delivers low-quality diffs.

@TheoMcCabe
Collaborator Author

https://github.com/sweepai/sweep/blob/main/docs/pages/blogs/refactor-python.mdx

Thanks @Emasoft, I'm aware of these Sweep articles, but it's good to read them again. You seem to have misunderstood the issue I'm raising here; apologies if I wasn't clear enough.

This issue relates to the behaviour of the gpte CLI when the AI-generated unified diffs are not valid and cannot be applied. Specifically, I think there is an issue with how this failure is surfaced to users.

I'm not suggesting we rewrite code files from scratch on every run, or change our approach to diffing.

@similato87
Collaborator

similato87 commented May 2, 2024

Hi @TheoMcCabe, sorry for the delayed response; I just returned from my trip today. Axel sent me the files, and I've pinpointed the issue. The problem lies in the diff validation: it failed to correct this Docker hunk, and the problematic hunk was never printed to the console from start to finish.

I will create a PR to: 1) address this failure, and 2) ensure the invalid hunk is printed to both the console and the debug file.
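Conceptually, part 2 could be as simple as routing the invalid hunk through a logger with both a console handler and a file handler. This is an illustrative sketch only, not the actual PR code; the logger name and `diff_errors.log` file name are made up.

```python
import logging

# One logger, two destinations: stderr for the user, a file for debugging.
logger = logging.getLogger("diff_debug")
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.StreamHandler())                 # console output
logger.addHandler(logging.FileHandler("diff_errors.log"))  # debug file


def report_invalid_hunk(path: str, hunk_text: str) -> None:
    """Surface a hunk that could not be applied, in full, to both sinks."""
    logger.error("Could not apply hunk for %s:\n%s", path, hunk_text)
```

Because both handlers hang off the same logger, the console and the debug file can never disagree about which hunks were dropped.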

Thanks for highlighting this issue, and I apologize for my oversight regarding the hunk output.

@TheoMcCabe
Collaborator Author

Awesome, thanks @similato87

@similato87 similato87 linked a pull request May 2, 2024 that will close this issue