
feat: Add maxResultBuffer property #1657

Merged (3 commits, Dec 30, 2019)

Conversation

adrklos
Contributor

@adrklos adrklos commented Dec 20, 2019

Implementation of a new property, maxResultBuffer, which limits the maximum number of bytes read while reading a result set.
It can be declared in two styles:

  • as a size in bytes (e.g. 100, 150M, 300K, 400G);
  • as a percentage of max heap memory (e.g. 10p, 15pct, 20percent).

The parser also validates that the declared size isn't bigger than allowed; the maximum possible size is 90% of max heap memory.
If the maxResultBuffer property isn't declared, the driver's behavior is unchanged.
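As an illustration only (not the actual PGPropertyMaxResultBufferParser code from this PR; the class and method names below are invented), the two declaration styles might be parsed roughly like this. The 90% cap follows the description above, though whether the real parser caps or rejects oversized values isn't specified here:

```java
// Hypothetical sketch of parsing maxResultBuffer values; illustrative only.
public class MaxResultBufferSketch {

    // Parses a size such as "100", "300K", "150M", "400G" into bytes,
    // or a percentage such as "10p", "15pct", "20percent" into a fraction
    // of the JVM's max heap, then caps the result at 90% of max heap.
    public static long parse(String value) {
        String v = value.trim().toLowerCase();
        long result;
        if (v.endsWith("percent") || v.endsWith("pct") || v.endsWith("p")) {
            // Percentage style: strip the trailing letters, apply to max heap.
            String digits = v.replaceAll("[a-z]+$", "");
            double fraction = Double.parseDouble(digits) / 100.0;
            result = (long) (fraction * Runtime.getRuntime().maxMemory());
        } else {
            // Byte-size style: optional K/M/G multiplier suffix.
            long multiplier = 1;
            char last = v.charAt(v.length() - 1);
            switch (last) {
                case 'k': multiplier = 1024L; break;
                case 'm': multiplier = 1024L * 1024; break;
                case 'g': multiplier = 1024L * 1024 * 1024; break;
                default: break;
            }
            String digits = (multiplier == 1) ? v : v.substring(0, v.length() - 1);
            result = Long.parseLong(digits) * multiplier;
        }
        // Cap at 90% of max heap memory, as described in the PR summary.
        long cap = (long) (0.9 * Runtime.getRuntime().maxMemory());
        return Math.min(result, cap);
    }

    public static void main(String[] args) {
        System.out.println(parse("2K"));   // byte-size style
        System.out.println(parse("10p"));  // percentage style, heap-dependent
    }
}
```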

Commit made at Heimdalldata's request to solve a memory problem when reading result sets.

All Submissions:

  • Have you followed the guidelines in our Contributing document?
  • Have you checked to ensure there aren't other open Pull Requests for the same update/change? (I don't believe there is an existing pull request for this change.)

New Feature Submissions:

  1. Does your submission pass tests?
  2. Does mvn checkstyle:check pass?

Tests were added for the new PGPropertyMaxResultBufferParser. If necessary, tests that exercise the property against a database can also be added, but I would be thankful for information on where those tests should be implemented (I don't yet know the exact structure of this project). I would also be glad for any comments on how my code could be improved.

Add PGPropertyMaxResultBufferParser test cases.
@davecramer
Member

@adrklos Thanks for this. A couple of things:

First, the docs need to be updated as well. It isn't well documented that this file even exists, but it is here:
https://github.com/pgjdbc/pgjdbc/blob/master/docs/documentation/head/connect.md

Secondly, I'm wondering how difficult it would be to actually test a failure with a response that is too large?

@adrklos
Contributor Author

adrklos commented Dec 20, 2019

@davecramer If by "too large" you mean larger than the configured maxResultBuffer, then it should be easy. For example, set maxResultBuffer to 300 (300 bytes) and create a table containing 30 records, each with a string of length 20. If the table is read all at once, the driver has to read around 600 bytes, so it should throw an exception that the number of bytes read exceeded maxResultBuffer. On the other hand, if the result set is read with a fetch size of, say, 10, it should work without problems.
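The scenario described above can be sketched as a small byte-counting guard (hypothetical code; the class and method names are invented for illustration and are not the driver's actual implementation):

```java
import java.sql.SQLException;

// Hypothetical sketch: accumulates bytes read from a result set and
// throws once the configured maxResultBuffer limit is exceeded.
public class ResultBufferGuard {
    private final long maxResultBuffer;
    private long bytesRead;

    public ResultBufferGuard(long maxResultBuffer) {
        this.maxResultBuffer = maxResultBuffer;
    }

    // Called for every chunk of data read from the result set.
    public void add(long bytes) throws SQLException {
        bytesRead += bytes;
        if (maxResultBuffer >= 0 && bytesRead > maxResultBuffer) {
            throw new SQLException("Result set exceeded maxResultBuffer limit ("
                + maxResultBuffer + " bytes)");
        }
    }

    // When fetching in batches, the counter can be reset at batch boundaries,
    // which is why a fetch size of 10 avoids the exception in the scenario above.
    public void reset() {
        bytesRead = 0;
    }
}
```

With maxResultBuffer = 300, reading 30 rows of 20 bytes at once (600 bytes total) trips the guard, while reading in batches of 10 rows (200 bytes each, with a reset between batches) stays under the limit.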

And thanks for the link to the docs; I will update them.

Update the docs for the maxResultBuffer property, and change javadoc comments to describe the new exceptions.
@adrklos
Contributor Author

adrklos commented Dec 30, 2019

@davecramer Is there anything else you would like to have added/changed/removed in this pull request?

After the documentation change, more tests failed in Travis, but it's hard to believe the documentation change caused this (before it, only one Travis test failed, due to a javadoc exception). I suspect that testDuringSendBigTransactionConnectionCloseSlotStatusNotActive and testFastCloses are simply problematic tests that don't always pass (correct me if I'm wrong).

@davecramer
Member

@adrklos no, it's fine. You are correct, the test is flaky. I guess I'll put that on my todo list for January. I've restarted the failed tests.

@adrklos
Contributor Author

adrklos commented Dec 30, 2019

@davecramer Thanks for the answer and for restarting the failed tests, but one of them still failed at testFastCloses, so if we want to see all of them pass, it would be good to restart it again (and maybe again if it fails, but I hope not).
