
Rework locking in DeserializerCache #4456

Merged: 10 commits into FasterXML:2.18, Apr 7, 2024

Conversation

pjfanning (Member)

I'm a little concerned that ReentrantLock is a little bit slower than synchronized, so I have moved some optimistic logic to before we attempt to take the lock, in an effort to avoid needing the lock at all in cases where the value is already cached.

@pjfanning changed the title from "remove synchronized block" to "DeserializationCache: remove synchronized block" on Mar 28, 2024
@pjfanning changed the base branch from 2.18 to 2.17 on March 28, 2024 at 18:28
@cowtowncoder (Member) commented Mar 28, 2024

My concern is that the handling of incomplete deserializers is not just a performance optimization but an actual correctness challenge: deserializer resolution has to support cyclic dependencies, so creating more than one deserializer for a type during construction can lead to linkage across incomplete deserializer instances.
Performance matters less to me than correctness in this case.

Having said that, since this does address parts of #4430, we can proceed with it.

But I do not think we can simply start removing most of the synchronization, given the intricate nature of cyclic-dependency resolution (it would be great if we could reliably test correctness and make the locking more obvious and explicit -- but we are not there at this point).
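For context, the cyclic-dependency case being referred to looks roughly like the hypothetical pair of bean types below (illustrative only, not code from this PR): building a deserializer for Parent requires one for Child, which in turn refers back to the still-incomplete Parent deserializer, so resolution has to hand out partially-constructed deserializer instances and complete them later.

```java
// Hypothetical mutually-referencing types (not part of this PR).
// If two threads each built their own partially-resolved Parent deserializer,
// a Child deserializer could end up linked to an instance that never gets completed.
class Parent {
    public Child child;
}

class Child {
    public Parent parent; // cycle back to Parent
}
```

With a single cached instance per type, both references resolve to the same deserializer once resolution finishes.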

@pjfanning (Member, Author)

> My concern is that the handling of incomplete deserializers is not just a performance optimization but an actual correctness challenge: deserializer resolution has to support cyclic dependencies, so creating more than one deserializer for a type during construction can lead to linkage across incomplete deserializer instances. Performance matters less to me than correctness in this case.

I don't think the change will lead to extra deserializer creations. It just follows this common pattern:

  • optimistic get: if the cache returns a value, return it without locking
  • no cached value: acquire the lock, then check the cache again in case the previous lock owner was looking for the same thing and created it for us
  • still nothing: create the value and cache it

This only really differs from the previous code in the optimistic get; a rough sketch of the pattern is shown below.
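A minimal sketch of that pattern, assuming a ConcurrentHashMap-backed cache and a plain ReentrantLock; class, field, and method names here are illustrative and do not reflect the actual DeserializerCache code:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Function;

// Sketch of the get / lock / re-check / create pattern described above.
public class SimpleDeserializerCache<K, V> {
    private final ConcurrentHashMap<K, V> _cache = new ConcurrentHashMap<>();
    private final ReentrantLock _lock = new ReentrantLock();

    public V findOrCreate(K key, Function<K, V> factory) {
        // 1. Optimistic get: if the value is already cached, no locking at all
        V value = _cache.get(key);
        if (value != null) {
            return value;
        }
        _lock.lock();
        try {
            // 2. Re-check under the lock: the previous lock owner may have
            //    created the same entry while we were waiting
            value = _cache.get(key);
            if (value != null) {
                return value;
            }
            // 3. Still nothing: create and cache while holding the lock
            //    (factory must not return null; ConcurrentHashMap rejects null values)
            value = factory.apply(key);
            _cache.put(key, value);
            return value;
        } finally {
            _lock.unlock();
        }
    }
}
```

The optimistic get means fully-warmed callers never touch the lock, while the re-check under the lock keeps at most one thread creating any given entry.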

@pjfanning (Member, Author)

@oddbjornkvalsund any interest in trying this out? It should be more performant than your version because it doesn't always lock. As @cowtowncoder highlights, the code is complicated enough that this approach might be too aggressive though.

@cowtowncoder (Member)

I think we should do #4430 for 2.17, and then possible follow-up changes like this one in 2.18, to reduce risk.

@pjfanning changed the base branch from 2.17 to 2.18 on March 29, 2024 at 20:50
@pjfanning changed the title from "DeserializationCache: remove synchronized block" to "DeserializationCache: rework lock" on Mar 29, 2024
@pjfanning marked this pull request as a draft on March 29, 2024 at 20:50
@cowtowncoder (Member)

@pjfanning Is this ready for review?

@pjfanning marked this pull request as ready for review on April 6, 2024 at 08:19
@pjfanning (Member, Author)

> @pjfanning Is this ready for review?

@cowtowncoder this is ready for review.

@cowtowncoder changed the title from "DeserializationCache: rework lock" to "Rework locking in DeserializerCache" on Apr 7, 2024
@cowtowncoder merged commit 7c4bd23 into FasterXML:2.18 on Apr 7, 2024
4 checks passed