
Revert breaking changes and add new GenV* interfaces #68

Open

ktnyt wants to merge 2 commits into satori:master from ktnyt:master
Conversation

@ktnyt commented Jan 5, 2018

Based on the discussion I saw in issues #66 and #67, I've monkey-patched the code into what could be a possible reference point for the future interface of the library. To be honest it is a messy and minimal patch, and there must be more discussion on how these API changes should roll out. In the meantime, this patch should provide the interfaces necessary to keep old code alive while promoting the use of the new interface.

Changes made in the commit:

  • Rename the current global namespace NewV* methods to GenV*.
  • Add new NewV* functions, all with a return signature of just UUID, which loop until no error is returned* (see the sketch after the footnote below).
  • Patch tests to comply with the changes above.

*This is just an idea presented in #66 that I personally thought might be good, because the issue addressed in #18 seemed to be non-deterministic behavior (which I could very well be misunderstanding). If errors are persistent across trials (and after reading the test code I get the impression that they might be), panicking might be the only option (although I cannot tell whether the global instance can actually raise the errors caught in the test cases).
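For concreteness, here is a minimal sketch of that split, using V4 as the example. The names follow this PR's description (GenV4 keeps the error-returning signature, NewV4 loops), but the bodies are illustrative, not the actual diff:

```go
package uuid

import "crypto/rand"

// UUID is a 16-byte RFC 4122 UUID.
type UUID [16]byte

// GenV4 stands in for the renamed NewV4: it surfaces the entropy-source
// error to the caller instead of hiding it.
func GenV4() (UUID, error) {
	var u UUID
	if _, err := rand.Read(u[:]); err != nil {
		return UUID{}, err
	}
	u[6] = (u[6] & 0x0f) | 0x40 // set version 4
	u[8] = (u[8] & 0x3f) | 0x80 // set RFC 4122 variant
	return u, nil
}

// NewV4 restores the old UUID-only signature by retrying until GenV4
// succeeds (the looping idea from #66).
func NewV4() UUID {
	for {
		if u, err := GenV4(); err == nil {
			return u
		}
	}
}
```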

Things to discuss:

  • Is there a better solution than having to revert the API changes?
  • If this way of implementing it is feasible, should the methods defined on the Generator interfaces be renamed? (Maybe to GenV*? That may sound redundant, so maybe just plain V*?)
  • Naming conventions in general.
  • How should errors that occur during reliable UUID generation be handled? (Or can they be handled at all?)

@coveralls commented Jan 5, 2018

Coverage Status

Coverage decreased (-1.6%) to 95.894% when pulling 0235389 on ktnyt:master into 36e9d2e on satori:master.


@packrat386

How should errors that occurred during reliable UUID generation be handled? (or can it be handled at all?)

I feel like if the goal is to preserve the old API, then these functions should probably preserve the old behaviour, which is to panic. Obviously panic is a pretty dangerous way to deal with this, and that's why the API is changing. However, if you want to alleviate the pain of the unexpected switch as much as possible, these functions should probably do exactly what the old functions did, because it's possible people were relying on that particular implementation.

Just my two cents.
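For illustration, a minimal sketch of that panic-preserving shim, assuming an error-returning GenV4 like the one sketched in the PR description (it mirrors the old failure mode rather than looping; not the library's actual code):

```go
// NewV4 keeps the pre-break signature and the pre-break failure mode:
// any error from the generator becomes a panic, as in the old functions.
func NewV4() UUID {
	u, err := GenV4()
	if err != nil {
		panic(err)
	}
	return u
}
```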

@cstockton commented Jan 5, 2018

I believe this should be reverted as soon as possible. To do that, we should probably only roll back the prior commits, to remove any points of contention around the API. Once things are working again, I believe no further changes should be made to the API; from a security perspective, a panic is the best thing to do.

My rationale, based on my understanding and prior research (though someone may feel free to correct me):

  • The Go crypto library already contains exactly the safe amount of leniency for obtaining entropy from the system. There are very few failure conditions across supported platforms, and all platforms block until they have read the requested amount of entropy. On Linux, for example, kernels with POSIX getrandom support do not set GRND_NONBLOCK; even if they did, they would just end up with an errno of EAGAIN. All EAGAIN errors are automatically retried, and io.ReadFull ensures the entire buffer is always read if no error occurs.
  • If for some reason getrandom could not obtain sufficient entropy, it should return an error such as EFAULT or something similar. In addition, this may only occur at startup, before the system has performed the initial forward-only state transition to crng readiness. If safe entropy is still unavailable by the time a system has booted, the system is in a dire state not suitable for use.

That said, once a successful read from crypto/rand occurs, I do not believe a future call can fail. And when a call does fail, it is very likely to keep failing: if, well past boot, the entropy pool has not been initialized sufficiently to seed the urandom pool, something is very wrong. So looping forever adds no resiliency in the former case and is likely a busy loop in the latter, which makes the panic the more correct choice here.
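To make the first bullet concrete: crypto/rand.Read fills the whole buffer or returns an error (io.ReadFull semantics), so there is no short read to handle, and per the reasoning above a failure is not a transient condition worth retrying. A tiny self-contained example:

```go
package main

import (
	"crypto/rand"
	"fmt"
)

func main() {
	buf := make([]byte, 16)
	// rand.Read either fills buf completely or returns an error;
	// a nil error guarantees len(buf) bytes of entropy were read.
	if _, err := rand.Read(buf); err != nil {
		// Per the discussion above, retrying here would most likely
		// spin forever, so failing loudly is the safer choice.
		panic(err)
	}
	fmt.Printf("%x\n", buf)
}
```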

@ktnyt (Author) commented Jan 6, 2018

Thanks @packrat386 and @cstockton for providing me with context on the matter. I guess it should just be left to panic then.

@coveralls commented Jan 6, 2018

Coverage Status

Coverage decreased (-1.6%) to 95.894% when pulling 67dd345 on ktnyt:master into 36e9d2e on satori:master.

@cstockton mentioned this pull request Jan 8, 2018
@sambengtson

What's the status on this? Are we not getting the author's attention here?

@packrat386

The author hasn't had any activity since the change 🤷‍♀️

kevinburke added a commit to kevinburke/go.uuid that referenced this pull request Jan 25, 2018
This new function can support the desired feature (return error
instead of panic) without having to break compatibility for existing
clients.

Fixes satori#66.
Fixes satori#67.
Fixes satori#68.
Fixes satori#69.
Fixes satori#70.
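A rough sketch of the shape that commit message describes: the old panicking entry point stays untouched and a separate function exposes the error. The name NewV4Err below is hypothetical, not taken from the commit:

```go
package uuid

import "crypto/rand"

type UUID [16]byte

// NewV4Err (hypothetical name) returns the error instead of panicking,
// which is the new behaviour the commit message describes.
func NewV4Err() (UUID, error) {
	var u UUID
	if _, err := rand.Read(u[:]); err != nil {
		return UUID{}, err
	}
	u[6] = (u[6] & 0x0f) | 0x40 // version 4
	u[8] = (u[8] & 0x3f) | 0x80 // RFC 4122 variant
	return u, nil
}

// NewV4 keeps the old signature and old panic-on-failure behaviour,
// so existing clients are unaffected.
func NewV4() UUID {
	u, err := NewV4Err()
	if err != nil {
		panic(err)
	}
	return u
}
```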