
Avoid double allocation when passing strings via IntoParam #1713

Merged
merged 10 commits into from
Apr 28, 2022

Conversation

AronParker
Contributor

Introduces an iterator-based approach, as discussed in #1712. The current behavior when passing a &str to a function via the IntoParam trait works as follows:

  1. Allocate a vec: Vec<u16> and transcode the UTF-8 contents of s: &str into UTF-16 inside the vec.
  2. Allocate a buffer of vec.len() + 1 elements.
  3. Copy the contents of vec into the newly allocated buffer.
  4. Append the null terminator (\0).

This pull request changes the behavior as follows:

  1. Allocate a buffer of s.len() + 1 elements for s: &str.
  2. Transcode the UTF-8 of s into UTF-16 directly in the newly allocated buffer.
  3. Append the null terminator (\0).

This saves one copy and at least one allocation (likely 3-4 on average). Collecting into a Vec<u16> via OsStr::encode_wide or str::encode_utf16, as is done currently, usually allocates 2-3 times, as described here: rust-lang/rust#96297 (comment)
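The single-allocation flow can be sketched in safe Rust with a Vec<u16> standing in for the raw heap buffer (the function name here is illustrative, not the PR's actual API):

```rust
/// Transcode a &str to null-terminated UTF-16 with a single allocation.
/// `s.len()` (the UTF-8 byte count) is an upper bound on the number of
/// UTF-16 code units, so reserving `s.len() + 1` never reallocates.
fn to_utf16_single_alloc(s: &str) -> Vec<u16> {
    let mut buf = Vec::with_capacity(s.len() + 1);
    buf.extend(s.encode_utf16()); // transcode straight into the buffer
    buf.push(0); // append the null terminator
    buf
}

fn main() {
    let wide = to_utf16_single_alloc("hi");
    assert_eq!(wide, [104, 105, 0]); // 'h', 'i', NUL
}
```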

Fixes #1712

@ghost

ghost commented Apr 24, 2022

CLA assistant check
All CLA requirements met.

@AronParker AronParker changed the title Issue 1712 fix Avoid double allocation when passing strings via IntoParam Apr 24, 2022
@riverar
Collaborator

riverar commented Apr 24, 2022

Might be worth adding a NUL-in-the-middle test ("te\0st\0") to string_param\tests\pwstr.rs to cover that prickly scenario. (Your changes pass that test.)

@riverar riverar requested a review from kennykerr April 24, 2022 16:50
@@ -45,7 +45,7 @@ unsafe impl Abi for PCWSTR {
#[cfg(feature = "alloc")]
impl<'a> IntoParam<'a, PCWSTR> for &str {
fn into_param(self) -> Param<'a, PCWSTR> {
- Param::Boxed(PCWSTR(heap_string(&self.encode_utf16().collect::<alloc::vec::Vec<u16>>())))
+ Param::Boxed(PCWSTR(unsafe { string_from_iter(self.encode_utf16(), self.len()) }))
Contributor

Will self.len() >= self.encode_utf16().count() always hold true? Intuitively it seems like it should, a 3-byte UTF-8 sequence representing two code points could be represented by either two u16 if they combine, or three u16 if they map 1:1. Worst case is that it over-allocates.

Edit: Based on everything I've been able to find, the worst case is over-allocation.

Contributor

@tim-weis tim-weis Apr 24, 2022

I'd been banging my head at this for a while now, too. It seems to hold true, so long as encode_utf16() doesn't do anything crazy, like decomposing code points.

The rationale I came up with is this: Each Unicode code point can be encoded as no more than two UTF-16 code units. For the inequality to hold, we can thus ignore all UTF-8 sequences that consist of two or more code units. That leaves the set of code points that require a single UTF-8 code unit. Those are the 128 ASCII characters, which are all encoded using a single code unit in UTF-16 as well.

In summary:

  • UTF-8 sequences of length 1 map to UTF-16 sequences of length 1
  • UTF-8 sequences of length 2 (or more) map to UTF-16 sequences of length 2 (or less)

While I couldn't find any formal documentation on what encode_utf16() does and doesn't do, the example listed under encode_utf16() at least seems to confirm the reasoning above: assert!(utf16_len <= utf8_len);.
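The reasoning above can also be spot-checked empirically across script classes (a standalone sanity check, not part of the PR):

```rust
// Spot-check the utf16_len <= utf8_len inequality for strings drawn from
// different Unicode ranges (ASCII, Latin-1, CJK, supplementary plane).
fn utf_lens(s: &str) -> (usize, usize) {
    (s.len(), s.encode_utf16().count()) // (UTF-8 bytes, UTF-16 units)
}

fn main() {
    for s in ["hello", "héllo", "→✓", "日本語", "𝄞clef"] {
        let (utf8_len, utf16_len) = utf_lens(s);
        assert!(utf16_len <= utf8_len, "inequality violated for {s}");
    }
}
```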

Still a bit uneasy about all of this. Personally, I'd prefer a string_from_iter() that takes an iterator only, wastes some cycles on count() (ultimately doing the de-/encoding twice), and drops the unsafe along the way.

Thoughts?

Contributor Author

Yes, it will, given three assumptions:

  1. The UTF-8 encoding of a Unicode code point always uses at least as many code units as its UTF-16 encoding. UTF-8 is a variable-width encoding that takes four possible forms:

1 UTF-8 code unit => 1 UTF-16 code unit
2 UTF-8 code units => 1 UTF-16 code unit
3 UTF-8 code units => 1 UTF-16 code unit
4 UTF-8 code units => 2 UTF-16 code units

  2. No (de)normalization takes place, i.e. no decomposing or composing character transformations (e.g. A + ¨: LATIN CAPITAL LETTER A (U+0041) + COMBINING DIAERESIS (U+0308)). There is no indication in the Rust standard library documentation that any such normalization or transformation takes place, and looking at the source code validates that assumption.

  3. The input is valid UTF-8. We might have to look into edge cases around OsStr, since it consists of WTF-8 strings. Even WTF-8 strings uphold this invariant, because an unpaired surrogate is encoded as 3 WTF-8 bytes but only 1 UTF-16 code unit (supplementary code points need 2 UTF-16 code units and 4 UTF-8 bytes).

I strongly believe that this should never fail, but if there is concern we could also safeguard it, either by truncating to the maximum number of UTF-16 code units or by panicking if that were to occur.
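The four forms can be confirmed with char::len_utf8 and char::len_utf16 (a standalone sanity check, not part of the PR):

```rust
// One representative char per UTF-8 length class.
fn lens(c: char) -> (usize, usize) {
    (c.len_utf8(), c.len_utf16()) // (UTF-8 bytes, UTF-16 units)
}

fn main() {
    assert_eq!(lens('A'), (1, 1)); // ASCII
    assert_eq!(lens('é'), (2, 1)); // 2-byte sequence
    assert_eq!(lens('€'), (3, 1)); // 3-byte sequence
    assert_eq!(lens('𝄞'), (4, 2)); // 4-byte sequence, surrogate pair
}
```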

Contributor

Personally, I'd prefer a string_from_iter() that takes an iterator only, wastes some cycles on count() (ultimately doing the de-/encoding twice), and drops the unsafe along the way.

Is there a guarantee that an iterator can be iterated twice?

I doubt there's any de/normalization taking place in encode_utf16(); I'd be willing to bet dozens of dollars on it! Survey says... there's no denormalization, just bit twiddling as expected: https://doc.rust-lang.org/stable/src/core/char/methods.rs.html#1706

Also, the Rust documentation for encode_utf16() seems to indicate that the utf-16 u16 count never exceeds the utf-8 byte count:
https://doc.rust-lang.org/stable/src/core/str/mod.rs.html#998

Contributor Author

Also, the Rust documentation for encode_utf16() seems to indicate that the utf-16 u16 count never exceeds the utf-8 byte count:
https://doc.rust-lang.org/stable/src/core/str/mod.rs.html#998

Nice find. Even though it's just an example I'd say it was intended as a guarantee.

The way I see it there are three ways in which we can resolve this:

  1. For each core::ptr::write we do, we assert that we are within bounds first (panic behaviour)
  2. We run .take(len + 1) on the iterator, which truncates any theoretical superfluous characters.
  3. We rethink our approach with iterators.

To be honest I'm not very happy about this function to begin with. I designed it the way I did in order to remain "backwards compatible" to heap_string. We are currently generic over T with an iterator we can only iterate once.

Why do we need to be generic over T? The only strings that the Windows API uses are UTF-8 and UTF-16 and some day there might be a use case for UTF-32. These have an alignment of 1, 2 and 4 respectively. As we are generic currently, we assert that the alignment of the output buffer matches the alignment of T. However, given that HeapAlloc has an alignment of 8+ and that we only ever use 4 as maximum alignment, that check is fairly redundant.

As it stands, this API has the following inputs: &OsStr and &str. And the following outputs: *const u8 and *const u16. We could create two functions instead: heap_string_narrow, heap_string_wide for *const u8 and *const u16, respectively. This would allow us to iterate over the string any number of times and ensure they are within bounds.

As I was writing this I actually have an even better idea, hold on let me implement it.

@AronParker
Contributor Author

Might be worth adding a NUL-in-the-middle test ("te\0st\0") to string_param\tests\pwstr.rs to cover that prickly scenario. (Your changes pass that test.)

Good idea! Probably good for another pull request though, we should try to solve / validate this implementation first.

@AronParker
Contributor Author

AronParker commented Apr 24, 2022

I just pushed a new version that has the following advantages:

  1. It only iterates once and does not require precomputing the length in advance.
  2. It is memory safe and does not write out of bounds.
  3. It panics should any of our assumptions fail.

There are two more issues to be solved:

  • Get rid of that redundant alignment check that won't be necessary (probably by splitting the functions)
  • Ensure that all iterators we take are fused (meaning, once it returns None, it will forever return None).

My gut instinct tells me that the second point is 99.99% true, but we have to prove it or safeguard it; otherwise the assertion that the iterator has been fully consumed (i.e. returns None) might fire unexpectedly. Unsurprisingly, all three iterators are fused:

For Copied: implements FusedIterator
For EncodeUtf16: implements FusedIterator
For EncodeWide: is fused but doesn't implement FusedIterator (I believe this is an oversight)
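The first two claims can be checked at compile time with a trait bound; EncodeWide is Windows-only, so it isn't covered in this portable sketch (standalone, outside the PR's code):

```rust
use std::iter::FusedIterator;

// Compile-time check that a type implements FusedIterator.
fn require_fused<I: FusedIterator>(_: &I) {}

fn main() {
    require_fused(&"abc".encode_utf16());      // str::EncodeUtf16 is fused
    require_fused(&[1u16, 2].iter().copied()); // Copied<slice::Iter> is fused

    // Behavioral check: a fused iterator stays exhausted after the first None.
    let mut it = "a".encode_utf16();
    assert_eq!(it.next(), Some(97)); // 'a'
    assert_eq!(it.next(), None);
    assert_eq!(it.next(), None);
}
```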

}

// TODO ensure `encoder` is a fused iterator
assert!(encoder.next().is_none(), "encoder returned more characters than expected");
Contributor

Won't this be fused because the chaining of once() is guaranteed to be fused? iter might not be fused, but the implementation will never call next() on it again because the chain moved on to the next one.
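A quick illustration of that Chain behavior (standalone, outside the PR's code):

```rust
fn main() {
    // Chain polls its first iterator until it yields None, then permanently
    // switches to the `once` side, so the first iterator is never polled again.
    let mut it = "hi".encode_utf16().chain(std::iter::once(0u16));
    assert_eq!(it.next(), Some(104)); // 'h'
    assert_eq!(it.next(), Some(105)); // 'i'
    assert_eq!(it.next(), Some(0));   // the appended null terminator
    assert_eq!(it.next(), None);      // Once is fused, so this stays None
}
```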

Contributor Author

Nice catch, I didn't consider that. Once is indeed guaranteed to be fused, so that issue is solved!

Contributor

asserting here seems wrong... I'm not sure we want to panic if the encoder contains more than len. If anything, a debug_assert! might be appropriate (though I think silently ignoring is likely ok).

Collaborator

I'd also like to avoid panics in general. I have customers for whom this is inappropriate. If it means we have to embed this inside PCWSTR and PCSTR directly to avoid the issues with generality, then so be it.

Contributor

Should this panic: string_from_iter("hello world".encode_utf16(), 5)

Collaborator

That should never happen. My concern is that we're trying to harden a function that is only ever used internally so if there's a general concern about the safety of this function then we can either mark it unsafe or just get rid of the function entirely.

@@ -45,7 +45,7 @@ unsafe impl Abi for PCWSTR {
#[cfg(feature = "alloc")]
impl<'a> IntoParam<'a, PCWSTR> for &str {
fn into_param(self) -> Param<'a, PCWSTR> {
- Param::Boxed(PCWSTR(heap_string(&self.encode_utf16().collect::<alloc::vec::Vec<u16>>())))
+ Param::Boxed(PCWSTR(unsafe { string_from_iter(self.encode_utf16(), self.len()) }))
Contributor

What if you just embrace the fact that you know self is a &str and pass in the actual length instead of over-allocating? I don't know how expensive an over-allocation is relative to the recalculation.

string_from_iter(self.encode_utf16(), self.encode_utf16().count())

Contributor Author

We would need to iterate over it twice, which provides some memory savings at the cost of run-time computation.

The amount of memory we waste is 0% if the string consists of ASCII only.
The amount of memory we waste is 50% if the string consists only of 2-byte BMP code points (and up to 66% for 3-byte BMP code points).
The amount of memory we waste is 50% if the string consists of supplementary code points only (these require surrogate pairs in UTF-16 (2 code units) and 4-byte sequences in UTF-8).

Given that most strings are ASCII or mostly ASCII, I believe it is a reasonable choice. When considering memory waste, we have to look at the intention of the IntoParam trait: it is a convenience API.

In heavily resource-constrained Windows environments such as Nano Server, with a minimum requirement of 512 MB, heap allocation is frowned upon, as is the case in most embedded environments. I don't think this convenience API is intended for such environments, and even if it were, the waste for small strings is still comparatively low.

Given the 2 GB memory requirement of Windows, the only case in which string allocation would be worrisome is strings around 512 MB in length. Aside from the fact that it would be ludicrous to keep such huge strings in memory rather than maintaining/converting them yourself for performance reasons, these strings are also only allocated for the duration of the function call; they are freed immediately afterwards.

Given the use case, the small share of PCs with less than 4 GB of memory, and the fact that most strings aren't of ludicrous size, I'd say that over-allocating here is a reasonable choice. The main practical source of supplementary or non-ASCII BMP code points is text in foreign languages, which conveys more entropy per character due to the larger character repertoire and therefore tends to be shorter anyway.

That being said, allocating the optimal length is still a desirable goal. Additionally, reading the memory while decoding the Unicode code points will likely pull it into the processor cache, which should make a second iteration faster. I'll rewrite something later that takes advantage of this.
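The waste figures above can be reproduced with a small helper (illustrative only, not part of the PR):

```rust
// Waste = reserved upper bound (UTF-8 bytes) minus actual UTF-16 units.
fn waste(s: &str) -> usize {
    s.len() - s.encode_utf16().count()
}

fn main() {
    assert_eq!(waste("ascii only"), 0); // pure ASCII: no waste
    assert_eq!(waste("ÄÖÜ"), 3);        // 2-byte BMP chars: 50% waste
    assert_eq!(waste("𝄞𝄞"), 4);         // supplementary plane: 50% waste
}
```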

@AronParker
Contributor Author

AronParker commented Apr 25, 2022

I did some benchmarking comparing over_allocate vs. over_iterate:

[benchmark chart omitted]

over_allocate: uses the current approach.
over_iterate: iterates encode_utf16 twice to get a precise memory requirement, at the expense of the second pass.

In my personal opinion reducing memory usage is not worth it here because

  1. most strings are small
  2. strings created by this trait are ephemeral
  3. It is a convenience function
  4. memory is cheap
  5. Windows, in general, is not "lean" (not recommended for embedded)

@kennykerr
Collaborator

Thanks @AronParker - agree with your analysis. It doesn't seem worthwhile in this case.

@ryancerium
Contributor

Great work @AronParker

@AronParker
Contributor Author

Thanks guys! Thanks to @kennykerr for the iterator idea and once again thanks @ryancerium for your extensive and helpful reviews, it made the code a lot better and robust in the end!

Comment on lines 57 to 67
for i in 0..len {
    // SAFETY: ptr points to an allocation object of size `len`, indices accessed are always lower than `len`
    unsafe {
        core::ptr::write(
            ptr.add(i),
            match encoder.next() {
                Some(encoded) => encoded,
                None => break,
            },
        );
    }
Contributor

Make this a bit clearer perhaps:

for (offset, c) in (0..len).zip(encoder) {
  // SAFETY: ptr points to an allocation object of size `len`, indices accessed are always lower than `len`
  unsafe { core::ptr::write(ptr.add(offset), c); }
}

Contributor Author

This looks much better! Unfortunately putting the encoder into .zip(self) would consume it, which wouldn't allow our assertion afterwards:

assert!(encoder.next().is_none(), "encoder returned more characters than expected");

Your version does look much better and I'd take it in a heartbeat, but we'd need a non-consuming version of zip unfortunately.
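For what it's worth, one non-consuming option (not what the PR ended up doing) is to zip over a mutable borrow, since &mut I also implements Iterator; the original encoder then stays usable after the loop. A standalone sketch:

```rust
fn main() {
    let len = 5;
    let mut encoder = "hello world".encode_utf16();

    let mut written = Vec::new();
    // Zipping a mutable borrow: `encoder` itself is not moved into the Zip,
    // so it remains accessible once the loop finishes.
    for (_offset, c) in (0..len).zip(&mut encoder) {
        written.push(c);
    }

    assert_eq!(written.len(), 5);
    // The leftover units are still observable on the original encoder:
    assert_eq!(encoder.next(), Some(32)); // ' ' from " world"
}
```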

Contributor

It would be unusual, but is it invalid to request encoding fewer characters than there are?

Contributor Author

Technically in my opinion it is impossible, but since there was uncertainty in the reviews about silent truncation I kept it in. It could be possible that somehow an invalid length gets passed in the future, so it'd be helpful to be aware via a panic rather than silent truncation. I'm not hugely opinionated on this issue, we can go for the more elegant .zip at the expense of potential bugs in the encoder or what gets passed to the function.

Contributor

I wasn't worried about truncation, I was worried about writing past the allocated memory with the unsafe ptr.write(). Your outer loop and/or the .zip() makes that a non-issue. No strong opinion on this either, I can see it both ways. We're not going to overwrite anything no matter what.

Contributor

You can store the zipped iterator into a local variable and still do the check afterwards if you want.

Contributor

The Zip iterator instance will return None because the range iterator is finished. You can never get to the original encoder iterator again because it's inside the Zip iterator instance.

@kennykerr
Collaborator

FYI @Lokathor / @rylev

kennykerr
kennykerr previously approved these changes Apr 26, 2022
@kennykerr
Collaborator

Looks like something may have regressed in the nightly Rust compiler that has broken both open PRs (#1713 & #1717). This PR succeeded in building yesterday.

Unfortunately, these tests are only supported on nightly at the moment so we're stuck. 😬

@kennykerr kennykerr mentioned this pull request Apr 26, 2022
@ChrisDenton
Collaborator

Possibly rust-lang/rust#95604

There have been regressions regarding stdcall symbol name mangling.

@kennykerr
Collaborator

Possibly rust-lang/rust#95604

There have been regressions regarding stdcall symbol name mangling.

Thanks Chris! Hey @rylev I noticed you commented on that - could this be related?

@kennykerr
Collaborator

Here's the fix: rust-lang/rust#96444

@kennykerr
Collaborator

@AronParker there's a temporary fix for the build until nightly is fixed if you'd like to merge master.

@AronParker
Contributor Author

@AronParker there's a temporary fix for the build until nightly is fixed if you'd like to merge master.

Alright, seems good! We could apply ryancerium's .zip simplification to make the code more readable, but asserting that the encoder has no more bytes to give makes me sleep a little better at night, especially since a too-small length would produce non-null-terminated strings. I'm quite happy with the way it is right now: it's efficient and robust, and given that it's such a core piece, it should make a lot of cases more efficient. I'd say LGTM and ready to merge with master.

@ryancerium
Contributor

We could guarantee null-termination, but it would still potentially leave characters in the iterator. Honestly, if this is an internal-only function, I'd opt for readable because the crate owns all 3 places it gets called.

// Allocate space for a terminating null
let alloc_len = len + 1;
let ptr = heap_alloc(alloc_len * std::mem::size_of::<T>()).expect("could not allocate string") as *mut T;

let encoder = iter
    .take(len)                               // read up to `len` chars from the iterator
    .chain(core::iter::once(T::default()))   // append the terminating null
    .enumerate();                            // memory offset of each char
for (offset, c) in encoder {
    unsafe { core::ptr::write(ptr.add(offset), c); }
}

@AronParker
Contributor Author

We could guarantee null-termination, but it would still potentially leave characters in the iterator. Honestly, if this is an internal-only function, I'd opt for readable because the crate owns all 3 places it gets called.

In a way we guarantee null-termination already: if the encoder does not fully write the transcoded result, including the null terminator, it panics, which is pretty much the nuclear option that is impossible to sweep under the rug. I'm open to changing it, but I'd still say that silent truncation is worse than a hard error. Under normal circumstances, with the function used correctly, there should never be any issues. But if issues do arise (e.g. someone passes the wrong len), I believe it is better to panic than to silently truncate, because a panic makes the error most noticeable.

But if others share this preference I'm open to changing it of course.

PeterWrighten
PeterWrighten previously approved these changes Apr 27, 2022
@tim-weis
Contributor

I'd opt for readable because the crate owns all 3 places it gets called.

string_from_iter is pub. Even if it were pub(crate) the call sites are still based on the assumption that utf16_len <= utf8_len, for all implementations of encode_wide() and encode_utf16(). Whether this assumption holds is established elsewhere, and assert!-ing that it did seems plausible.

@AronParker
Contributor Author

string_from_iter is pub. Even if it were pub(crate) the call sites are still based on the assumption that utf16_len <= utf8_len, for all implementations of encode_wide() and encode_utf16().

string_from_iter is pub, but it is still effectively private because the heap module is private:

Whether this assumption holds is established elsewhere, and assert!-ing that it did seems plausible.

I agree, I think it's good to have that assertion for that assumption there.

// SAFETY: ptr points to an allocation object of size `len`, indices accessed are always lower than `len`
unsafe {
core::ptr::write(
ptr.add(i),
Contributor

nit: can you use ptr.add(i).write(...) instead?

Contributor Author

Of course! My bad.

Contributor Author

You can store the zipped iterator into a local variable and still do the check afterwards if you want.

Pardon, but how do I do that? The zip iterator consumes both inputs and only yields elements if both iterators have an element, what can I get from the zipped iterator besides None afterwards?

Comment on lines 57 to 67
for i in 0..len {
    // SAFETY: ptr points to an allocation object of size `len`, indices accessed are always lower than `len`
    unsafe {
        core::ptr::write(
            ptr.add(i),
            match encoder.next() {
                Some(encoded) => encoded,
                None => break,
            },
        );
    }
Contributor

You can store the zipped iterator into a local variable and still do the check afterwards if you want.

}

// TODO ensure `encoder` is a fused iterator
assert!(encoder.next().is_none(), "encoder returned more characters than expected");
Contributor

asserting here seems wrong... I'm not sure we want to panic if the encoder contains more than len. If anything, a debug_assert! might be appropriate (though I think silently ignoring is likely ok).

ptr.add(i),
match encoder.next() {
Some(encoded) => encoded,
None => break,
Contributor

It might be useful to do the following here instead of the assert in code a few lines later:

debug_assert!(i == len - 1);

Essentially, while this code is always safe (i.e., we'll never try to write to unallocated memory), if the iterator's length and len don't match, we either allocate too little memory or too much. It seems reasonable to help users of this function avoid that mistake.

Contributor

Do you mean debug_assert!(i < len) and not ==?

Contributor

I meant ==, because I believe we expect that length will always be equal to the number of elements in the iterator.

buffer
/// This function panics if the heap allocation fails, the alignment requirements of 'T' surpass
/// 8 (HeapAlloc's alignment) or if len is less than the number of items in the iterator.
pub fn string_from_iter<I, T>(iter: I, len: usize) -> *const T
Contributor

nit: it always bothered me a bit that we're using the term string here when this function is more general than that. It might be nice to name it in a way that describes more closely what's actually happening.

In fact, it might make sense for this to only copy an iterator and the caller is responsible for adding the trailing null byte. This function would then lose the Default bound and the caller would call it like so:

copy_from_iterator(self.as_bytes().iter().copied().chain(core::iter::once(0)), self.len() + 1);

The caller is a bit more verbose, but it's way clearer what's actually happening.
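A minimal, self-contained sketch of that suggestion, with hypothetical names and a Vec standing in for the raw allocation:

```rust
// Hypothetical `copy_from_iterator`: copies up to `len` items, nothing more.
// The caller is responsible for appending the trailing null itself.
fn copy_from_iterator<T, I: Iterator<Item = T>>(iter: I, len: usize) -> Vec<T> {
    let mut buf = Vec::with_capacity(len);
    buf.extend(iter.take(len));
    buf
}

fn main() {
    let s = "hi";
    // The call site from the comment above, with the explicit null appended:
    let bytes = copy_from_iterator(s.as_bytes().iter().copied().chain(std::iter::once(0)), s.len() + 1);
    assert_eq!(bytes, [104, 105, 0]); // 'h', 'i', NUL
}
```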

@AronParker
Copy link
Contributor Author

I just pushed a commit that incorporates the suggestions from ryancerium and rylev. I've renamed the function to alloc_from_iter, which implies in its name that the result must be freed. At this time there is no check that the iterator has been emptied, but given that it takes a len parameter anyway, this might be desirable behavior too.

ryancerium
ryancerium previously approved these changes Apr 27, 2022
Contributor

@ryancerium ryancerium left a comment

This is a work of art. I applaud your willingness to change tactics after excellent discussion. I'm approving just for the heck of it!

@@ -32,20 +32,32 @@ pub unsafe fn heap_free(ptr: RawPtr) {
}
}

- /// Copy a slice of `T` into a freshly allocated buffer with an additional default `T` at the end.
+ /// Copy len elements of an iterator of type `T` into a freshly allocated buffer with an additional default `T` at the end.
Contributor

No more default T at the end.

Contributor Author

Ah nice catch! Gonna fix it immediately.

@kennykerr kennykerr self-requested a review April 27, 2022 17:29
@AronParker
Copy link
Contributor Author

This is a work of art. I applaud your willingness to change tactics after excellent discussion. I'm approving just for the heck of it!

Thank you very much for your positivity and continuous valuable input. It's been a pleasure to work with you!

@kennykerr kennykerr dismissed stale reviews from ryancerium, PeterWrighten, and themself via 0ad384c April 27, 2022 21:17
Collaborator

@kennykerr kennykerr left a comment

Thanks folks! If there's no more feedback, I'm happy to merge this change.

Contributor

@rylev rylev left a comment

I really like how this turned out! Thanks so much for your patience and contribution here 🧡

@rylev rylev merged commit e518d03 into microsoft:master Apr 28, 2022
@AronParker
Contributor Author

Same here, I love how clean this is! Thanks everyone again for your valuable feedback!

@AronParker AronParker deleted the issue-1712-fix branch April 28, 2022 09:35

Successfully merging this pull request may close these issues.

IntoParam<PCWSTR> for &str unnecessarily allocates twice
8 participants