Document BitDepth::Sixteen encoding #203

How is `data` to be laid out when it is fed to `write_image_data()` and `BitDepth::Sixteen` was set? It is always a `[u8]`, since this function has no variants. My data is obviously `[u16]` for the `BitDepth::Sixteen` case. When I do a raw pointer typecast of my `[u16]` array to `[u8]`, the image I get has strange colors. Could be endianness – I'm on macOS.

Can you document how to use this when the output image is 16 bit/channel? I.e. at least a note in the docs, but preferably a code snippet for an RGBA 16-bit image?
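A minimal sketch of what such a snippet could look like (not from the issue; it assumes a recent version of the `png` crate where the color type variant is spelled `ColorType::Rgba`, and the helper name `write_rgba16` is hypothetical):

```rust
use std::fs::File;
use std::io::BufWriter;

/// Hypothetical helper: write `samples` (width × height × 4 u16s, RGBA order)
/// as a 16-bit-per-channel RGBA PNG.
fn write_rgba16(
    path: &str,
    width: u32,
    height: u32,
    samples: &[u16],
) -> Result<(), Box<dyn std::error::Error>> {
    let file = File::create(path)?;
    let mut encoder = png::Encoder::new(BufWriter::new(file), width, height);
    encoder.set_color(png::ColorType::Rgba);
    encoder.set_depth(png::BitDepth::Sixteen);
    let mut writer = encoder.write_header()?;

    // PNG stores 16-bit samples big-endian, so serialize each u16 explicitly
    // instead of reinterpreting the buffer with a raw pointer cast.
    let bytes: Vec<u8> = samples.iter().flat_map(|&s| s.to_be_bytes()).collect();
    writer.write_image_data(&bytes)?;
    Ok(())
}
```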
Comments
I'm not even convinced it is actually able to write 16-bit RGB images? So I don't think it's just about documenting this?
Well, I switched to another crate. But in general, adding some documentation about the meaning of `data` would be good. If only to prevent users like me, who need to write scene-referred color samples out or need 16-bit support for other reasons, from trying to use this crate for that.
It looks to me like `data` is just the serialized scanlines. That means any questions about the format of `data` are answered by the PNG spec.
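If `data` is just the serialized scanlines, its length is fully determined by the image parameters. A hedged sanity check (hypothetical `width`/`height`/`data` variables) for non-interlaced 16-bit RGBA:

```rust
// For 16-bit RGBA, each scanline is `width` pixels × 4 channels × 2 bytes;
// filter bytes are not included, since the encoder applies filtering itself.
let expected_len = width as usize * height as usize * 4 * 2;
assert_eq!(data.len(), expected_len);
```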
Maybe something like this would be good.

```diff
diff --git a/src/encoder.rs b/src/encoder.rs
index b158ca1..0feb163 100644
--- a/src/encoder.rs
+++ b/src/encoder.rs
@@ -319,6 +319,10 @@ impl<W: Write> Writer<W> {
     }
 
     /// Writes the image data.
+    ///
+    /// `data` contains the serialized scanlines (before filtering is applied).
+    /// See [Scanlines](https://www.w3.org/TR/PNG/#7Scanline) in the PNG spec
+    /// for details.
     pub fn write_image_data(&mut self, data: &[u8]) -> Result<()> {
         const MAX_CHUNK_LEN: u32 = (1u32 << 31) - 1;
```
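To make "serialized scanlines" concrete, here is a small illustrative example (not from the thread) for a single-scanline image:

```rust
// A 2×1 image, 16-bit grayscale, no interlacing: the scanline is just
// the two samples in big-endian byte order, with no filter byte.
let pixels: [u16; 2] = [0x0102, 0x0304];
let data: Vec<u8> = pixels.iter().flat_map(|&p| p.to_be_bytes()).collect();
assert_eq!(data, vec![0x01, 0x02, 0x03, 0x04]);
```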
Yeah, I have since had to use PNG for another crate.
(@virtualritz)
@scurest please feel free to open a PR with that change. Might even be worth specifically saying that 16-bit encoding assumes big endian.
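For reference, big endian means the most significant byte of each 16-bit sample comes first, which is easy to verify:

```rust
// PNG stores multi-byte samples in network byte order (big endian).
assert_eq!(0x1234u16.to_be_bytes(), [0x12, 0x34]);
// A raw cast on a little-endian machine (such as an x86 Mac) yields the
// swapped order instead, which explains the "strange colors" above.
assert_eq!(0x1234u16.to_le_bytes(), [0x34, 0x12]);
```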
I was getting broken output from my program, which was supposed to generate an HSL color spectrum; switching the endianness to big endian fixed that problem (before/after screenshots omitted). Thank you, @scurest!
For those who were stuck like me, just take your `Vec<u16>` and split it into big-endian bytes:

```rust
let u8splitedVec = u16Vec.iter().flat_map(|&x| x.to_be_bytes()).collect::<Vec<u8>>();
```