
Releases: chaiNNer-org/chaiNNer

Alpha v0.14.0

27 Oct 17:02

After a long wait, 0.14.0 is finally here. This update brings a couple of pretty important changes.

One of the biggest changes is improved video support. Many of you have found that the existing video support isn't great: it has limited encoding options, which often fail depending on the machine. This is due to how OpenCV's video support works (or, I guess, doesn't work), which I won't go into detail about here. As of this update, video support has been revamped to use a fully-featured FFmpeg build. This means we can offer many more encoding options than before, giving you a lot more flexibility, and it should now work on pretty much any system. This is a huge improvement that is long overdue.
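As a rough illustration of how an FFmpeg-based frame iterator can work (this is a hypothetical sketch, not chaiNNer's actual implementation), FFmpeg can decode a video to raw frames streamed over a pipe, which the application then slices into fixed-size byte buffers:

```python
import subprocess

def ffmpeg_decode_command(path):
    # Standard ffmpeg CLI flags: decode the input file and write raw
    # BGR24 frames to stdout ("-") instead of to a file.
    return ["ffmpeg", "-i", path, "-f", "rawvideo", "-pix_fmt", "bgr24", "-"]

def iter_frames(path, width, height):
    # Each decoded BGR24 frame is exactly width * height * 3 bytes,
    # so the byte stream can be sliced into individual frames.
    frame_size = width * height * 3
    proc = subprocess.Popen(
        ffmpeg_decode_command(path),
        stdout=subprocess.PIPE,
        stderr=subprocess.DEVNULL,
    )
    try:
        while True:
            buf = proc.stdout.read(frame_size)
            if len(buf) < frame_size:
                break  # end of stream or truncated frame
            yield buf
    finally:
        proc.stdout.close()
        proc.wait()
```

Because FFmpeg handles demuxing and decoding itself, this approach works the same regardless of which codecs the machine's OpenCV build happens to support.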

Another long overdue change is a revamp of how auto-tiling works. For a long time, chaiNNer hid tiling options from users, and just automatically handled it behind the scenes. If you don't know, tiling is something that many upscaling applications implement to get around VRAM limitations when an upscale would use more VRAM than your GPU physically has. We eventually decided to expose manual tiling options to the users, but this still used our internal "split factor" system, leading to the option being called "number of tiles". This was temporary but stuck around longer than we thought it would, and it rightfully confused users.

Thanks to @RunDevelopment's tiling changes, we now have a similar but much better implementation that uses actual tile sizes. This is much more in line with other upscaling applications and is a bit easier to comprehend and plan around. We also have more options in the dropdown, as well as new VRAM estimation for NCNN. You'll see this as "Auto (estimate)" and "Maximum". Picking "Maximum" will function similarly to how it did previously, while "Auto" will function by pre-determining a tile size to use based on available VRAM. PyTorch has been using this method for a while, but now NCNN has it as well. And of course, now we have better options for picking actual tile sizes, meaning you can use the same tile size on multiple images and not have to worry about changing it all the time (if you need to set it manually, that is). We still generally recommend you use the Auto or Maximum modes though.
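To give a sense of how an "Auto (estimate)" mode can pre-determine a tile size, here is a minimal sketch. The function name, the overhead factor, and the per-pixel cost model are all illustrative assumptions, not chaiNNer's actual formula:

```python
def auto_tile_size(free_vram_bytes, scale, channels=3, bytes_per_value=4, overhead=4):
    # Hypothetical heuristic: each input pixel costs memory for itself
    # plus its scale^2 upscaled output pixels, times a safety overhead
    # factor for intermediate activations.
    per_pixel = (1 + scale * scale) * channels * bytes_per_value * overhead
    max_pixels = free_vram_bytes // per_pixel
    # Largest square tile that fits in the budget...
    side = int(max_pixels ** 0.5)
    # ...rounded down to a multiple of 32 for alignment-friendly tiles.
    return max(32, (side // 32) * 32)
```

Picking a fixed tile size up front like this (instead of splitting further only after an out-of-memory error occurs) is what should prevent most out-of-memory failures from ever happening.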

There's also a lot more in this update, so check out the full changelog below.

New Features / Major Changes

  • Proper "Tile Size" setting on upscale nodes (#1056, #1060, #1095, #1112, #1115)
    • This replaces the "Number of Tiles" dropdown
  • VRAM estimation for NCNN (#1068)
    • Similar to how PyTorch has been for a while, Auto mode now pre-estimates the required tile size based on available VRAM instead of auto-splitting.
    • This should hopefully cut down on the number of vkQueueSubmit errors, since we should now theoretically prevent out-of-memory errors from happening.
  • Better video support in Video Frame Iterator (#1103, #1108, #1110, #1113, #1123)
    • We are now using a proper FFMPEG build to handle video iteration. This means we have unlocked far more encoding settings to make available for you to use, plus added stability.
  • Paste image from clipboard directly into chaiNNer (#1102)
  • Compression option for saving an image as JPEG or WEBP (#1126)
  • Added NCNN model preview and type information to the Load Model node, similar to PyTorch (#1124, #1133, #1147)

Other Changes

  • Added Lightness slider to Hue & Saturation node (#1080)
  • Changed how brightness in Brightness & Contrast node works (#1081)
  • Improved the internal Nvidia GPU check (#1083)
  • Changed how Change Colorspace defines "from" and "to" values (#1089, #1138)
  • More blend modes (#1072)
  • Added HSL and CMYK to Change Colorspace (#1068)
  • Double-click a slider to reset its value (#1097)
  • Renamed "Amount" to "Radius" for all blurs (#1098)
  • Slightly improved logging to reduce both spam and user confusion (#1116)
  • Adjusted PyTorch's VRAM estimation values to allow more VRAM use at once (#1120)
  • Improved Text Append default values and made separator optional (#1125)
  • Improved some error messages (#1121, #1134, #1137, #1141)
  • Allow middle-click panning, even over nodes (#1063)

New Nodes

  • Text Padding
    • Add padding to any text or number, mainly for formatting numbers to a specific number of characters (such as converting "2" into "000002")
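The padding behavior the node describes can be sketched in a few lines (the function name and option names here are illustrative, not the node's actual parameters):

```python
def pad_text(value, width, pad_char="0", align="start"):
    # Pads text or a number to a fixed width, e.g. "2" -> "000002",
    # which is handy for zero-padded frame numbers in filenames.
    text = str(value)
    if align == "start":
        return text.rjust(width, pad_char)
    if align == "end":
        return text.ljust(width, pad_char)
    return text.center(width, pad_char)
```

For example, `pad_text(2, 6)` produces `"000002"`, matching the description above.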

Bug Fixes

  • Improved output type for crop content node (#1066)
  • Fixed light theme issue on node timers (#1119)
  • Fixed bugs that allowed improper cross-iterator connections (#1128)
  • Fixed copying grayscale images (#1132)
  • Prevented some crashing when certain node import errors occur (#1135)
  • Fixed image iterator improperly sorting lower/uppercase (#1144, #1145)
  • Made Sharpen consistent with blur nodes (#1150)

PS, we also now have nightly builds, which you can find here. Feel free to try these at any time to test out the newest features ahead of release.

As always, thanks to the main contributors: @joeyballentine, @RunDevelopment, @theflyingzamboni

Alpha v0.13.1

14 Oct 03:13

Sorry for the delay in updates. The initial plan was to release a smaller update soon after 0.13.0, but we ended up delaying it due to issues with some upcoming features we merged. Those features will still come eventually, but for now I figured I'd release this small patch with a few changes and bugfixes.

Changes

  • Node categories will now display an indicator if any nodes in that category are missing, and will let you know there are extra dependencies you need to download. We had something similar before, but this is much more robust. For example, it should be much more obvious now when you are missing the newer torchvision & facexlib dependencies. (#1047)
  • The utility category is now above PyTorch/NCNN/ONNX, at the very bottom of the "built-in" nodes (that do not require extra dependencies), so don't be alarmed if you don't see it in its usual spot. (#1051)
  • Added video name and directory to Video Frame Iterator (#1050)
  • Connections can now be removed with a double click (#1055)

Bug fixes

  • Fixed an issue where some nodes would throw an error if used directly after upscaling a grayscale image (#1070)

Alpha v0.13.0

29 Sep 00:31

Before we begin, you might notice chaiNNer has a new home! This repository now lives in an organization (chaiNNer-org) rather than my personal GitHub account. This means the URL of the repository is now https://github.com/chaiNNer-org/chaiNNer. The old URL should still redirect, but you should update any links you have (in descriptions, tutorials, etc.) just in case.

This update brings a long-awaited addition: GFPGAN support! Luckily, we were able to add this without much hassle. However, it does not support the first v1 GFPGAN model (but it does support v1.2+). This is because GFPGAN v1 requires compiled CUDA extensions, which is not simple to support at this time. The good news is, the later models are much better anyway, so you are better off using those to begin with. GFPGAN support does require installing a new dependency package from the dependency manager, facexlib (as part of the PyTorch package collection), so make sure you do that!

To use GFPGAN with chaiNNer, there is now a new node: Face Upscale. You pass the loaded model into this node, and can optionally pass in an upscaled version of the background as well. This allows you to fully customize the background upscale, unlike the official GFPGAN code which only allows you to upscale with RealESRGAN at a fixed scale.

Speaking of scale, I did add an additional scale option for the GFPGAN output. While GFPGAN internally always does an 8x upscale, the official code as well as existing GUIs have included an output scale option (which just downscales the result). To ease confusion from people expecting this, I just decided to implement this as well. Unlike these other implementations though, the scale can be any number between 1 and 8, instead of just powers of 2. The important thing to keep in mind here is that adjusting the scale DOES NOT make it actually process with lower VRAM or anything like that.
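The arithmetic behind that scale option is simple, and worth spelling out because it explains why lowering the scale doesn't save VRAM. This is a sketch with a hypothetical function name, not chaiNNer's actual code:

```python
def gfpgan_output_size(src_w, src_h, out_scale, internal_scale=8):
    # GFPGAN always upscales 8x internally; the user-facing scale
    # option only downscales that result afterwards, so VRAM use is
    # identical no matter which scale you pick.
    out_scale = min(max(out_scale, 1), internal_scale)  # clamp to 1..8
    # Size of the internal 8x result (this is what costs VRAM):
    up_w, up_h = src_w * internal_scale, src_h * internal_scale
    # Downscale factor applied to reach the requested output scale:
    factor = out_scale / internal_scale
    return round(up_w * factor), round(up_h * factor)
```

Note that any value in the 1 to 8 range works, not just powers of 2: a 100x100 input at scale 3 yields a 300x300 output.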

Another important note: The first time you use GFPGAN, facexlib will automatically download some necessary models to /chaiNNer/python/gfpgan. This only happens for the first use, and depending on your internet speed might take a while. Just let it download and once finished everything should work fine.

For everything else about this release, see this full changelog:

New Features

  • GFPGAN (Face Upscaling/Restoration) support (#999) (#1018)
    • GFPGAN (as well as RestoreFormer) have been added to chaiNNer. Once loaded just like regular models, these models can be used with the Face Upscale node.
  • Dependency Manager improvements (#990)
    • The dependency manager now shows exactly which packages are installed, missing, and out of date.
    • Each dependency now has a description associated with it.
  • TGA saving support (#1033) (thanks @emarron)

New Nodes

  • Pass Through Node (#968)
    • This node simply passes the input value through to the output.
    • This is useful if you want to have one output connection go to multiple inputs, but also want to be able to easily swap what node gives that input.
  • Create Edges Node (#1009)
    • Like a cross between Create Border and Crop (Edges), this node creates a border around the image, but using adjustable numbers for each side.
  • Face Upscale (#999)
    • Used for processing with GFPGAN and RestoreFormer.

Other Changes

  • Removed Discord Rich-Presence (#1038)
    • This will eventually be added back, but for now it has been removed. It has come to our attention that it sometimes causes chaiNNer to not start up, and when it does work it leaks the name of chain files. While we work on resolving these issues, the feature has been temporarily removed.
  • Blend Image optimizations (#989)
    • You should see significantly better performance with this node under certain circumstances.
  • Added timer support for iterators (#1021)
  • Various crop node improvements (#1013)
  • Changed input name for converted ONNX models to allow them to be compatible with VSGAN-tensorrt-docker (#1006)
  • Rename "relative path" to "subdirectory path" to make its use-case more obvious (#1022)

Bug Fixes

  • Drastically improved NCNN performance in iterators by only loading the model once (#1023)
  • Fixed FPS values for videos (via the Video Frame Iterator) getting rounded in the output (#987)
  • Fixed Fill Alpha node (#992)
  • Fixed Canny Edge Detection node (#1010)
  • Fixed image iterator sorting (#1004)
  • Fixed output type of Text Pattern node (#1012)
  • Fixed output type of Crop (Border) node (#1013)
  • Fixed Linux bug due to missing qt environment variable (#1016)
  • Fixed pixelunshuffle ESRGAN models (RealESRGANx2 mainly) erroring due to uneven image sizes (#1017)
  • Show correct install size when only some packages are installed in the dependency manager (#1030)

Thanks to @RunDevelopment and @theflyingzamboni for their various contributions as always.

Alpha v0.12.7

18 Sep 02:54

Another minor release that fixes some bugs and makes a couple of changes.

Bug Fixes

  • Fix ONNX node issues (#970) (#980)
    • Mainly, fixed Convert to ONNX being broken.
  • Lower max PyTorch VRAM amount again (#981)
    • There have been some reports that the increased VRAM usage actually makes things slower overall, so this brings it back down closer to the original value.
  • Clear chaiNNer's internal cache when a node gets cleared (#978)
  • Fixed starting nodes not running after clearing (#974)

Other

  • Improve the update check on startup (#971)
    • From now on, the update checker will give a list of the main changes as well as tell you your current version, rather than only telling you that an update is available.
  • Pre-optimize ONNX models via constant folding on convert from PyTorch (#969)
  • Shorter type error messages (#975)

Alpha v0.12.6

15 Sep 20:45

This hotfix update attempts to fix a few issues reported in the last update. Apologies to anyone affected by these things.

Bug Fixes

  • Fix pasteboard install on M1 macOS causing crash on startup (#963)
    • When we added a new required dependency for the Copy to Clipboard node, we did not realize that the pasteboard package is only built for x64 macOS. I'm working out how to add support for pre-built arm64 wheels, but for now pasteboard just won't try to install on M1. This means Copy to Clipboard won't work on that platform while we work this out.
  • Fix PyTorch's Convert to ONNX node (#962)
    • This node was accidentally not updated after a separate ONNX fix, so connecting the resulting model to any of the ONNX inputs caused an error. This has been resolved.
    • This was discovered to still have an issue. Working on it now.
  • Fixed Shift node output typing (#966) (thanks @RunDevelopment)

Other

  • Better generated error reports (#957) (thanks @RunDevelopment)
    • Generates better error reports when a crash happens in the main process. This is more of a helpful feature for us devs rather than for the end user.
  • Lower VRAM cap slightly (#967)
    • Got a report saying PyTorch was hogging more VRAM than it should have been, so I've decreased the value I increased last update. It's still more than it was in 0.12.4, but hopefully now it's at just the right place for a balance of optimal performance and stability.

Alpha v0.12.5

15 Sep 03:11

This update adds a few improvements as well as some extra features.

One thing I'm not sure about is if I fixed a common NCNN issue. Please let me know if this solves any of your previous issues, if you had any.

Dependency Updates

  • NCNN
    • NCNN now auto-updates if installed, so you no longer have to update it manually. This ensures that any changes I make to the NCNN bindings won't cause chaiNNer to suddenly stop working due to an outdated NCNN.

New Features

  • Add ONNX execution options to settings (#931)
    • This allows you to select a GPU to use for ONNX processing, as well as pick an execution engine. If you have TensorRT set up properly on your system, you can also select TensorRT. This should theoretically give you much faster speeds when doing batch processing (just make sure to put the Load Model node outside the iterator, since TensorRT takes a long time to convert the model to an engine).
  • Reporting all type mismatches (#939) (thanks @RunDevelopment)
    • We will now warn you if nodes that were previously compatible have suddenly become incompatible due to an upstream change, even if no custom error message has been set.
  • "Soft light" blend mode (#941) (thanks @JustNoon)
    • This is a new blend mode in the Blend Images node
  • Show proper error message on integrated python download failure (#949) (thanks @RunDevelopment)

New Nodes

  • Copy To Clipboard (#920) (thanks @Sryvkver)
    • This node allows copying an image, text, or number to the clipboard. You can find it in the utilities section.

Other Changes

  • Instead of attempting to update the required dependencies every startup, it will now do so only when needed. (#934)
  • Increased the max amount of VRAM PyTorch will use before tiling further in auto mode. Should improve performance a little bit more (#940)
  • PyTorch's Convert To NCNN node no longer hides its outputs when ONNX is not installed, and instead warns the user when attempting to run it (#952)

Bug Fixes

  • Fix ONNX nodes reloading on every upscale. Now it loads once in Load Model as it should. (#933)
  • Fixed the "FATAL ERROR!" message some users would get in their logs with NCNN during upscaling. (#947)
  • Potentially fixed other NCNN upscale issues, but need users to confirm for me.
  • Fixed NCNN GPU selector order problem (#948)
  • Improved modulo typing in Math node (#938) (thanks @RunDevelopment)
  • Added pow typing in Math node (#936) (thanks @RunDevelopment)

Alpha v0.12.4

10 Sep 18:40

This is another smaller update that adds a couple of new things and fixes a few bugs.

New Features

  • GPU Selector for PyTorch & NCNN (#919)
    • Long overdue, now you can select what GPU you want to use for PyTorch or NCNN. This is great for NCNN users as now you can have chaiNNer use your dedicated GPU instead of defaulting to your integrated GPU.
    • This is in the "Python" tab in settings, in the PyTorch and NCNN sub-tabs
    • If you have any issues that seem to stem from this change, please let me know.

Bug Fixes

  • Fixed RealESRGAN models failing to load at scales other than 4x (tested with 8x and 2x) (#921)

New Nodes

  • Resize to Side (#910) (thanks @BigBoyBarney)
    • Lets you resize an image conditionally based on its properties

Changes

  • Moved CPU & FP16 settings to the PyTorch sub-tab of the Python tab in settings
  • Added icons to settings tabs (#922)
  • Added modulo operator to Math node (#908)

Alpha v0.12.3

07 Sep 00:46

I accidentally broke PyTorch model loading in the last release, so this is merely a hotfix to fix that.

Bug Fixes

  • Fix PyTorch model loading (#915)

Alpha v0.12.2

06 Sep 21:26

This update fixes a few important bugs (and some other things, of course). One major bug I found was the FP16 processing mode for PyTorch not working correctly. With this update, you should notice a significant performance improvement.

Bug Fixes

  • Fixes FP16 casting not working as expected with PyTorch (#912)
  • Hopefully prevents VRAM out-of-memory errors for PyTorch in "auto" mode (#912)
  • Hopefully fixes the "Failed to fetch" error some users were getting on first launch (#911)

New Features

  • Allow iterators to use the "drag a connection out to the pane" context menu, in the iterator editor zone (#905)
    • This does not include the right-click version of this menu at this time

Changes

  • Error after iteration is finished instead of during, to avoid interrupting batch processing (#901)
  • Clear starting node cache on node deletion to prevent memory leak (#909)
  • Include name of model in load model error (#902)
  • Add note about fp16 models to NCNN's Save Model node's description (#907)

Alpha v0.12.1

05 Sep 16:03

This is a small release that fixes a few issues noticed in v0.12.0 as well as adds a few things from contributors.

Bug Fixes

  • Fix ONNX Interpolate Models description (#894) (thanks @theflyingzamboni)
  • Fix Convert To ONNX and Convert To NCNN requiring PyTorch to have CUDA support if the CPU option was off (#896)
  • Fix ONNX in_nc detection to theoretically allow more unofficially supported ONNX models to work properly (#890) (thanks @theflyingzamboni)
  • Fix caption overflow if the width is too small for text (#899)

New Features