
chore(deps): update dependency transformers to v4.38.0 [security] #210

Open · renovate[bot] wants to merge 1 commit into master from renovate/pypi-transformers-vulnerability
Conversation

renovate[bot]
Contributor

@renovate renovate bot commented Jun 12, 2023

Mend Renovate

This PR contains the following updates:

| Package | Change |
|---|---|
| transformers | `==4.21` -> `==4.38.0` |

GitHub Vulnerability Alerts

CVE-2023-2800

Insecure Temporary File in GitHub repository huggingface/transformers 4.29.2 and prior. A fix is available at commit 80ca92470938bbcc348e2d9cf4734c7c25cb1c43 and has been released as part of version 4.30.0.
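The "insecure temporary file" bug class generally means writing to a predictable path in a shared directory, which another local user can pre-create or symlink. A generic illustration of the pattern and its standard stdlib remedy (not the actual patched transformers code):

```python
import os
import tempfile

# Insecure pattern: a fixed, predictable path in a world-writable directory.
# Another local user could pre-create or symlink this file before we write it.
insecure_path = os.path.join(tempfile.gettempdir(), "model_download.tmp")

# Safer pattern: mkstemp creates the file atomically with O_EXCL and
# mode 0o600, so no other user can race us or read the contents.
fd, safe_path = tempfile.mkstemp(suffix=".tmp")
try:
    with os.fdopen(fd, "wb") as f:
        f.write(b"checkpoint bytes")
    print("wrote", safe_path)
finally:
    os.remove(safe_path)
```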

CVE-2023-7018

Deserialization of Untrusted Data in GitHub repository huggingface/transformers prior to 4.36.

CVE-2023-6730

Deserialization of Untrusted Data in GitHub repository huggingface/transformers prior to 4.36.0.

CVE-2024-3568

The huggingface/transformers library is vulnerable to arbitrary code execution through deserialization of untrusted data within the load_repo_checkpoint() function of the TFPreTrainedModel() class. Attackers can execute arbitrary code and commands by crafting a malicious serialized payload, exploiting the use of pickle.load() on data from potentially untrusted sources. This vulnerability allows for remote code execution (RCE) by deceiving victims into loading a seemingly harmless checkpoint during a normal training process, thereby enabling attackers to execute arbitrary code on the targeted machine.
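The root issue is that pickle.load() executes code embedded in the payload: any object can define __reduce__ to make unpickling call an arbitrary function. A self-contained demonstration of the mechanism (nothing transformers-specific; print stands in for something like os.system):

```python
import pickle

class Malicious:
    # __reduce__ tells pickle how to reconstruct the object; an attacker
    # can make it call any importable function with any arguments on load.
    def __reduce__(self):
        # Stand-in for os.system(...): this callable runs at load time.
        return (print, ("arbitrary code ran during unpickling",))

payload = pickle.dumps(Malicious())
pickle.loads(payload)  # the print executes; no Malicious object is ever built
```

This is why checkpoint formats that carry no executable code (e.g. safetensors, or torch.load with weights_only=True on recent PyTorch) are preferred for untrusted sources.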


Release Notes

huggingface/transformers (transformers)

v4.38.0: v4.38: Gemma, Depth Anything, Stable LM; Static Cache, HF Quantizer, AQLM

Compare Source

New model additions
💎 Gemma 💎

Gemma is a new open-source language model series from Google that comes in 2B and 7B variants. The release includes both pre-trained and instruction fine-tuned versions, and you can use them via the AutoModelForCausalLM, GemmaForCausalLM, or pipeline interfaces!

Read more about it in the Gemma release blogpost: https://hf.co/blog/gemma

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto", torch_dtype=torch.float16)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)

You can use the model with Flash Attention, SDPA, the static cache, and the quantization API for further optimizations!

  • Flash Attention 2

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")

model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b", device_map="auto", torch_dtype=torch.float16, attn_implementation="flash_attention_2"
)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)

  • bitsandbytes-4bit

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")

model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b", device_map="auto", load_in_4bit=True
)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)

  • Static Cache

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")

model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b", device_map="auto"
)

model.generation_config.cache_implementation = "static"

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
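Of the options listed above, SDPA has no snippet; in transformers you would request it with attn_implementation="sdpa", analogous to the Flash Attention call. The computation that the fused SDPA kernel implements is plain scaled dot-product attention, softmax(QK^T / sqrt(d)) V; a minimal pure-Python sketch of that math (toy matrices, purely illustrative):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def sdpa(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = len(Q[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
print(sdpa(Q, K, V))  # one output row: a weighted mix of the two value rows
```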
Depth Anything Model

The Depth Anything model was proposed in Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data by Lihe Yang, Bingyi Kang, Zilong Huang, Xiaogang Xu, Jiashi Feng, Hengshuang Zhao. Depth Anything is based on the DPT architecture, trained on ~62 million images, obtaining state-of-the-art results for both relative and absolute depth estimation.


Stable LM

StableLM 3B 4E1T was proposed in StableLM 3B 4E1T: Technical Report by Stability AI and is the first model in a series of multi-epoch pre-trained language models.

StableLM 3B 4E1T is a decoder-only base language model pre-trained on 1 trillion tokens of diverse English and code datasets for four epochs. The model architecture is transformer-based with partial Rotary Position Embeddings, SwiGLU activation, LayerNorm, etc.
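As a reference for the activation named above: SwiGLU gates one linear projection of the input with a SiLU (Swish) of another. A minimal sketch of the math, with toy 2x2 weights that are purely illustrative (not StableLM's actual parameters):

```python
import math

def silu(x):
    # SiLU / Swish: x * sigmoid(x)
    return x / (1.0 + math.exp(-x))

def swiglu(x, W, V):
    """SwiGLU(x) = SiLU(W x) * (V x), elementwise over the hidden dim."""
    wx = [sum(wi * xi for wi, xi in zip(row, x)) for row in W]
    vx = [sum(vi * xi for vi, xi in zip(row, x)) for row in V]
    return [silu(a) * b for a, b in zip(wx, vx)]

# Toy projections; in the real block these are learned weight matrices.
W = [[1.0, 0.0], [0.0, 1.0]]
V = [[0.5, 0.5], [1.0, -1.0]]
print(swiglu([2.0, -1.0], W, V))
```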

The team also provides StableLM Zephyr 3B, an instruction fine-tuned version of the model that can be used for chat-based applications.

⚡️ Static cache was introduced in the following PRs ⚡️

A static past key-value cache allows LlamaForCausalLM's forward pass to be compiled with torch.compile!
This means (CUDA) graphs can be used for inference, which speeds up the decoding step by 4x!
With this, a forward pass of Llama 2 7B runs in around 10.5 ms on an A100, on par with TGI performance! ⚡️

⚠️ Support for generate is not included yet. This feature is experimental and subject to changes in subsequent releases.

from transformers import AutoTokenizer, AutoModelForCausalLM, StaticCache
import torch
import os

# compilation triggers multiprocessing
os.environ["TOKENIZERS_PARALLELISM"] = "true"

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    device_map="auto",
    torch_dtype=torch.float16
)

# set up the static cache in advance of using the model
model._setup_cache(StaticCache, max_batch_size=1, max_cache_len=128)

# trigger compilation!
compiled_model = torch.compile(model, mode="reduce-overhead", fullgraph=True)

# run the model as usual
input_text = "A few facts about the universe: "
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda").input_ids
model_outputs = compiled_model(input_ids)
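Since generate is not wired up to the static cache yet, decoding means calling the compiled forward pass in your own loop. A toy greedy-decoding loop showing the shape of that loop; toy_forward is a stand-in for the model (in real code you would read compiled_model(ids).logits for the last position):

```python
def toy_forward(token_ids):
    # Stand-in for the compiled model: returns "logits" over a 4-symbol
    # vocabulary, always favoring (last_token + 1) mod 4.
    last = token_ids[-1]
    return [1.0 if v == (last + 1) % 4 else 0.0 for v in range(4)]

def greedy_decode(prompt_ids, max_new_tokens, eos_id=None):
    ids = list(prompt_ids)
    for _ in range(max_new_tokens):
        logits = toy_forward(ids)  # real code: last-position logits
        next_id = max(range(len(logits)), key=logits.__getitem__)
        ids.append(next_id)
        if next_id == eos_id:
            break
    return ids

print(greedy_decode([0], 5))  # [0, 1, 2, 3, 0, 1]
```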
Quantization
🧼 HF Quantizer 🧼

HfQuantizer makes it easy for quantization method researchers and developers to add inference and / or quantization support in 🤗 transformers. If you are interested in adding the support for new methods, please refer to this documentation page: https://huggingface.co/docs/transformers/main/en/hf_quantizer
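The exact HfQuantizer interface is in the linked documentation; the general plugin shape, a registry that dispatches a quantization config to a handler class, can be sketched as follows. All names here are illustrative, not the real transformers API:

```python
QUANTIZER_REGISTRY = {}

def register_quantizer(name):
    # Decorator mapping a quantization-method name to its handler class.
    def wrap(cls):
        QUANTIZER_REGISTRY[name] = cls
        return cls
    return wrap

@register_quantizer("4bit-toy")
class ToyFourBitQuantizer:
    def __init__(self, config):
        self.config = config

    def quantize(self, weights):
        # Map weights in [-1, 1] onto 16 integer levels; purely illustrative.
        return [round((w + 1) / 2 * 15) for w in weights]

def get_quantizer(config):
    # Dispatch on the method named in the config, like a quantizer factory.
    return QUANTIZER_REGISTRY[config["method"]](config)

q = get_quantizer({"method": "4bit-toy"})
print(q.quantize([-1.0, 0.0, 1.0]))  # [0, 8, 15]
```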

⚡️AQLM ⚡️

AQLM is a new quantization method that achieves 2-bit precision with no performance degradation. Check out this demo of running Mixtral in 2-bit on a free-tier Google Colab instance: https://huggingface.co/posts/ybelkada/434200761252287
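AQLM is a codebook-based (additive quantization) scheme. A stripped-down illustration of the codebook idea at 2 bits, where each weight is stored as an index into a 4-entry codebook; this is the storage principle only, not the actual AQLM algorithm:

```python
def quantize_2bit(weights, codebook):
    # Each weight becomes the 2-bit index of its nearest codebook entry.
    return [min(range(len(codebook)), key=lambda i: abs(codebook[i] - w))
            for w in weights]

def dequantize(indices, codebook):
    # Reconstruction: look each index back up in the codebook.
    return [codebook[i] for i in indices]

codebook = [-0.75, -0.25, 0.25, 0.75]  # 4 entries = 2 bits per weight
w = [-0.8, 0.1, 0.6, -0.2]
idx = quantize_2bit(w, codebook)
print(idx, dequantize(idx, codebook))
```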

🧼 Moving canonical repositories 🧼

The canonical repositories on the Hugging Face Hub (models that did not belong to an organization, like bert-base-cased) have been moved under organizations.

You can find the entire list of models moved here: https://huggingface.co/collections/julien-c/canonical-models-65ae66e29d5b422218567567

Redirections have been set up so that your code keeps working even if you continue using the previous paths. We still encourage you to update your code to use the new links so that it is entirely future-proof.
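In practice this means both IDs resolve to the same model; for example, bert-base-cased now lives at google-bert/bert-base-cased (per the linked collection), and the Hub redirect keeps the old ID working. A toy resolver showing the idea (the mapping dict here holds one real example; the full list is in the collection):

```python
# Old canonical ID -> new organization-scoped ID.
CANONICAL_MOVES = {"bert-base-cased": "google-bert/bert-base-cased"}

def resolve_repo_id(repo_id):
    # Mirrors the Hub redirect: legacy IDs map to their new home,
    # already-scoped IDs pass through unchanged.
    return CANONICAL_MOVES.get(repo_id, repo_id)

print(resolve_repo_id("bert-base-cased"))  # google-bert/bert-base-cased
print(resolve_repo_id("google/gemma-2b"))  # unchanged
```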

Flax Improvements 🚀

The Mistral model was added to the library in Flax.

TensorFlow Improvements 🚀

With Keras 3 becoming the standard version of Keras in TensorFlow 2.16, we've made some internal changes to maintain compatibility. We now have full compatibility with TF 2.16 as long as the tf-keras compatibility package is installed. We've also taken the opportunity to do some cleanup - in particular, the objects like BatchEncoding that are returned by our tokenizers and processors can now be directly passed to Keras methods like model.fit(), which should simplify a lot of code and eliminate a long-standing source of annoyances.

Pre-Trained backbone weights 🚀

Enables loading a pretrained backbone into a new model in which all other weights are randomly initialized. Note: validation checks are still in place when creating a config, so passing use_pretrained_backbone=True at config creation will raise an error. You can override this by setting config.use_pretrained_backbone = True after creating the config. However, this is not yet guaranteed to be fully backwards compatible.

from transformers import MaskFormerConfig, MaskFormerModel

config = MaskFormerConfig(
    use_pretrained_backbone=False,
    backbone="microsoft/resnet-18"
)
config.use_pretrained_backbone = True

# Both models have resnet-18 backbone weights and all other weights
# randomly initialized
model_1 = MaskFormerModel(config)
model_2 = MaskFormerModel(config)

Introduces a helper function load_backbone that loads a backbone either from a backbone model config (e.g. ResNetConfig) or from a model config that contains backbone information. This enables cleaner modeling files and cross-loading between timm and transformers backbones.

from transformers import ResNetConfig, MaskFormerConfig
from transformers.utils.backbone_utils import load_backbone

# ResNet config defines the backbone model to load
config = ResNetConfig()
backbone = load_backbone(config)

# MaskFormer config defines a model which uses a resnet backbone
config = MaskFormerConfig(use_timm_backbone=True, backbone="resnet18")
backbone = load_backbone(config)

config = MaskFormerConfig(backbone_config=ResNetConfig())
backbone = load_backbone(config)

API references were added, supported backbones listed, examples updated, and the documentation clarified and reorganized to better reflect usage.

Image Processor work 🚀
Bugfixes and improvements 🚀
Significant community contributions

The following contributors have made significant changes to the library over the last release:

v4.37.2: Patch release v4.37.2

Compare Source

Selection of fixes

  • Protect the imports for SigLIP's tokenizer if sentencepiece isn't installed
  • Fix a permissions issue on Windows machines when using the Trainer in a multi-node setup

Configuration

📅 Schedule: Branch creation - "" in timezone Europe/Paris, Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by Mend Renovate. View repository job log here.

@renovate renovate bot changed the title chore(deps): update dependency transformers to v4.30.0 [security] chore(deps): update dependency transformers to v4.21.3 [security] Jun 13, 2023
@renovate renovate bot force-pushed the renovate/pypi-transformers-vulnerability branch 2 times, most recently from 63deab6 to a1d773c Compare June 13, 2023 18:10
(Further title changes and force-pushes by the bot, through Jun 4, 2024, omitted.)