SkyPilot: Run LLMs, AI, and Batch jobs on any cloud. Get maximum savings, highest GPU availability, and managed execution—all with a simple interface.
JetStream is a throughput- and memory-optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in the future -- PRs welcome).
PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference
Testing framework for deep learning models (TensorFlow and PyTorch) on Google Cloud hardware accelerators (TPU and GPU)
Differentiable Fluid Dynamics Package
Benchmarking suite to evaluate 🤖 robotics computing performance. Vendor-neutral. ⚪Grey-box and ⚫Black-box approaches.
Everything we actually know about the Apple Neural Engine (ANE)
Boost hardware utilization for ML training workloads via Inter-model Horizontal Fusion
Artificial Intelligence
DECIMER: Deep Learning for Chemical Image Recognition using Efficient-Net V2 + Transformer
Google Coral TPU DKMS Driver package for Fedora, RHEL, OpenSUSE, and OpenMandriva
Solana TpuClient Typescript Implementation
Jax/Flax implementation of DeiT and DeiT-III (ViT)
Automated KRAI X workflows for Google Cloud Platform
Code repository for the Korean edition of the book Deep Learning with Python, Second Edition (<케라스 창시자에게 배우는 딥러닝 2판>)