Gemini Model
Select the model that will generate the analysis.
Compute in 32-bit precision (caution ⚠️)
Consider enabling caching for speed
Consider 8-bit/4-bit quantization
Model is compatible with torch.compile
Model and hardware support FP8 precision
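For reference, a minimal sketch of what these recommendations usually map to in PyTorch / Transformers; the model id, dtype, and flags below are illustrative assumptions, not the app's own code:

```python
# A minimal sketch, assuming a Transformers causal-LM checkpoint on a CUDA GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "example-org/example-model"  # hypothetical checkpoint

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 16-bit weights; 32-bit FP32 roughly doubles memory use
    device_map="auto",
    # quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # optional 8-bit/4-bit loading (requires bitsandbytes)
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# torch.compile fuses and optimizes kernels for faster forward passes
# on models that support it.
model = torch.compile(model)

# KV caching (use_cache=True, the default) reuses past attention states
# so each new token is generated without recomputing the full prefix.
inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32, use_cache=True)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

FP8 precision additionally requires hardware support (for example, recent data-center GPUs with FP8 kernels) and a library that exposes it, so it is only suggested when both the model and the hardware allow it.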