Gemini

Configure Google Gemini as an LLM provider in agentgateway.

Configuration

Review the following example configuration.
# yaml-language-server: $schema=https://agentgateway.dev/schema/config
binds:
- port: 3000
  listeners:
  - routes:
    - backends:
      - ai:
          name: gemini
          provider:
            gemini:
              # Optional; overrides the model in requests
              model: gemini-1.5-flash
      policies:
        backendAuth:
          key: "$GEMINI_API_KEY"
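
As a sketch of how you might run agentgateway with this configuration: the `-f` flag for passing the config file and the config filename are assumptions for illustration; check `agentgateway --help` for the exact options in your version.

```shell
# Export the API key that the config references as $GEMINI_API_KEY,
# then start agentgateway with the example configuration.
export GEMINI_API_KEY="your-api-key"   # placeholder; use your real key
agentgateway -f config.yaml            # -f flag is an assumption; verify with --help
```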
The following settings are used in this configuration.

ai.name: The name of the LLM provider for this AI backend.
ai.provider.gemini.model: Optionally set the model to use for requests. If set, this model overrides any model in the request. If not set, each request must include the model to use.
backendAuth: Gemini uses API keys for authentication. You can optionally configure a policy that attaches an API key to outgoing requests to authenticate to the LLM provider. If you do not configure an API key, each request must include a valid API key.
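
To illustrate the model-override behavior described above, the following hedged example sends a chat request through the gateway. It assumes the gateway is running locally on port 3000 and that the route exposes an OpenAI-compatible chat completions path; verify the exact path for your route configuration.

```shell
# Example request through the gateway (path is an assumption).
# If ai.provider.gemini.model is set in the config, it overrides the
# "model" field below; otherwise this field is required.
curl http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "gemini-1.5-flash",
        "messages": [{"role": "user", "content": "Say hello."}]
      }'
```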