Integration

Using MetaMemory with Google Gemini

Google Gemini brings the full weight of Google's research infrastructure to embedding generation, and MetaMemory integrates natively with the Gemini embedding API. The gemini-embedding-001 model excels at multilingual understanding: it handles over 100 languages with consistent quality, making it a strong choice for international deployments where memories span multiple languages.

Gemini embeddings benefit from Google's massive pre-training corpus, which gives them particularly strong performance on technical and scientific content. If your agents interact with users in domains like medicine, law, or engineering, Gemini embeddings often capture domain-specific nuances that other models miss.

MetaMemory's integration with Gemini includes automatic task-type selection: the system sets the appropriate task type (retrieval_document for storage, retrieval_query for search) so your embeddings are optimized for their intended use without any manual configuration. The Gemini API also offers generous rate limits and competitive per-token pricing, making it cost-effective at scale. MetaMemory handles the specifics of the Gemini embedding format, including dimension handling and normalization, so switching from another provider is seamless. For teams already invested in Google Cloud infrastructure, Gemini is a natural fit that keeps your entire stack within one ecosystem while giving MetaMemory the high-quality vectors it needs.
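For reference, the underlying Gemini API request that MetaMemory makes on your behalf looks roughly like the sketch below. This targets Google's public embedContent endpoint directly (the Gemini API itself spells task types in upper case, e.g. RETRIEVAL_DOCUMENT); the exact request MetaMemory issues internally may differ:

```shell
# Sketch: a document-storage embedding request sent straight to the Gemini API.
# MetaMemory performs the equivalent call for you, switching the task type
# to RETRIEVAL_QUERY at search time.
curl -X POST \
  "https://generativelanguage.googleapis.com/v1beta/models/gemini-embedding-001:embedContent" \
  -H "x-goog-api-key: YOUR_GOOGLE_AI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "content": { "parts": [{ "text": "Patient reported improvement after dose adjustment." }] },
    "taskType": "RETRIEVAL_DOCUMENT"
  }'
```

You never need to issue this call yourself; it is shown only to illustrate what automatic task-type selection configures under the hood.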

Setup Guide

1

Create a Google AI API Key

Visit ai.google.dev and sign in with your Google account. Navigate to the API section and click "Get API Key" to generate a new key for the Gemini API. You can create a key directly or link it to an existing Google Cloud project for centralized billing and access management. Copy the generated key and store it securely. Ensure that the Generative Language API is enabled in your project settings.

2

Configure Gemini in MetaMemory

In your MetaMemory dashboard, navigate to Settings, then Provider Keys, and select "Google Gemini" as the provider. Paste your API key and confirm gemini-embedding-001 as the default model. MetaMemory automatically configures the correct task types for embedding generation and retrieval, so you do not need to set these manually. The system validates your key with a test request before saving.

3

Store and Retrieve Your First Memory

Send your first memory to MetaMemory using the API. The system routes your text through the Gemini embedding endpoint, generates all four vector types (semantic, emotional, process, context), and indexes the result for retrieval. Test the integration by storing a paragraph in any language and querying for it — Gemini's multilingual strength means your query can even be in a different language than the stored memory and still return relevant results.
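As a sketch of this step, assuming hypothetical /v1/memories and /v1/search endpoint paths (this page only documents /v1/providers; check the MetaMemory API reference for the actual routes):

```shell
# NOTE: the /v1/memories and /v1/search paths below are illustrative
# placeholders -- consult the MetaMemory API reference for the real routes.

# Store a memory (in Spanish, to exercise multilingual retrieval):
curl -X POST https://api.metamemory.tech/v1/memories \
  -H "Authorization: Bearer YOUR_METAMEMORY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{ "text": "El usuario prefiere informes semanales los lunes por la mañana." }'

# Query it in English -- Gemini embeddings can match across languages:
curl -X POST https://api.metamemory.tech/v1/search \
  -H "Authorization: Bearer YOUR_METAMEMORY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{ "query": "When does the user want their reports?" }'
```

If the Spanish memory comes back as a relevant result for the English query, the cross-lingual retrieval path is working end to end.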

Configuration Example

curl -X POST https://api.metamemory.tech/v1/providers \
  -H "Authorization: Bearer YOUR_METAMEMORY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "provider": "google-gemini",
    "api_key": "YOUR_GOOGLE_AI_API_KEY",
    "default_model": "gemini-embedding-001",
    "settings": {
      "task_type": "retrieval_document"
    }
  }'

Supported Models

gemini-embedding-001 (Default)

Capabilities

Embeddings, LLM

Ready to use Google Gemini with MetaMemory?

Get started in minutes. Connect your Google Gemini API key and give your agents persistent, intelligent memory.

Your agents deserve to remember

Bring your own AI keys. Integrate in minutes. Your data stays yours.