---
name: mem0-polardbx
version: 1.0.0
description: |
  Persistent memory for OpenClaw agents backed by PolarDB-X Standard Edition.
  Use when users say:
  - "install mem0-polardbx"
  - "setup polardbx memory"
  - "用 PolarDB-X 做记忆存储"
  - "配置 mem0 + PolarDB-X"
  - "openclaw memory polardbx"
  - "memory not working" (when polardbx-mem0 is installed)
  - "记忆插件"
  - "向量存储"
keywords:
  - mem0
  - polardbx
  - vector store
  - persistent memory
  - agent memory
  - openclaw memory
  - 向量搜索
  - 长期记忆
metadata:
  author: polardbx
  openclaw:
    emoji: "🐬"
---

# mem0-polardbx

mem0-polardbx provides OpenClaw with persistent memory backed by **PolarDB-X Standard Edition**. It uses the open-source edition of mem0 as the memory engine, PolarDB-X native VECTOR columns with HNSW indexes as the vector store backend, and DashScope text-embedding-v3 as the embedding service.

Unlike managed memory services, mem0-polardbx keeps your data fully under your control: all memories live in your own PolarDB-X instance.

## Trigger phrases

Use this skill when the user expresses intent to set up, configure, or troubleshoot memory backed by PolarDB-X. Common triggers:

- "install mem0-polardbx" / "安装 mem0"
- "setup polardbx memory" / "配置 PolarDB-X 记忆"
- "用 PolarDB-X 做向量存储"
- "openclaw 长期记忆"
- "memory capture failed" / "记忆不工作"

## When to use this skill

- User wants OpenClaw to persist memories across sessions using PolarDB-X
- User wants a self-hosted / data-sovereign memory solution (not a third-party cloud)
- User already has or wants to create a PolarDB-X Standard Edition instance
- User mentions mem0 open-source mode with PolarDB-X

## When NOT to use this skill

- User wants a managed memory cloud (suggest the hosted Mem0 platform instead)
- User wants to use Qdrant, Redis, or pgvector as vector store (use the base polardbx-mem0 plugin directly)
- Temporary conversation context that only matters in the current session

## What You Get

| Capability | Description |
| --- | --- |
| **autoCapture** | Automatically extract and store memories from every conversation |
| **autoRecall** | Automatically inject relevant memories before each LLM call |
| **Vector Search** | PolarDB-X native HNSW index with VEC_DISTANCE for ANN search |
| **Full Control** | All data in your own PolarDB-X instance, no third-party dependency |
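
For reference, the recall path boils down to an ANN search over the `mem0_vectors` table, roughly like the query below. This is a sketch: the column names match this plugin's schema, but check your PolarDB-X version's documentation for the exact `VEC_DISTANCE` signature.

```sql
-- Find the 5 stored memories closest to a query embedding (sketch).
-- The literal vector is truncated for readability; a real query passes
-- all 1024 dimensions produced by text-embedding-v3.
SELECT id, payload
FROM mem0_vectors
ORDER BY VEC_DISTANCE(`vector`, '[0.12, -0.03, ...]')
LIMIT 5;
```

The HNSW index (`vec_idx` in the Troubleshooting section) is what makes this `ORDER BY ... LIMIT` an approximate index scan rather than a full-table distance computation.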

## Definition of Done

This task is NOT complete until ALL of the following are true:

1. PolarDB-X instance is reachable and database `mem0_openclaw` exists
2. DashScope API key is verified (supports embeddings endpoint)
3. Plugin `@polardb/polardbx-mem0` is installed
4. `openclaw.json` is configured correctly
5. `pg` module ESM patch is applied (Node.js 24+)
6. OpenClaw gateway restarted without errors
7. A test message triggers successful capture (verified in DB)
8. Agent sends the structured handoff (Step 9)

---

## Onboarding

### Step 0 — Check prerequisites

`[AGENT]` Ask the user:

> Which setup do you want?
>
> 1. **Quick start** — Create a free PolarDB-X instance automatically (via zero.polardbx.com)
> 2. **Existing instance** — I already have a PolarDB-X Standard Edition instance

```bash
# Variables to collect
PXD_HOST=""
PXD_PORT="3306"
PXD_USER=""
PXD_PASS=""
PXD_DB="mem0_openclaw"
DASHSCOPE_KEY=""
```

### Step 1 — Create PolarDB-X instance

> Skip if user chose "Existing instance" in Step 0.

`[AGENT]`

```bash
PXD_RESPONSE=$(curl -sX POST https://zero.polardbx.com/api/v1/instances \
  -H 'Content-Type: application/json' \
  -d '{"tag":"mem0-polardbx","ttlMinutes":43200}')

echo "$PXD_RESPONSE" | jq .

PXD_HOST=$(echo "$PXD_RESPONSE" | jq -r '.connection.host')
PXD_PORT=$(echo "$PXD_RESPONSE" | jq -r '.connection.port')
PXD_USER=$(echo "$PXD_RESPONSE" | jq -r '.connection.username')
PXD_PASS=$(echo "$PXD_RESPONSE" | jq -r '.connection.password')
```

> Free instance limits: max 2 active instances per IP, TTL max 30 days (43200 minutes).

Verify connectivity:

```bash
mysql -h "$PXD_HOST" -P "$PXD_PORT" -u "$PXD_USER" -p"$PXD_PASS" -e "SELECT VERSION();" \
  && echo "OK" || echo "UNREACHABLE"
```

### Step 2 — Obtain DashScope API key

`[AGENT]` **Priority: extract from existing openclaw.json first.**

```bash
# Scan openclaw.json for DashScope-compatible API keys
DASHSCOPE_KEY=$(jq -r '
  [.models.providers // {} | to_entries[] |
   select(.value.baseUrl // "" | test("dashscope")) |
   .value.apiKey // empty] |
  first // empty
' openclaw.json 2>/dev/null)

# Validate: must be standard sk- prefix (not sk-sp- which is a service-provider key)
if [ -n "$DASHSCOPE_KEY" ] && echo "$DASHSCOPE_KEY" | grep -qE '^sk-[a-f0-9]{32}$'; then
  echo "Found DashScope API key from openclaw.json: ${DASHSCOPE_KEY:0:8}..."
else
  DASHSCOPE_KEY=""
  echo "No usable DashScope API key found in openclaw.json"
fi
```
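
The same format rule can be wrapped in a small reusable helper. This is a hypothetical function (not part of the plugin) that mirrors the grep pattern above:

```shell
# Hypothetical helper: accept only standard DashScope keys.
# Rejects sk-sp- service-provider keys (chat-only, no embeddings access)
# and anything that is not "sk-" followed by 32 lowercase hex characters.
is_valid_dashscope_key() {
  case "$1" in
    sk-sp-*) return 1 ;;  # service-provider key: embeddings will 401
  esac
  echo "$1" | grep -qE '^sk-[a-f0-9]{32}$'
}

is_valid_dashscope_key "sk-0123456789abcdef0123456789abcdef" && echo "format ok"
is_valid_dashscope_key "sk-sp-0123456789abcdef0123456789abcdef" || echo "rejected sk-sp- key"
```

Note this only checks the format; Step 2's embeddings probe is still needed to confirm the key actually has `text-embedding-v3` access.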

If a key was found, verify it supports embeddings:

```bash
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" \
  -X POST https://dashscope.aliyuncs.com/compatible-mode/v1/embeddings \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $DASHSCOPE_KEY" \
  -d '{"model":"text-embedding-v3","input":["test"],"dimensions":1024}')

if [ "$HTTP_CODE" = "200" ]; then
  echo "API key verified: embeddings endpoint OK"
else
  DASHSCOPE_KEY=""
  echo "API key rejected (HTTP $HTTP_CODE) — need a different key"
fi
```

**If no usable key was found**, tell the user:

> Your current openclaw.json does not contain a DashScope API key that supports the embeddings endpoint.
>
> **Known limitation:** Keys with `sk-sp-` prefix (service-provider keys from coding.dashscope.aliyuncs.com) only work for chat completions, NOT for embeddings.
>
> Please create a standard API key at: https://dashscope.console.aliyun.com/apiKey
>
> The key must:
> - Start with `sk-` (not `sk-sp-`)
> - Have access to the `text-embedding-v3` model

Then wait for the user to provide the key:

```bash
DASHSCOPE_KEY="<user-provided-key>"
```

### Step 3 — Create database

`[AGENT]`

```bash
mysql -h "$PXD_HOST" -P "$PXD_PORT" -u "$PXD_USER" -p"$PXD_PASS" \
  -e "CREATE DATABASE IF NOT EXISTS $PXD_DB; SHOW DATABASES LIKE 'mem0%';"
```

### Step 4 — Install plugin

`[AGENT]` Try `openclaw plugins install` first. If it fails (ClawHub rate limit, network issues), fall back to the tgz method.

**Method A — Direct install (preferred):**

```bash
openclaw plugins install @polardb/polardbx-mem0
```

**Method B — Fallback via tgz (if Method A fails):**

```bash
# Download tgz from npm mirror
cd /tmp && npm pack @polardb/polardbx-mem0 --registry https://registry.npmmirror.com

# Install from tgz — openclaw will handle extraction, deps, and config registration
openclaw plugins install /tmp/polardb-polardbx-mem0-*.tgz
```

> **Important:** Always use `openclaw plugins install` (not manual extraction). It registers the plugin in `plugins.installs`, `plugins.entries`, and `plugins.slots` automatically. Manual `tar xzf` + hand-editing config will NOT be discovered by openclaw.

After successful install, you should see:
```
Installed plugin: polardbx-mem0
Restart the gateway to load plugins.
```

### Step 5 — Configure openclaw.json

`[AGENT]` `openclaw plugins install` (Step 4) already registered `plugins.slots.memory`, `plugins.entries.polardbx-mem0.enabled`, and `plugins.installs`. You only need to fill in the plugin config with PolarDB-X and DashScope settings.

```bash
jq \
  --arg host "$PXD_HOST" \
  --arg port "$PXD_PORT" \
  --arg user "$PXD_USER" \
  --arg pass "$PXD_PASS" \
  --arg db   "$PXD_DB" \
  --arg key  "$DASHSCOPE_KEY" \
'
  .plugins.entries."polardbx-mem0".config = {
    "mode": "open-source",
    "autoCapture": true,
    "autoRecall": true,
    "oss": {
      "vectorStore": {
        "provider": "polardbx",
        "config": {
          "host": $host,
          "port": ($port | tonumber),
          "user": $user,
          "password": $pass,
          "database": $db,
          "dimension": 1024
        }
      },
      "embedder": {
        "provider": "openai",
        "config": {
          "model": "text-embedding-v3",
          "apiKey": $key,
          "baseURL": "https://dashscope.aliyuncs.com/compatible-mode/v1",
          "dimensions": 1024
        }
      },
      "llm": {
        "provider": "openai",
        "config": {
          "model": "glm-5",
          "apiKey": $key,
          "baseURL": "https://dashscope.aliyuncs.com/compatible-mode/v1"
        }
      }
    }
  }
' openclaw.json > tmp.json && mv tmp.json openclaw.json
```

### Step 6 — Patch pg module ESM compatibility

> Required for Node.js 24+. The `mem0ai` package imports `pg` with named ESM exports which fail in CJS mode.

`[AGENT]`

```bash
PLUGIN_DIR=$(openclaw plugins dir 2>/dev/null || echo "$HOME/.openclaw/extensions")/polardbx-mem0
INDEX_MJS="$PLUGIN_DIR/node_modules/mem0ai/dist/oss/index.mjs"

if [ -f "$INDEX_MJS" ] && grep -q 'import { Client } from "pg"' "$INDEX_MJS"; then
  sed -i.bak 's/import { Client } from "pg"/import pg_default from "pg"; const { Client } = pg_default/' "$INDEX_MJS"
  echo "Patched pg ESM import in index.mjs"
else
  echo "No patch needed (already patched or pg import not found)"
fi
```

### Step 7 — Restart OpenClaw

`[AGENT]`

```bash
openclaw gateway restart
```

Wait 10 seconds, then check status:

```bash
sleep 10
openclaw gateway status
```

Check logs for plugin loading:

```bash
tail -30 /tmp/openclaw/openclaw-$(date +%Y-%m-%d).log 2>/dev/null | \
  grep -i -E 'polardbx-mem0|error|initialized'
```

Expected: `polardbx-mem0: initialized (mode: open-source, ...)` with no ERROR lines after it.

### Step 8 — Verify end-to-end

`[AGENT]` Send a test message:

```bash
curl -s -X POST http://127.0.0.1:18789/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"glm-5","messages":[{"role":"user","content":"Please remember: my name is Test User and I love coding in Rust."}]}'
```

Wait 60 seconds for async capture to complete, then verify data in PolarDB-X:

```bash
sleep 60
mysql -h "$PXD_HOST" -P "$PXD_PORT" -u "$PXD_USER" -p"$PXD_PASS" "$PXD_DB" \
  -e "SELECT id, payload FROM mem0_vectors;"
```

Expected: at least 1 row with extracted memory in the `payload` JSON field.
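
The fixed 60-second sleep is a worst-case bound; capture often lands sooner. A small polling helper (a sketch, not part of the plugin) can return as soon as the probe succeeds:

```shell
# Retry a probe command until it succeeds or a timeout (seconds) elapses.
# In Step 8 the probe would be the mysql row-count query, e.g.
#   wait_for 120 sh -c '[ "$(mysql ... -N -e "SELECT COUNT(*) FROM mem0_vectors")" -gt 0 ]'
# (the mysql arguments are elided here; fill in your connection flags).
wait_for() {
  timeout=$1; shift
  elapsed=0
  until "$@"; do
    if [ "$elapsed" -ge "$timeout" ]; then return 1; fi
    sleep 2
    elapsed=$((elapsed + 2))
  done
}

# Trivial demonstration with an always-true probe:
wait_for 10 true && echo "probe succeeded"
```

Polling every 2 seconds keeps load on the free instance negligible while cutting typical wait time well below the 60-second ceiling.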

If the table is empty after 60s, check logs for capture errors:

```bash
tail -50 /tmp/openclaw/openclaw-$(date +%Y-%m-%d).log 2>/dev/null | \
  grep -i -E 'capture failed|error'
```

### Step 9 — What's Next

`[AGENT]` After successful setup, send this structured handoff:

```text
Your mem0-polardbx setup is complete.

WHAT'S WORKING
- autoCapture: Memories are automatically extracted from every conversation
- autoRecall: Relevant memories are injected before each LLM call
- Storage: All vectors stored in your PolarDB-X instance

YOUR CONNECTION INFO
- PolarDB-X Host: <PXD_HOST>
- Database: mem0_openclaw
- Table: mem0_vectors (with HNSW vector index)

RECOVERY
If your PolarDB-X instance expires (free tier TTL), create a new one via Step 1
and update the host/user/password in openclaw.json. Previous memories will be lost
unless you back up the mem0_vectors table.

BACKUP
Export your memories periodically:
  mysqldump -h <HOST> -u <USER> -p<PASS> mem0_openclaw mem0_vectors > mem0_backup.sql
```

---

## Configuration Reference

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| `mode` | string | Yes | Must be `"open-source"` |
| `autoCapture` | boolean | No | Auto-store memories after each turn (default: true) |
| `autoRecall` | boolean | No | Auto-inject memories before each turn (default: true) |
| `oss.vectorStore.provider` | string | Yes | Must be `"polardbx"` |
| `oss.vectorStore.config.host` | string | Yes | PolarDB-X hostname |
| `oss.vectorStore.config.port` | number | No | Default: 3306 |
| `oss.vectorStore.config.user` | string | Yes | Database username |
| `oss.vectorStore.config.password` | string | Yes | Database password |
| `oss.vectorStore.config.database` | string | Yes | Database name |
| `oss.vectorStore.config.dimension` | number | Yes | Must match embedder output (1024 for text-embedding-v3) |
| `oss.embedder.provider` | string | Yes | `"openai"` (DashScope uses OpenAI-compatible API) |
| `oss.embedder.config.model` | string | Yes | `"text-embedding-v3"` recommended |
| `oss.embedder.config.apiKey` | string | Yes | Standard DashScope API key (`sk-` prefix) |
| `oss.embedder.config.baseURL` | string | Yes | `"https://dashscope.aliyuncs.com/compatible-mode/v1"` |
| `oss.embedder.config.dimensions` | number | Yes | Must be 1024 |
| `oss.llm.provider` | string | Yes | `"openai"` |
| `oss.llm.config.model` | string | Yes | `"glm-5"` or any DashScope chat model |
| `oss.llm.config.apiKey` | string | Yes | Same DashScope API key |
| `oss.llm.config.baseURL` | string | Yes | `"https://dashscope.aliyuncs.com/compatible-mode/v1"` |
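
Assembled, a complete `plugins.entries."polardbx-mem0".config` object matching the table above looks like this (all values are placeholders):

```json
{
  "mode": "open-source",
  "autoCapture": true,
  "autoRecall": true,
  "oss": {
    "vectorStore": {
      "provider": "polardbx",
      "config": {
        "host": "pxd.example.com",
        "port": 3306,
        "user": "mem0_user",
        "password": "********",
        "database": "mem0_openclaw",
        "dimension": 1024
      }
    },
    "embedder": {
      "provider": "openai",
      "config": {
        "model": "text-embedding-v3",
        "apiKey": "sk-<your-key>",
        "baseURL": "https://dashscope.aliyuncs.com/compatible-mode/v1",
        "dimensions": 1024
      }
    },
    "llm": {
      "provider": "openai",
      "config": {
        "model": "glm-5",
        "apiKey": "sk-<your-key>",
        "baseURL": "https://dashscope.aliyuncs.com/compatible-mode/v1"
      }
    }
  }
}
```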

## Troubleshooting

| Symptom | Fix |
| --- | --- |
| `VECTOR(1024)` SQL syntax error | Ensure plugin version >= 0.4.2 (`openclaw plugins install @polardb/polardbx-mem0@latest`) |
| `Named export 'Client' not found` from pg | Run Step 6 to patch the ESM/CJS import in mem0ai |
| `401 Incorrect API key` on embeddings | Your key is likely `sk-sp-` prefixed (service-provider key). Need standard `sk-` key from https://dashscope.console.aliyun.com/apiKey |
| `404 status code (no body)` on embeddings | baseURL is wrong. Must be `https://dashscope.aliyuncs.com/compatible-mode/v1` (not `coding.dashscope.aliyuncs.com`) |
| `Cannot determine distance type for VEC_DISTANCE, index is not found` | Vector index missing. Run: `mysql ... -e "ALTER TABLE mem0_vectors ADD VECTOR INDEX vec_idx(\`vector\`);"` |
| Plugin not loading | Check `plugins.slots.memory = "polardbx-mem0"` and `plugins.entries.polardbx-mem0.enabled = true` in openclaw.json |
| `capture failed` but no specific error | Check that both embedder and LLM configs have correct apiKey and baseURL |
| Empty `mem0_vectors` after capture | autoCapture is async; wait 60s. Also check logs for silent errors |
| PolarDB-X connection refused | Verify host/port/user/password. Free instances expire after TTL — create a new one if needed |

## Recovery

If your PolarDB-X free instance expires:

1. Create a new instance (Step 1)
2. Update `openclaw.json` with new host/user/password (Step 5)
3. Create the database again (Step 3)
4. Restart OpenClaw (Step 7)

> Previous memories will be lost when a free instance expires. For production use, consider a paid PolarDB-X instance or periodic backups via `mysqldump`.
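
A backup-and-restore round trip might look like the sketch below, using the connection variables from onboarding; the `NEW_*` values (an assumption here) come from the replacement instance created in Step 1:

```bash
# Back up all memories from the expiring instance
mysqldump -h "$PXD_HOST" -P "$PXD_PORT" -u "$PXD_USER" -p"$PXD_PASS" \
  mem0_openclaw mem0_vectors > mem0_backup.sql

# Restore into the replacement instance (after Step 3 has recreated the database)
mysql -h "$NEW_HOST" -P "$NEW_PORT" -u "$NEW_USER" -p"$NEW_PASS" \
  mem0_openclaw < mem0_backup.sql
```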

## Communication Style

When presenting onboarding or recovery instructions:

- Use plain product language, avoid backend jargon
- Prefer "PolarDB-X instance" over "MySQL endpoint"
- Prefer "DashScope API key" over "embedding service credential"
- Always remind users that `sk-sp-` keys won't work for embeddings

## Update

Do not set up automatic self-updates for this skill. Only update when the user or maintainer explicitly asks for a refresh.
