Getting Started
AxiomDB is a relational database engine written in Rust. It supports standard SQL, ACID transactions, a Write-Ahead Log for crash recovery, and a Copy-on-Write B+ Tree for lock-free concurrent reads. This guide walks you through connecting to AxiomDB, choosing a usage mode, and running your first queries.
Choosing a Usage Mode
AxiomDB operates in two distinct modes that share the exact same engine code.
Server Mode
The engine runs as a standalone daemon that speaks the MySQL wire protocol on TCP port 3306 (configurable). Any MySQL-compatible client connects without installing custom drivers.
Application (PHP / Python / Node.js)
│
│ TCP :3306 (MySQL wire protocol)
▼
axiomdb-server process
│
▼
axiomdb.db axiomdb.wal
When to use server mode:
- Web applications with REST or GraphQL APIs
- Microservices where multiple processes share a database
- Any environment where you would normally use MySQL
Embedded Mode
The engine is compiled into your process as a shared library (.so / .dylib / .dll).
There is no daemon, no network, and no port. Calls go directly to Rust code with
microsecond latency.
Your Application (Rust / C++ / Python / Electron)
│
│ direct function call (C FFI / Rust crate)
▼
AxiomDB engine (in-process)
│
▼
axiomdb.db axiomdb.wal (local files)
When to use embedded mode:
- Desktop applications (Qt, Electron, Tauri)
- CLI tools that need a local database
- Python scripts that need fast local storage without a daemon
- Any context where SQLite would be considered
Mode Comparison
| Feature | Server Mode | Embedded Mode |
|---|---|---|
| Latency | ~0.1 ms (TCP loopback) | ~1 µs (in-process) |
| Multiple processes | Yes | No (one process) |
| Installation | Binary + port | Library only |
| Compatible clients | Any MySQL client | Rust crate / C FFI |
| Ideal for | Web, APIs, microservices | Desktop, CLI, scripts |
Interactive Shell (CLI)
The axiomdb-cli binary connects directly to a database file — no server needed.
It works like sqlite3 or psql:
# Open an existing database (or create a new one)
axiomdb-cli ./mydb.db
# Pipe SQL from a file
axiomdb-cli ./mydb.db < migration.sql
# One-liner
echo "SELECT COUNT(*) FROM users;" | axiomdb-cli ./mydb.db
Inside the shell:
AxiomDB 0.1.0 — interactive shell
Type SQL ending with ; to execute. Type .help for commands.
axiomdb> CREATE TABLE users (id INT, name TEXT);
OK (1ms)
axiomdb> INSERT INTO users VALUES (1, 'Alice'), (2, 'Bob');
2 rows affected (0ms)
axiomdb> SELECT * FROM users;
+----+-------+
| id | name |
+----+-------+
| 1 | Alice |
| 2 | Bob |
+----+-------+
2 rows (0ms)
axiomdb> .tables
users
axiomdb> .schema users
Table: users
id INT NOT NULL
name TEXT nullable
axiomdb> .quit
Bye.
Dot commands: .help · .tables · .schema [table] · .open <path> · .quit
Keyboard shortcuts (interactive mode): ↑ / ↓ history · Tab SQL completion · Ctrl-R reverse search · Ctrl-C cancel line · Ctrl-D exit. History is saved to ~/.axiomdb_history between sessions.
Server Mode — Connecting
Starting the Server
# Default: stores data in ./data, listens on port 3306
axiomdb-server
# Legacy env vars
AXIOMDB_DATA=/var/lib/axiomdb AXIOMDB_PORT=3307 axiomdb-server
# DSN bootstrap (Phase 5.15)
AXIOMDB_URL='axiomdb://0.0.0.0:3307/axiomdb?data_dir=/var/lib/axiomdb' axiomdb-server
The server is ready when you see:
INFO axiomdb_server: listening on 0.0.0.0:3306
AXIOMDB_URL is normalized by shared core code first; the server then accepts only the DSN fields it actually supports in Phase 5.15, rather than silently inventing meanings for extra options.
In Phase 5.15, AXIOMDB_URL supports axiomdb://, mysql://,
postgres://, and postgresql:// URI syntax. The alias schemes are parse
aliases only: axiomdb-server still speaks the MySQL wire protocol only.
Supported server DSN fields:
- host and port from the URI authority
- data_dir from the query string
Unsupported query params are rejected explicitly instead of being ignored.
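Because the accepted fields map directly onto standard URI components, you can sanity-check an AXIOMDB_URL before exporting it with Python's urllib.parse. This is an illustrative sketch only; the real parsing happens in AxiomDB's shared core:

```python
from urllib.parse import urlsplit, parse_qs

def split_dsn(dsn: str) -> dict:
    """Extract the only fields the Phase 5.15 server reads from AXIOMDB_URL."""
    u = urlsplit(dsn)
    params = parse_qs(u.query)
    unknown = set(params) - {"data_dir"}
    if unknown:
        # The real server rejects unknown query params explicitly
        # instead of ignoring them; mirror that behavior here.
        raise ValueError(f"unsupported query params: {sorted(unknown)}")
    return {
        "host": u.hostname,
        "port": u.port,
        "data_dir": params.get("data_dir", [None])[0],
    }

print(split_dsn("axiomdb://0.0.0.0:3307/axiomdb?data_dir=/var/lib/axiomdb"))
```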
Connecting with the mysql CLI
mysql -h 127.0.0.1 -P 3306 -u root
No password is required in Phase 5. Any username from the allowlist (root, axiomdb,
admin) is accepted. See the Authentication section below for details.
Connecting with Python (PyMySQL)
import pymysql
conn = pymysql.connect(
host='127.0.0.1',
port=3306,
user='root',
db='axiomdb',
charset='utf8mb4',
)
with conn.cursor() as cursor:
# CREATE TABLE with AUTO_INCREMENT
cursor.execute("""
CREATE TABLE users (
id BIGINT PRIMARY KEY AUTO_INCREMENT,
name TEXT NOT NULL,
email TEXT NOT NULL
)
""")
# INSERT — last_insert_id is returned in the OK packet
cursor.execute("INSERT INTO users (name, email) VALUES ('Alice', 'alice@example.com')")
print("inserted id:", cursor.lastrowid)
# SELECT
cursor.execute("SELECT id, name FROM users")
for row in cursor.fetchall():
print(row)
conn.close()
When issuing many consecutive INSERT statements, wrap them in
an explicit BEGIN ... COMMIT. Phase 5.21 stages consecutive
INSERT ... VALUES statements in one transaction and flushes them together,
which is much faster than committing each row independently.
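The BEGIN ... COMMIT batching pattern can be wrapped in a small helper. The helper name and structure below are mine, not part of AxiomDB, and the example runs against a stub cursor so it works without a server (real code should use parameterized queries rather than string interpolation):

```python
def insert_batch(cursor, table, columns, rows):
    """Stage many single-row INSERTs inside one explicit transaction."""
    cols = ", ".join(columns)
    cursor.execute("BEGIN")
    try:
        for row in rows:
            # repr() is used only so this sketch is self-contained;
            # with a real driver, pass parameters via execute(sql, params).
            vals = ", ".join(repr(v) for v in row)
            cursor.execute(f"INSERT INTO {table} ({cols}) VALUES ({vals})")
        cursor.execute("COMMIT")
    except Exception:
        cursor.execute("ROLLBACK")
        raise

# Stub cursor that records statements, so the sketch is runnable standalone.
class RecordingCursor:
    def __init__(self):
        self.statements = []
    def execute(self, sql):
        self.statements.append(sql)

cur = RecordingCursor()
insert_batch(cur, "users", ["id", "name"], [(1, "Alice"), (2, "Bob")])
print(cur.statements[0], "...", cur.statements[-1])
```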
Parameterized Queries and ORMs (Prepared Statements)
When you pass parameters to cursor.execute(), PyMySQL (and any MySQL-compatible
driver) automatically uses COM_STMT_PREPARE / COM_STMT_EXECUTE — the MySQL
binary prepared statement protocol. AxiomDB supports this natively from Phase 5.10.
import pymysql
conn = pymysql.connect(host='127.0.0.1', port=3306, user='root', db='axiomdb')
with conn.cursor() as cursor:
cursor.execute("""
CREATE TABLE products (
id BIGINT PRIMARY KEY AUTO_INCREMENT,
name TEXT NOT NULL,
price DOUBLE NOT NULL,
active BOOL NOT NULL DEFAULT TRUE
)
""")
conn.commit()
# Parameterized INSERT — uses COM_STMT_PREPARE/EXECUTE automatically
cursor.execute(
"INSERT INTO products (name, price, active) VALUES (%s, %s, %s)",
('Wireless Keyboard', 49.99, True),
)
# NULL parameters work transparently
cursor.execute(
"INSERT INTO products (name, price, active) VALUES (%s, %s, %s)",
('USB-C Hub', 29.99, None),
)
# Parameterized SELECT
cursor.execute("SELECT id, name, price FROM products WHERE price < %s", (50.0,))
for row in cursor.fetchall():
print(row)
# Boolean column comparison works with integer literals (MySQL-compatible)
cursor.execute("SELECT name FROM products WHERE active = %s", (1,))
for row in cursor.fetchall():
print(row)
conn.close()
ORMs such as SQLAlchemy use parameterized queries for all data-bearing operations. Connecting through the MySQL dialect works without any additional configuration:
from sqlalchemy import create_engine, text
engine = create_engine("mysql+pymysql://root@127.0.0.1:3306/axiomdb")
with engine.connect() as conn:
result = conn.execute(
text("SELECT id, name FROM products WHERE price < :max_price"),
{"max_price": 40.0},
)
for row in result:
print(row)
Under the hood, cursor.execute(sql, params) sends a COM_STMT_PREPARE
to parse the SQL and register a statement ID, followed by COM_STMT_EXECUTE
with the binary-encoded parameters. The statement is cached per connection in AxiomDB
and released with COM_STMT_CLOSE when the cursor closes. This matches the
behavior expected by PyMySQL, mysqlclient, and SQLAlchemy's MySQL dialect.
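For the curious, the NULL parameters mentioned above travel in the COM_STMT_EXECUTE packet's NULL bitmap: one bit per parameter, LSB-first, packed into (n + 7) // 8 bytes. A minimal sketch of how a driver builds it:

```python
def null_bitmap(params) -> bytes:
    """COM_STMT_EXECUTE NULL bitmap: one bit per parameter, LSB-first,
    (n + 7) // 8 bytes total; bit i is set when parameter i is NULL."""
    n = len(params)
    bitmap = bytearray((n + 7) // 8)
    for i, value in enumerate(params):
        if value is None:
            bitmap[i // 8] |= 1 << (i % 8)
    return bytes(bitmap)

# ('USB-C Hub', 29.99, None) from the example above: only parameter 2 is NULL,
# so the single bitmap byte is 0b00000100.
print(null_bitmap(("USB-C Hub", 29.99, None)).hex())  # → 04
```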
Connecting with PHP (PDO)
<?php
$pdo = new PDO(
'mysql:host=127.0.0.1;port=3306;dbname=axiomdb',
'root',
'',
[PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]
);
$stmt = $pdo->query('SELECT id, name FROM users LIMIT 5');
foreach ($stmt as $row) {
echo $row['id'] . ': ' . $row['name'] . "\n";
}
Connecting with any MySQL GUI
Point MySQL Workbench, DBeaver, or TablePlus to 127.0.0.1:3306. No driver
installation is required — the MySQL wire protocol is fully compatible.
Charset and collation
AxiomDB negotiates charset and collation at the MySQL handshake boundary. The client
sends its preferred collation id in the HandshakeResponse41 packet; the server reads
it and configures the session accordingly.
Supported charsets:
| Charset | Collation ids | Notes |
|---|---|---|
| utf8mb4 | 45 (general_ci), 46 (bin), 255 (0900_ai_ci) | Default for new connections |
| utf8 / utf8mb3 | 33 (general_ci), 83 (bin) | BMP-only; 4-byte code points (emoji) rejected |
| latin1 | 8 (swedish_ci), 47 (bin) | MySQL latin1 = Windows-1252 (0x80 = ‘€’, not ISO-8859-1) |
| binary | 63 | Raw bytes, no transcoding |
You can change the session charset at any time:
SET NAMES utf8mb4; -- sets client + connection + results
SET NAMES latin1 COLLATE latin1_bin; -- with explicit collation
SET character_set_results = utf8mb4; -- results charset only
Recommendation: set charset='utf8mb4' in your client connection string. The AxiomDB engine
stores everything as UTF-8; utf8mb4 requires zero transcoding overhead and supports
the full Unicode range including emoji. Latin1 connections are supported for legacy
PHP/MySQL applications.
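The latin1-is-really-Windows-1252 distinction from the table above is easy to verify with Python's codecs (Python names the Windows-1252 encoding cp1252):

```python
# MySQL "latin1" is Windows-1252: byte 0x80 decodes to the euro sign,
# whereas strict ISO-8859-1 maps 0x80 to an invisible C1 control character.
print(b"\x80".decode("cp1252"))           # €
print(repr(b"\x80".decode("latin-1")))    # '\x80' (control char, not €)
print("€".encode("utf-8").hex())          # e282ac: 3 bytes in UTF-8
```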
Authentication
AxiomDB Phase 5 uses permissive authentication: the server accepts any password
for usernames in the allowlist (root, axiomdb, admin, and the empty string).
Both of the most common MySQL authentication plugins are supported with no client-side
configuration required:
| Plugin | Clients | Notes |
|---|---|---|
| mysql_native_password | MySQL 5.x clients, older PyMySQL, mysql2 < 0.5 | 3-packet handshake (greeting → response → OK) |
| caching_sha2_password | MySQL 8.0+ default, PyMySQL >= 1.0, MySQL Connector/Python | 5-packet handshake (greeting → response → fast_auth_success → ack → OK) |
If your client connects with MySQL 8.0+ defaults and you see silent connection drops,
your client is using caching_sha2_password — AxiomDB handles this automatically.
No --default-auth flag or authPlugin option is needed.
Full password enforcement with stored credentials is planned for Phase 13 (Security).
Many MySQL clients and ORMs issue session-setup statements on connect (SET NAMES, SELECT @@version, SHOW DATABASES, etc.).
AxiomDB intercepts and stubs these automatically — no configuration needed.
Monitoring with SHOW STATUS
Monitoring tools, proxy servers, and health checks can query live server counters
using the standard MySQL SHOW STATUS syntax:
SHOW STATUS
SHOW GLOBAL STATUS
SHOW SESSION STATUS
SHOW STATUS LIKE 'Threads%'
SHOW GLOBAL STATUS LIKE 'Com_%'
Available variables:
| Variable | Scope | Description |
|---|---|---|
| Uptime | Global | Seconds since server start |
| Threads_connected | Global | Currently authenticated connections |
| Threads_running | Global | Connections actively executing a command |
| Questions | Session + Global | Total statements executed |
| Bytes_received | Session + Global | Bytes received from clients |
| Bytes_sent | Session + Global | Bytes sent to clients |
| Com_select | Session + Global | SELECT statement count |
| Com_insert | Session + Global | INSERT statement count |
| Innodb_buffer_pool_read_requests | Global | Storage read requests (compatibility) |
| Innodb_buffer_pool_reads | Global | Physical page reads (compatibility) |
Session scope (SHOW STATUS, SHOW SESSION STATUS, SHOW LOCAL STATUS) returns
per-connection values. Global scope (SHOW GLOBAL STATUS) returns server-wide totals.
Session counters reset when a connection is closed or COM_RESET_CONNECTION is issued.
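A health check might fold these name/value rows into a dict. A sketch, shown against hard-coded sample rows so it runs standalone; with a live server you would fetch the rows from SHOW GLOBAL STATUS instead:

```python
def status_dict(rows):
    """Turn (name, value) SHOW STATUS rows into a dict, int-coercing where possible."""
    out = {}
    for name, value in rows:
        try:
            out[name] = int(value)
        except ValueError:
            out[name] = value  # non-numeric values pass through as strings
    return out

# Sample rows as a MySQL client would return them (values arrive as strings).
sample = [("Uptime", "3600"), ("Threads_connected", "4"), ("Com_select", "1024")]
status = status_dict(sample)
print(status["Uptime"] // 60, "minutes up,", status["Threads_connected"], "clients")
```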
Connection Timeout Variables
AxiomDB exposes the same timeout variables that MySQL clients expect at the session level:
SET wait_timeout = 30;
SET interactive_timeout = 300;
SET net_read_timeout = 60;
SET net_write_timeout = 60;
SELECT @@wait_timeout;
SELECT @@interactive_timeout;
SELECT @@net_read_timeout;
SELECT @@net_write_timeout;
Rules:
- wait_timeout applies while a non-interactive connection is idle between commands.
- interactive_timeout applies instead when the client connected with CLIENT_INTERACTIVE.
- net_write_timeout bounds packet writes once a command is already executing.
- net_read_timeout is reserved for future in-flight protocol reads and is already validated and stored as a real session variable.
- COM_RESET_CONNECTION resets all four variables back to their defaults.
Trying to set one of these variables to 0 or to a non-integer value returns an
error:
SET wait_timeout = 0;
-- ERROR ... wait_timeout must be a positive integer, got '0'
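Client code can mirror the same positive-integer rule before issuing SET, so a bad value fails fast locally instead of round-tripping to the server. A sketch; the helper name is hypothetical, not part of any AxiomDB client library:

```python
TIMEOUT_VARS = {"wait_timeout", "interactive_timeout",
                "net_read_timeout", "net_write_timeout"}

def set_timeout_sql(name: str, value) -> str:
    """Build a SET statement, enforcing the positive-integer rule up front."""
    if name not in TIMEOUT_VARS:
        raise ValueError(f"unknown timeout variable: {name}")
    # bool is excluded explicitly because bool is a subclass of int in Python.
    if not isinstance(value, int) or isinstance(value, bool) or value <= 0:
        raise ValueError(f"{name} must be a positive integer, got {value!r}")
    return f"SET {name} = {value}"

print(set_timeout_sql("wait_timeout", 30))  # → SET wait_timeout = 30
```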
Embedded Mode — Rust API
Add AxiomDB to your Cargo.toml:
[dependencies]
axiomdb-embedded = { path = "../axiomdb/crates/axiomdb-embedded" }
Open a Database
use axiomdb_embedded::Db;
fn main() -> Result<(), Box<dyn std::error::Error>> {
let mut db = Db::open("./axiomdb.db")?;
let mut db2 = Db::open_dsn("file:/tmp/axiomdb.db")?;
let mut db3 = Db::open_dsn("axiomdb:///tmp/axiomdb")?;
db.execute("CREATE TABLE users (id INT, name TEXT, age INT)")?;
db.execute("INSERT INTO users VALUES (1, 'Alice', 30)")?;
db.execute("INSERT INTO users VALUES (2, 'Bob', 25)")?;
let (columns, rows) = db.query_with_columns(
"SELECT id, name, age FROM users WHERE age > 20 ORDER BY name"
)?;
println!("{columns:?}");
for row in rows {
println!("{row:?}");
}
Ok(())
}
Db::open_dsn(...) accepts only local DSNs in Phase 5.15. Remote
wire-endpoint DSNs such as postgres://... parse successfully in the shared
parser but are rejected by the embedded API.
Explicit Transactions
#![allow(unused)]
fn main() {
let mut db = axiomdb_embedded::Db::open("./axiomdb.db")?;
db.begin()?;
db.execute("INSERT INTO accounts VALUES (1, 'Alice', 1000.0)")?;
db.execute("INSERT INTO accounts VALUES (2, 'Bob', 500.0)")?;
db.commit()?;
}
Embedded Mode — C FFI
For C, C++, Qt, or Java (JNI):
#include "axiomdb.h"
int main(void) {
AxiomDb* db = axiomdb_open("./axiomdb.db");
AxiomDb* db2 = axiomdb_open_dsn("file:/tmp/axiomdb.db");
if (!db) { fprintf(stderr, "failed to open\n"); return 1; }
axiomdb_execute(db, "CREATE TABLE users (id INT, name TEXT)");
axiomdb_execute(db, "INSERT INTO users VALUES (1, 'Alice')");
axiomdb_close(db);
axiomdb_close(db2);
return 0;
}
Python via ctypes
import ctypes
lib = ctypes.CDLL("./libaxiomdb.dylib")
# Declare argument and return types so pointers survive the 64-bit FFI boundary.
lib.axiomdb_open.argtypes = [ctypes.c_char_p]
lib.axiomdb_open.restype = ctypes.c_void_p
lib.axiomdb_open_dsn.argtypes = [ctypes.c_char_p]
lib.axiomdb_open_dsn.restype = ctypes.c_void_p
lib.axiomdb_execute.argtypes = [ctypes.c_void_p, ctypes.c_char_p]
lib.axiomdb_execute.restype = ctypes.c_longlong
lib.axiomdb_close.argtypes = [ctypes.c_void_p]
db = lib.axiomdb_open(b"./axiomdb.db")
db2 = lib.axiomdb_open_dsn(b"file:/tmp/axiomdb.db")
lib.axiomdb_execute(db, b"CREATE TABLE t (id INT)")
lib.axiomdb_close(db)
lib.axiomdb_close(db2)
Your First Schema — End to End
The following example creates a minimal e-commerce schema, inserts sample data, and runs a join query — all within embedded mode.
-- Create tables
CREATE TABLE products (
id BIGINT PRIMARY KEY AUTO_INCREMENT,
name TEXT NOT NULL,
price DECIMAL NOT NULL,
stock INT NOT NULL DEFAULT 0
);
CREATE TABLE orders (
id BIGINT PRIMARY KEY AUTO_INCREMENT,
product_id BIGINT NOT NULL REFERENCES products(id) ON DELETE RESTRICT,
quantity INT NOT NULL,
placed_at TIMESTAMP NOT NULL
);
CREATE INDEX idx_orders_product ON orders (product_id);
-- Insert data
INSERT INTO products (name, price, stock) VALUES
('Wireless Keyboard', 49.99, 200),
('USB-C Hub', 29.99, 500),
('Mechanical Mouse', 39.99, 150);
INSERT INTO orders (product_id, quantity, placed_at) VALUES
(1, 2, '2026-03-01 10:00:00'),
(2, 1, '2026-03-02 14:30:00'),
(1, 1, '2026-03-03 09:15:00');
-- Query with JOIN
SELECT
p.name,
o.quantity,
p.price * o.quantity AS line_total,
o.placed_at
FROM orders o
JOIN products p ON p.id = o.product_id
ORDER BY o.placed_at;
Expected output:
| name | quantity | line_total | placed_at |
|---|---|---|---|
| Wireless Keyboard | 2 | 99.98 | 2026-03-01 10:00:00 |
| USB-C Hub | 1 | 29.99 | 2026-03-02 14:30:00 |
| Wireless Keyboard | 1 | 49.99 | 2026-03-03 09:15:00 |
Bulk Insert — Best Practices
The way you issue INSERT statements has a large impact on throughput. AxiomDB is optimized for the multi-row VALUES form — one SQL string with all N rows:
-- Fast: one SQL string, all rows in one VALUES clause (~211K rows/s for 10K rows)
INSERT INTO products (name, price, stock) VALUES
('Widget A', 9.99, 100),
('Widget B', 14.99, 50),
('Widget C', 4.99, 200);
# Python — build one multi-row string, one execute() call
rows = [(f"product_{i}", i * 1.5, i * 10) for i in range(10_000)]
placeholders = ", ".join("(%s, %s, %s)" for _ in rows)
flat_values = [v for row in rows for v in row]
cursor.execute(f"INSERT INTO products (name, price, stock) VALUES {placeholders}",
flat_values)
conn.commit()
Why this matters: issuing N separate INSERT statements each pays its own parse + analyze overhead (~20 µs per string). A single multi-row string pays that cost once for all rows.
| Approach | Throughput |
|---|---|
| Multi-row VALUES (1 string, N rows) | 211K rows/s — recommended |
| N separate INSERT strings (1 txn) | ~35K rows/s — 6× slower |
| N separate autocommit INSERTs | ~58 q/s — 1 fsync per row |
For very large imports, split the load into batches of multi-row INSERTs, each
wrapped in its own BEGIN … COMMIT block. This limits WAL growth per transaction
while keeping throughput high. See Transactions for Group Commit configuration,
which further improves concurrent write throughput.
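A chunking helper keeps each transaction's VALUES list bounded while still paying the parse cost only once per chunk. The helper below is illustrative (name and chunk size are mine); execute each (sql, params) pair with cursor.execute(sql, params), one BEGIN … COMMIT per chunk:

```python
def multirow_inserts(table, columns, rows, chunk_size=1000):
    """Yield one multi-row parameterized INSERT per chunk of rows."""
    cols = ", ".join(columns)
    row_tpl = "(" + ", ".join(["%s"] * len(columns)) + ")"
    for start in range(0, len(rows), chunk_size):
        chunk = rows[start:start + chunk_size]
        sql = (f"INSERT INTO {table} ({cols}) VALUES "
               + ", ".join([row_tpl] * len(chunk)))
        # Flatten the chunk's rows into one parameter list for execute().
        params = [v for row in chunk for v in row]
        yield sql, params

rows = [(f"product_{i}", i * 1.5, i * 10) for i in range(2500)]
stmts = list(multirow_inserts("products", ["name", "price", "stock"], rows))
print(len(stmts))  # 3 chunks: 1000 + 1000 + 500 rows
```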
Next Steps
- SQL Reference — Data Types — full type system
- SQL Reference — DDL — CREATE TABLE, indexes, constraints
- SQL Reference — DML — SELECT, INSERT, UPDATE, DELETE
- Transactions — BEGIN, COMMIT, ROLLBACK, MVCC
- Performance — benchmark numbers and tuning tips