Database Support

Comprehensive guide to database-specific features, configurations, and best practices for SQLite, MySQL, PostgreSQL, and MongoDB.

Overview

Akron ORM provides unified access to multiple database systems while respecting their unique characteristics. This guide covers database-specific features, configuration options, and optimization strategies for each supported backend.

  • 🗄️ SQLite — file-based
  • 🐬 MySQL — relational
  • 🐘 PostgreSQL — advanced SQL
  • 🍃 MongoDB — document store
🗄️ SQLite

SQLite is perfect for development, testing, and small to medium applications. It requires no separate server process and stores data in a single file.

Connection Options

SQLite Connection Examples
```python
from akron import Akron

# File-based database (recommended for production)
db = Akron("sqlite:///./myapp.db")      # Relative path
db = Akron("sqlite:////tmp/app.db")     # Absolute path
db = Akron("sqlite:///C:/data/app.db")  # Windows path

# In-memory database (perfect for testing)
db = Akron("sqlite:///:memory:")

# Context manager for automatic cleanup
with Akron("sqlite:///app.db") as db:
    # Database operations here
    pass  # Connection closed automatically
```

Features & Limitations

✅ Strengths

  • Zero configuration required
  • Single file storage
  • ACID transactions
  • Cross-platform compatibility
  • Excellent for development and testing
  • Built into the Python standard library

⚠️ Considerations

  • Limited concurrent write access
  • No user management or authentication
  • File locking can be an issue
  • Less suitable for high-traffic applications
  • No network access (local only)
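
In practice, the concurrency caveats above usually surface as `sqlite3.OperationalError: database is locked` when two writers collide. A minimal retry wrapper, using only the standard library (the `with_retry` helper is illustrative, not part of Akron):

```python
import sqlite3
import time

def with_retry(fn, retries=5, delay=0.1):
    """Retry fn when SQLite reports a locked database."""
    for attempt in range(retries):
        try:
            return fn()
        except sqlite3.OperationalError as e:
            if "locked" not in str(e) or attempt == retries - 1:
                raise
            time.sleep(delay * (attempt + 1))  # linear backoff between attempts

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT)")

with_retry(lambda: conn.execute("INSERT INTO users VALUES ('a@b.c')"))
rows = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(rows)  # → 1
```

The same pattern wraps any write call; tune `retries` and `delay` to your write contention.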

SQLite-Specific Examples

SQLite Best Practices
```python
# Enable WAL mode for better concurrency (optional)
import sqlite3

# Manual SQLite optimizations (if needed)
def optimize_sqlite_connection(db_path):
    conn = sqlite3.connect(db_path)
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute("PRAGMA synchronous=NORMAL")
    conn.execute("PRAGMA cache_size=1000")
    conn.execute("PRAGMA temp_store=MEMORY")
    conn.close()

# Akron usage remains the same
db = Akron("sqlite:///optimized.db")

# Create indexes for better performance
db.raw_sql("CREATE INDEX IF NOT EXISTS idx_user_email ON users(email)")
db.raw_sql("CREATE INDEX IF NOT EXISTS idx_post_user_id ON posts(user_id)")

# Batch inserts are efficient in SQLite
users_data = [
    {"username": f"user{i}", "email": f"user{i}@example.com"}
    for i in range(1000)
]

for user_data in users_data:
    db.insert("users", user_data)
```
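
For very large loads, wrapping the inserts in a single transaction avoids one disk sync per row. A stdlib-only sketch of the same batch using plain `sqlite3` (independent of Akron):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, email TEXT)")

users_data = [(f"user{i}", f"user{i}@example.com") for i in range(1000)]

# One transaction + executemany: far fewer syncs than committing row by row
with conn:  # commits on success, rolls back on error
    conn.executemany("INSERT INTO users (username, email) VALUES (?, ?)", users_data)

count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(count)  # → 1000
```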

🐬 MySQL

MySQL is a popular relational database perfect for web applications and medium to large-scale projects requiring robust data integrity and concurrent access.

Connection & Setup

MySQL Configuration
```python
# Install MySQL connector
# pip install mysql-connector-python

from akron import Akron

# Basic connection
db = Akron("mysql://username:password@localhost:3306/database_name")

# Connection with options
db = Akron("mysql://user:pass@host:3306/db?charset=utf8mb4&autocommit=true")

# SSL connection (recommended in production)
db = Akron("mysql://user:pass@host:3306/db?ssl_disabled=false")

# Create the database first (using the MySQL CLI or an admin tool):
# CREATE DATABASE myapp CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;

# Example connection with error handling
try:
    db = Akron("mysql://user:password@localhost:3306/myapp")
    print("Connected to MySQL successfully")

    # Test the connection
    db.raw_sql("SELECT VERSION() as version")

except Exception as e:
    print(f"MySQL connection failed: {e}")
    # Handle connection errors
```

MySQL-Specific Features

MySQL Optimizations
```python
# Use AUTO_INCREMENT for primary keys
db.create_table("users", {
    "id": "int",          # Becomes AUTO_INCREMENT PRIMARY KEY
    "username": "str",
    "email": "str",
    "created_at": "str"
})

# MySQL supports larger VARCHAR sizes
db.create_table("articles", {
    "id": "int",
    "title": "str",       # VARCHAR(255) by default
    "content": "str",     # Can store large text
    "author_id": "int->users.id"
})

# Leverage MySQL's JSON support (MySQL 5.7+)
db.create_table("user_preferences", {
    "id": "int",
    "user_id": "int->users.id",
    "preferences": "str"  # Store JSON as a string
})

# Insert JSON data
db.insert("user_preferences", {
    "user_id": 1,
    "preferences": '{"theme": "dark", "language": "en", "notifications": true}'
})

# MySQL-specific indexing
db.raw_sql("CREATE INDEX idx_email ON users(email)")
db.raw_sql("CREATE INDEX idx_username ON users(username)")
db.raw_sql("CREATE INDEX idx_created_at ON users(created_at)")

# Use EXPLAIN to optimize queries
results = db.raw_sql("EXPLAIN SELECT * FROM users WHERE email = 'test@example.com'")
print("Query execution plan:", results)
```
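
Because the preferences are stored as a JSON string, Python's `json` module handles the round-trip, and MySQL 5.7+ can also reach into the value server-side with `JSON_EXTRACT`. A sketch (the SQL is standard MySQL; the commented `raw_sql` call assumes the Akron API shown above):

```python
import json

# Round-trip the preferences payload in Python
prefs = {"theme": "dark", "language": "en", "notifications": True}
stored = json.dumps(prefs)   # what goes into the column
loaded = json.loads(stored)  # what you get back after a SELECT

print(loaded["theme"])  # → dark

# Server-side, MySQL 5.7+ can query the JSON string directly:
# db.raw_sql(
#     "SELECT JSON_EXTRACT(preferences, '$.theme') AS theme "
#     "FROM user_preferences WHERE user_id = 1"
# )
```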

✅ MySQL Strengths

  • Excellent performance for read-heavy workloads
  • Strong community and ecosystem
  • Good replication and clustering options
  • ACID compliance
  • JSON support (5.7+)
  • Mature and stable

⚠️ Considerations

  • Requires separate server setup
  • Case sensitivity can vary by platform
  • Some advanced SQL features missing
  • License considerations for commercial use
  • Memory usage can be high

🐘 PostgreSQL

PostgreSQL is an advanced relational database with extensive SQL compliance, custom data types, and powerful features for complex applications.

Connection & Setup

PostgreSQL Configuration
```python
# Install PostgreSQL adapter
# pip install psycopg2-binary

from akron import Akron

# Standard connection
db = Akron("postgres://username:password@localhost:5432/database_name")

# Connection requiring SSL (recommended in production)
db = Akron("postgres://user:pass@host:5432/db?sslmode=require")

# Create the database first (using psql or an admin tool):
# CREATE DATABASE myapp WITH ENCODING 'UTF8';

# Advanced connection example
import os

DB_CONFIG = {
    "host": os.getenv("DB_HOST", "localhost"),
    "port": os.getenv("DB_PORT", "5432"),
    "user": os.getenv("DB_USER", "postgres"),
    "password": os.getenv("DB_PASSWORD"),
    "database": os.getenv("DB_NAME", "myapp")
}

connection_url = (
    f"postgres://{DB_CONFIG['user']}:{DB_CONFIG['password']}"
    f"@{DB_CONFIG['host']}:{DB_CONFIG['port']}/{DB_CONFIG['database']}"
)

try:
    db = Akron(connection_url)
    print("Connected to PostgreSQL successfully")
except Exception as e:
    print(f"PostgreSQL connection failed: {e}")
```
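
One caveat with building the URL by string interpolation: a password containing `@`, `:`, or `/` will corrupt it. Percent-encoding the credentials first avoids this (stdlib only):

```python
from urllib.parse import quote

user = "postgres"
password = "p@ss:w/rd"  # would break a naive f-string URL

# quote with safe='' encodes every reserved character in the credential
connection_url = (
    f"postgres://{quote(user, safe='')}:{quote(password, safe='')}"
    f"@localhost:5432/myapp"
)
print(connection_url)  # → postgres://postgres:p%40ss%3Aw%2Frd@localhost:5432/myapp
```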

PostgreSQL Advanced Features

PostgreSQL Capabilities
```python
import json

# PostgreSQL supports advanced data types and features
db.create_table("users", {
    "id": "int",         # SERIAL PRIMARY KEY in PostgreSQL
    "username": "str",   # VARCHAR
    "email": "str",
    "metadata": "str",   # Can store JSON
    "created_at": "str"  # TIMESTAMP
})

# Leverage PostgreSQL's JSON/JSONB support
db.create_table("user_profiles", {
    "id": "int",
    "user_id": "int->users.id",
    "profile_data": "str"  # Store as a JSON string
})

# Insert complex JSON data
profile_data = {
    "personal": {
        "firstName": "John",
        "lastName": "Doe",
        "age": 30
    },
    "preferences": {
        "theme": "dark",
        "language": "en",
        "notifications": {
            "email": True,
            "push": False
        }
    }
}

db.insert("user_profiles", {
    "user_id": 1,
    "profile_data": json.dumps(profile_data)
})

# PostgreSQL indexing strategies
db.raw_sql("CREATE INDEX idx_users_email ON users(email)")
db.raw_sql("CREATE INDEX idx_users_username ON users(username)")

# Use PostgreSQL's full-text search capabilities
db.raw_sql("CREATE INDEX idx_posts_content_fts ON posts USING gin(to_tsvector('english', content))")

# Advanced PostgreSQL queries
search_results = db.raw_sql("""
    SELECT id, title, content
    FROM posts
    WHERE to_tsvector('english', content) @@ plainto_tsquery('database optimization')
""")

# Use PostgreSQL's EXPLAIN ANALYZE for performance tuning
execution_plan = db.raw_sql("EXPLAIN ANALYZE SELECT * FROM users WHERE email = 'test@example.com'")
```

✅ PostgreSQL Strengths

  • Advanced SQL compliance
  • Excellent JSON/JSONB support
  • Custom data types and functions
  • Full-text search capabilities
  • Strong consistency and ACID compliance
  • Extensible with plugins
  • Open source with active development

⚠️ Considerations

  • More complex setup than SQLite
  • Higher memory usage
  • Slower for simple read operations than MySQL
  • More configuration options to optimize
  • Case-sensitive by default

🍃 MongoDB

MongoDB is a document-based NoSQL database perfect for applications requiring flexible schemas, horizontal scaling, and rapid development cycles.

Connection & Setup

MongoDB Configuration
```python
# Install MongoDB driver
# pip install pymongo

from akron import Akron
import json

# Basic MongoDB connection
db = Akron("mongodb://localhost:27017/myapp")

# MongoDB with authentication
db = Akron("mongodb://username:password@localhost:27017/myapp")

# MongoDB Atlas (cloud) connection
db = Akron("mongodb+srv://user:pass@cluster.mongodb.net/myapp")

# Connection with options
db = Akron("mongodb://localhost:27017/myapp?authSource=admin&ssl=true")

# MongoDB connection with error handling
try:
    db = Akron("mongodb://localhost:27017/myapp")

    # Test the connection
    db.raw_query("users", {})  # Simple find operation
    print("Connected to MongoDB successfully")

except Exception as e:
    print(f"MongoDB connection failed: {e}")
    print("Make sure MongoDB is running on localhost:27017")
```

MongoDB Document Operations

Working with Documents
```python
# MongoDB is schemaless, but Akron provides structure
db.create_table("users", {
    "id": "int",
    "username": "str",
    "email": "str",
    "profile": "str"  # JSON document as string
})

# Insert a document with nested structure
user_document = {
    "username": "alice",
    "email": "alice@example.com",
    "profile": json.dumps({
        "personal": {
            "firstName": "Alice",
            "lastName": "Smith",
            "age": 28,
            "location": {
                "city": "San Francisco",
                "country": "USA"
            }
        },
        "preferences": {
            "theme": "dark",
            "notifications": True,
            "tags": ["python", "database", "web"]
        },
        "social": {
            "twitter": "@alice_dev",
            "github": "alice-smith"
        }
    })
}

user_id = db.insert("users", user_document)

# MongoDB excels at flexible document structures
db.create_table("posts", {
    "id": "int",
    "author_id": "int",
    "title": "str",
    "content": "str",
    "metadata": "str"  # Store rich metadata as JSON
})

# Insert a post with rich metadata
post_data = {
    "author_id": user_id,
    "title": "Getting Started with MongoDB",
    "content": "MongoDB is a powerful document database...",
    "metadata": json.dumps({
        "tags": ["mongodb", "database", "nosql"],
        "readTime": 5,
        "difficulty": "beginner",
        "lastModified": "2024-01-15T10:30:00Z",
        "views": 0,
        "likes": 0,
        "comments": []
    })
}

db.insert("posts", post_data)

# Query documents (Akron translates to MongoDB queries)
all_posts = db.find("posts")  # Gets all posts
alice_users = db.find("users", {"username": "alice"})

# MongoDB aggregation through raw queries
aggregation_pipeline = [
    {"$match": {"author_id": user_id}},
    {"$group": {"_id": "$author_id", "post_count": {"$sum": 1}}}
]

author_stats = db.raw_aggregation("posts", aggregation_pipeline)
```
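
After a find, the nested fields still live inside the JSON string, so they need one `json.loads` call before use. A sketch using a sample document shaped like the one above (the document literal here is illustrative, standing in for a `db.find` result):

```python
import json

# A document as it might come back from db.find("users", ...)
doc = {
    "username": "alice",
    "profile": json.dumps({
        "personal": {"firstName": "Alice", "location": {"city": "San Francisco"}},
        "preferences": {"tags": ["python", "database", "web"]},
    }),
}

profile = json.loads(doc["profile"])  # decode once, then navigate normally
city = profile["personal"]["location"]["city"]
tags = profile["preferences"]["tags"]

print(city)  # → San Francisco
print(tags)  # → ['python', 'database', 'web']
```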

✅ MongoDB Strengths

  • Flexible, schema-less documents
  • Excellent horizontal scaling
  • Rich query language and aggregation
  • Built-in sharding and replication
  • Great for rapid prototyping
  • JSON-native storage
  • High performance for read/write operations

⚠️ Considerations

  • No multi-document ACID transactions before MongoDB 4.0
  • Eventual consistency model
  • No foreign key constraints
  • Can lead to data duplication
  • Memory usage can be high
  • Complex joins are challenging

Performance Comparison

| Feature            | SQLite   | MySQL     | PostgreSQL | MongoDB   |
|--------------------|----------|-----------|------------|-----------|
| Setup Complexity   | Very Low | Medium    | Medium     | Medium    |
| Read Performance   | High     | Very High | High       | Very High |
| Write Performance  | Medium   | High      | High       | Very High |
| Concurrent Users   | Low      | High      | High       | Very High |
| ACID Compliance    | Yes      | Yes       | Yes        | Limited   |
| Horizontal Scaling | No       | Limited   | Limited    | Excellent |
| Schema Flexibility | Low      | Low       | Medium     | High      |

Choosing the Right Database

🗄️ Choose SQLite When:

  • Building prototypes or small applications
  • Need zero-configuration database
  • Single-user or low-concurrency scenarios
  • Desktop applications or mobile apps
  • Development and testing environments
  • Read-heavy applications with infrequent writes

🐬 Choose MySQL When:

  • Building web applications with high read loads
  • Need proven reliability and performance
  • Working with existing MySQL infrastructure
  • Require good replication and backup tools
  • Team has MySQL expertise
  • Budget-conscious projects (open source)

🐘 Choose PostgreSQL When:

  • Need advanced SQL features and compliance
  • Working with complex data relationships
  • Require custom data types or functions
  • Building data-intensive applications
  • Need full-text search capabilities
  • Prioritize data integrity and consistency

🍃 Choose MongoDB When:

  • Schema requirements change frequently
  • Need to scale horizontally across servers
  • Working with JSON/document-based data
  • Building real-time applications
  • Rapid prototyping and development
  • Content management or catalog systems

Migration Between Databases

Akron's unified API makes it easier to migrate between different database systems:

Database Migration Example
```python
# Original SQLite database
source_db = Akron("sqlite:///source.db")

# Target PostgreSQL database
target_db = Akron("postgres://user:pass@localhost:5432/target_db")

def migrate_data(table_name, schema):
    # Create the table in the target database
    target_db.create_table(table_name, schema)

    # Extract all data from the source
    data = source_db.find(table_name)

    # Insert into the target database
    for record in data:
        target_db.insert(table_name, record)

    print(f"Migrated {len(data)} records from {table_name}")

# Define schemas for migration
schemas = {
    "users": {"id": "int", "username": "str", "email": "str"},
    "posts": {"id": "int", "user_id": "int->users.id", "title": "str", "content": "str"}
}

# Migrate each table
for table_name, schema in schemas.items():
    migrate_data(table_name, schema)

# Close connections
source_db.close()
target_db.close()

print("Migration completed successfully!")
```
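
After migrating, it is worth verifying that row counts match before switching traffic over. A small pure-Python check (`verify_counts` is an illustrative helper; the commented usage assumes the Akron `find` API used above):

```python
def verify_counts(source_rows, target_rows, table_name):
    """Compare row counts from two backends and report mismatches."""
    ok = len(source_rows) == len(target_rows)
    status = "OK" if ok else f"MISMATCH ({len(source_rows)} vs {len(target_rows)})"
    print(f"{table_name}: {status}")
    return ok

# Against live connections this would look like:
# verify_counts(source_db.find("users"), target_db.find("users"), "users")

result = verify_counts([{"id": 1}, {"id": 2}], [{"id": 1}, {"id": 2}], "users")
print(result)  # → True
```

A stricter check would also compare primary keys or a checksum per table, but counts catch most partial-migration failures.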

Next Steps

Now that you understand database-specific features:

API Reference

Learn all available methods and their usage

→ explore API methods

CLI Tools

Use command-line tools for database management

→ learn CLI commands

Getting Started

Quick start guide with practical examples

→ view quick start guide