Part 7 by Muhammad

SQLite vs LittleFS on Embedded Devices: What to Use and Why


SQLite vs LittleFS on embedded devices is a common design decision when you need reliable storage for logs, configuration and local state. This comparison helps you pick the right approach based on flash constraints, power-loss risk, query needs and firmware complexity.

You will learn what each technology really provides, how they behave on microcontrollers (MCUs) and Linux-class embedded systems, and how to map typical IoT workloads to the correct storage layer.

SQLite vs LittleFS on embedded devices

This is not a strict apples-to-apples comparison because SQLite is a database engine and LittleFS is a filesystem. On embedded targets you often decide between:

  • Using LittleFS to store files (JSON config, CSV logs, binary ring buffers, protobuf frames)
  • Using SQLite on top of a filesystem (LittleFS on MCU flash, ext4/F2FS on eMMC, UBIFS on raw NAND) to get transactional tables, indexing and queries

The practical choice comes down to whether you need queries and transactional semantics or you mainly need durable blobs and simple append/read. Many products use both: LittleFS for firmware and config files plus SQLite for structured time-series or event data, when the platform can support it.

What each one is (and is not)

LittleFS (littlefs)

LittleFS is a small embedded filesystem designed for microcontrollers with NOR flash. Its goals include power-loss resilience and reasonable wear leveling with a simple API: open files, read, write, rename and sync.

What it gives you:

  • A POSIX-like file abstraction on top of raw flash via a block device layer
  • Power-loss resilience for metadata and file operations (with proper use of sync)
  • Wear leveling suitable for NOR flash patterns

What it does not give you:

  • SQL queries, indexes or relational constraints
  • Transactions across multiple logical records (unless you implement a journal)
  • Efficient “find all rows where X” unless you build your own indexing

SQLite

SQLite is an embedded relational database engine that stores the entire database in a single file. It provides SQL (Structured Query Language), indexing, ACID transactions (Atomicity, Consistency, Isolation and Durability) and multiple journaling modes (rollback journal, write-ahead log).

What it gives you:

  • Structured storage: tables, indexes, constraints
  • Queries: filtering, aggregation, joins, ordering
  • Transactions: all-or-nothing updates and crash recovery

What it does not give you:

  • A replacement for a filesystem on raw flash (it needs a file abstraction)
  • Guaranteed good performance on very small NOR flash without careful tuning

Quick decision guide

Use this as a first-pass filter. You will refine it with the later sections.

  • Choose LittleFS if you store a few configuration files, need simple logs, want minimal footprint or you have tight RAM/flash budgets.
  • Choose SQLite if you need queries (filtering by time, device, error code), must enforce schema or you want robust multi-record transactions.
  • Consider both if you have MCU flash for files plus an external storage (PSRAM + flash, QSPI flash, SD, eMMC) and you want structured event storage.

In many IoT devices, the real decision is: “Do I keep data as files and implement my own indexing, or do I pay the cost of SQLite to get it built-in?” That is the core of SQLite vs LittleFS on embedded devices.

Flash memory realities: wear leveling, erase blocks and power loss

Your storage choice should follow the physics of the medium:

  • NOR flash (common on MCUs): small pages, erase in larger blocks, limited program/erase cycles, random reads are fast, writes require erase management.
  • NAND flash (raw NAND): requires bad block management and error correction code (ECC). Usually handled by a dedicated stack (UBI/UBIFS) or by managed devices (eMMC, SD).
  • Managed flash (eMMC, SD, SSD): internal controller performs wear leveling and presents a block device. Filesystems like ext4/F2FS work well, SQLite is common.

Wear leveling

LittleFS includes wear leveling mechanisms appropriate for NOR flash. SQLite does not do flash wear leveling by itself. On MCU NOR flash you depend on the underlying filesystem and block device to spread erases. If you run SQLite on top of LittleFS you rely on LittleFS to distribute writes, but you still need to be careful because database workloads can rewrite the same pages frequently (indexes, freelist pages, metadata).

Erase block alignment and write amplification

Flash erase blocks might be 4 KB to 64 KB or more. If your workload repeatedly updates small pieces of data, you can trigger write amplification: rewriting much more flash than the logical update size.

  • LittleFS is optimized for small embedded updates but still pays overhead for metadata and copy-on-write behavior.
  • SQLite updates B-tree pages. Updates can touch multiple pages (table B-tree, index B-tree, journal/WAL). That can multiply flash writes unless you tune page size, journaling mode and transaction batching.

Power-loss behavior

Both systems can be power-loss resilient when used correctly, but the failure modes differ:

  • LittleFS: aims to keep filesystem metadata consistent. A file write that is not sync’d can lose recent data, but the volume should mount cleanly.
  • SQLite: aims for transaction-level atomicity. If you commit a transaction and the storage stack honors flush semantics, you should not end up with partial records. On weak flash layers that lie about flush, you can see corruption.

On some MCU ports, “fsync” semantics are approximated. That matters for SQLite more than for basic file logs.

Performance characteristics: latency, throughput and write amplification

Small writes

For small append-only logs, LittleFS with an append strategy often wins because you control the format and can write sequentially. SQLite can also be efficient if you batch inserts in a single transaction and use WAL mode, but it still maintains B-trees and may sync more often than your simple binary log.

Reads and queries

If you frequently need “last N events where code=7” or “average sensor value per hour” then SQLite usually wins by orders of magnitude because indexes and query planners avoid scanning full files.

Transaction batching

The single biggest SQLite performance knob on embedded systems is transaction batching. Doing one transaction per row is slow and flash-expensive. Doing one transaction per 100 or 1000 rows can be dramatically faster and reduce write amplification.
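The batching effect can be sketched with Python's built-in sqlite3 module (an in-memory database here, purely for illustration):

```python
# Sketch: per-row commits vs one batched transaction.
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)  # autocommit; we manage transactions
con.execute("CREATE TABLE log(ts INTEGER, value REAL)")

rows = [(i, i * 0.5) for i in range(1000)]

# Slow pattern (avoid): one transaction, and potentially one flash sync, per row:
#   for r in rows:
#       con.execute("BEGIN"); con.execute("INSERT ...", r); con.execute("COMMIT")

# Fast pattern: all rows inside a single transaction.
con.execute("BEGIN")
con.executemany("INSERT INTO log(ts, value) VALUES(?, ?)", rows)
con.execute("COMMIT")

count = con.execute("SELECT COUNT(*) FROM log").fetchone()[0]
print(count)  # 1000
con.close()
```

On real flash the difference is dominated by sync frequency: one commit means one journal/WAL sync instead of a thousand.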

Concurrency

Many embedded devices have a single writer. SQLite supports multiple readers with a single writer (especially with WAL). LittleFS supports file operations but application-level locking is up to you.

Data modeling and query needs

This is where the “database vs filesystem” boundary is clearest.

LittleFS-friendly models

  • Configuration: a few files like /config.json, /wifi.txt and a device certificate
  • Firmware slots: OTA images stored as files
  • Append-only logs: a rolling log file, or a set of chunk files
  • Ring buffers: fixed-size binary files with head/tail pointers stored separately

SQLite-friendly models

  • Time-series events: timestamp, sensor_id, value, quality flags
  • Queues with state: messages to upload with retry counts, backoff timestamps and per-record status
  • Indexed lookups: “find by device_id”, “find by UUID”, “deduplicate by hash”
  • Aggregations: daily counts, min/max, percentile approximations (with custom logic)
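A queue-with-state table is a good illustration of why SQLite earns its footprint here. The schema and column names below are illustrative, not a prescribed design:

```python
# Sketch: an upload queue with per-record status, retry count and backoff timestamp.
import sqlite3
import time

con = sqlite3.connect(":memory:")
con.execute("""
CREATE TABLE upload_queue (
  id INTEGER PRIMARY KEY,
  payload BLOB NOT NULL,
  status TEXT NOT NULL DEFAULT 'pending',   -- pending | sent | failed
  retries INTEGER NOT NULL DEFAULT 0,
  next_attempt_ms INTEGER NOT NULL DEFAULT 0
)""")
con.execute("CREATE INDEX idx_queue_due ON upload_queue(status, next_attempt_ms)")

now_ms = int(time.time() * 1000)
con.execute("INSERT INTO upload_queue(payload) VALUES(?)", (b"\x01\x02",))
con.commit()

# Claim the next record that is due for upload (uses the index).
row = con.execute(
    "SELECT id, payload FROM upload_queue "
    "WHERE status='pending' AND next_attempt_ms <= ? ORDER BY id LIMIT 1",
    (now_ms,)).fetchone()

# On a failed upload: bump the retry count and push the next attempt out.
con.execute(
    "UPDATE upload_queue SET retries = retries + 1, next_attempt_ms = ? WHERE id = ?",
    (now_ms + 2_000, row[0]))
con.commit()
retries = con.execute(
    "SELECT retries FROM upload_queue WHERE id=?", (row[0],)).fetchone()[0]
print(retries)  # 1
```

Doing the same with files means inventing your own status tracking, indexing and atomic updates, which is exactly the hidden cost discussed next.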

The hidden cost of file-based “databases”

When you build your own record store on LittleFS, you also build:

  • Indexing or scan logic
  • Corruption detection (checksums)
  • Compaction and deletion handling
  • Crash recovery and atomic updates across multiple files

That engineering time is often underestimated in SQLite vs LittleFS on embedded devices.

Reliability and recovery behavior

LittleFS reliability model

LittleFS uses copy-on-write style techniques for metadata so the filesystem stays mountable after resets. For application data integrity, you still need to design your file formats. If you need atomic updates of structured data, you typically use:

  • Write-rename strategy: write a new file, fsync, then rename over the old file
  • Two-phase commit in files: write record blocks with sequence numbers and CRCs
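The second strategy can be sketched as a record framing format. The layout below (CRC32, sequence number, length, payload) is one possible choice, not a standard:

```python
# Sketch: length-prefixed records with a sequence number and CRC32, so a torn
# tail write can be detected and skipped on the next boot.
import struct
import zlib

HDR = struct.Struct("<IIH")  # crc32, sequence number, payload length

def encode_record(seq: int, payload: bytes) -> bytes:
    crc = zlib.crc32(struct.pack("<IH", seq, len(payload)) + payload)
    return HDR.pack(crc, seq, len(payload)) + payload

def decode_records(buf: bytes):
    """Yield (seq, payload) for every record whose CRC checks out; stop at corruption."""
    off = 0
    while off + HDR.size <= len(buf):
        crc, seq, length = HDR.unpack_from(buf, off)
        payload = buf[off + HDR.size : off + HDR.size + length]
        if len(payload) != length or zlib.crc32(struct.pack("<IH", seq, length) + payload) != crc:
            break  # torn or corrupt record: treat the rest of the file as garbage
        yield seq, payload
        off += HDR.size + length

log = encode_record(1, b"temp=21.5") + encode_record(2, b"temp=21.7")
torn = log + encode_record(3, b"temp=21.9")[:-3]  # simulate power loss mid-write
print([seq for seq, _ in decode_records(torn)])  # [1, 2]
```

Record 3 was cut short by the simulated power loss, so recovery keeps records 1 and 2 and discards the torn tail.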

SQLite reliability model

SQLite provides atomic commits via journaling. Two common modes:

  • Rollback journal: writes original pages to a journal file, then overwrites database pages
  • WAL (Write-Ahead Logging): appends changes to a WAL file, later checkpoints into the main DB

On embedded systems, WAL often performs better for write-heavy workloads, but it creates an additional file and requires periodic checkpointing. Rollback journal is simpler but can be slower due to more frequent full sync patterns.
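Both the mode switch and the checkpoint are plain pragmas. A minimal sketch on a file-backed database (WAL requires a real file, not :memory:):

```python
# Sketch: enabling WAL and forcing a checkpoint that folds the WAL back into the main DB.
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)

mode = con.execute("PRAGMA journal_mode=WAL;").fetchone()[0]
print(mode)  # 'wal'

con.execute("CREATE TABLE t(x)")
con.execute("INSERT INTO t VALUES (1)")
con.commit()

# Checkpoint: copy WAL frames into the main database file and truncate the WAL.
busy, log_frames, ckpt_frames = con.execute("PRAGMA wal_checkpoint(TRUNCATE);").fetchone()
print(busy)  # 0 means the checkpoint was not blocked by another connection
con.close()
```

On an embedded device you would run the checkpoint from a periodic timer so the WAL file cannot grow without bound.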

Resource footprint: code size, RAM and CPU

Code size

  • LittleFS: typically tens of kilobytes, depending on configuration and platform glue
  • SQLite: can be hundreds of kilobytes to over a megabyte depending on features (SQL parser, pager, indexes, optional extensions)

If you run on an MCU with 512 KB to 2 MB of flash for firmware, SQLite may still fit but it competes with connectivity stacks (TLS, MQTT (Message Queuing Telemetry Transport), HTTP, BLE). Feature flags matter.

RAM

SQLite uses page cache memory plus per-query allocations. You can configure cache size, page size and memory allocator. LittleFS uses small buffers (read/program/cache/lookahead) and tends to be more predictable.

CPU

SQL parsing and B-tree maintenance costs CPU. If you store and query thousands of rows, SQLite’s CPU cost is often worth it because it avoids full-file scans. If you store a few dozen values, LittleFS plus a simple format wins.

Security considerations

  • At-rest encryption: SQLite does not include encryption by default; you need SQLCipher or a platform layer (dm-crypt, hardware encrypted storage). LittleFS also does not encrypt by default; you need an encrypted block device layer or per-file encryption.
  • Data integrity: add CRCs to file formats on LittleFS. SQLite provides internal consistency mechanisms, but you should still use PRAGMA integrity_check for diagnostics and consider application-level checksums for payload blobs.
  • Key storage: store device keys in secure elements or TrustZone when possible. A filesystem or database alone does not solve secret protection.
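The integrity check mentioned above is a single pragma that returns the string "ok" on a healthy database:

```python
# Sketch: periodic diagnostic using PRAGMA integrity_check.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE cfg(k TEXT PRIMARY KEY, v TEXT)")
con.execute("INSERT INTO cfg VALUES('mqtt_host', 'example.local')")  # hypothetical config row
con.commit()

result = con.execute("PRAGMA integrity_check;").fetchone()[0]
print(result)  # 'ok'
con.close()
```

Any other result is a list of problem descriptions, which you would log and report rather than try to repair in place.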

Integration patterns and architectures

Pattern 1: LittleFS only (files plus a ring buffer)

Best for MCUs that mainly buffer data for uplink.

  • Binary chunk files: log_0001.bin, log_0002.bin
  • Manifest file with last sent offset
  • CRC per record for partial writes

Pattern 2: SQLite on Linux embedded

Best for gateways and Linux-based edge devices. You run SQLite on ext4/F2FS and rely on mature fsync semantics.

  • Use WAL mode for high write rates
  • Use a periodic checkpoint timer
  • Back up the database by copying with the SQLite online backup API when needed
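The online backup API is exposed directly in Python as Connection.backup (Python 3.7+); a minimal sketch:

```python
# Sketch: online backup of a live database via sqlite3.Connection.backup.
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
src = sqlite3.connect(os.path.join(tmp, "events.db"))
src.execute("CREATE TABLE events(ts INTEGER, v REAL)")
src.execute("INSERT INTO events VALUES (1, 2.5)")
src.commit()

dst = sqlite3.connect(os.path.join(tmp, "events.backup.db"))
with dst:
    src.backup(dst)  # copies pages consistently without a plain file copy race

n = dst.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(n)  # 1
```

Unlike copying the .db file directly, the backup API produces a consistent snapshot even while writers are active.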

Pattern 3: SQLite on LittleFS (advanced)

This can work when you truly need SQL on an MCU and you have enough flash/RAM and stable fsync behavior. You must tune aggressively:

  • Batch inserts in transactions
  • Choose a page size compatible with flash geometry
  • Minimize indexes
  • Test power-loss scenarios repeatedly
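The page-size point deserves a concrete note: the pragma only takes effect before the database file is materialized (or after a VACUUM). A sketch, with 4096 chosen as an assumed match for 4 KB flash geometry:

```python
# Sketch: setting the SQLite page size before the database is created.
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "tuned.db")
con = sqlite3.connect(path)

# Must run before the first table is created; ignored on an existing database.
con.execute("PRAGMA page_size=4096;")
con.execute("CREATE TABLE t(x)")  # first write materializes the database file
con.commit()

page_size = con.execute("PRAGMA page_size;").fetchone()[0]
print(page_size)  # 4096
con.close()
```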

Working code examples

Example 1: LittleFS atomic config update (C, POSIX-style)

This pattern protects you from partial writes: you write a new file, flush it, then rename it over the old one. Many embedded SDKs expose LittleFS via a POSIX-like VFS (Virtual File System) layer.

/* Atomically update a JSON config file on a LittleFS volume using write-then-rename. */
#include <stdio.h>
#include <string.h>
#include <errno.h>

#if defined(__unix__) || defined(__APPLE__)
#include <unistd.h> /* fsync */
#endif

static int write_file_fsync(FILE *f) {
    if (fflush(f) != 0) return -1;

    /* If your VFS exposes fsync, use it. On some embedded ports this is available as fsync(fileno(f)). */
    #if defined(__unix__) || defined(__APPLE__)
    if (fsync(fileno(f)) != 0) return -1;
    #endif

    return 0;
}

int littlefs_write_config_atomic(const char *path, const char *json) {
    char tmp_path[128];
    if (snprintf(tmp_path, sizeof(tmp_path), "%s.tmp", path) >= (int)sizeof(tmp_path)) {
        errno = ENAMETOOLONG;
        return -1;
    }

    FILE *f = fopen(tmp_path, "wb");
    if (!f) return -1;

    size_t n = fwrite(json, 1, strlen(json), f);
    if (n != strlen(json)) {
        fclose(f);
        remove(tmp_path); /* do not leave a partial temp file behind */
        return -1;
    }

    if (write_file_fsync(f) != 0) {
        fclose(f);
        remove(tmp_path);
        return -1;
    }

    if (fclose(f) != 0) {
        remove(tmp_path);
        return -1;
    }

    /* Rename is typically atomic at the filesystem level. */
    if (rename(tmp_path, path) != 0) {
        return -1;
    }

    return 0;
}

Notes for embedded targets: If your platform does not provide fsync, look for the filesystem-specific sync call (for example lfs_file_sync) via the native LittleFS API. Also confirm that rename is atomic in your VFS binding.

Example 2: SQLite WAL mode, batched inserts and indexed queries (Python)

This example runs on Linux-based embedded systems (Raspberry Pi, OpenWrt, Yocto) and demonstrates the two key embedded optimizations: WAL mode and batching inserts in a single transaction.

# Create an SQLite database for sensor events, enable WAL, batch inserts and query via an index.
import sqlite3
import time
import random

DB_PATH = "events.db"

# isolation_level=None puts the connection in autocommit mode, so the explicit
# BEGIN/COMMIT below are unambiguous and not mixed with implicit transactions.
con = sqlite3.connect(DB_PATH, isolation_level=None)
cur = con.cursor()

# Pragmas that commonly help on embedded Linux. Adjust synchronous based on your power-loss tolerance.
cur.execute("PRAGMA journal_mode=WAL;")
cur.execute("PRAGMA synchronous=NORMAL;")
cur.execute("PRAGMA temp_store=MEMORY;")

cur.execute("""
CREATE TABLE IF NOT EXISTS events (
  ts_ms INTEGER NOT NULL,
  sensor_id INTEGER NOT NULL,
  value REAL NOT NULL
);
""")
cur.execute("CREATE INDEX IF NOT EXISTS idx_events_sensor_ts ON events(sensor_id, ts_ms);")
con.commit()

# Batch insert 1000 rows in a single transaction.
now = int(time.time() * 1000)
rows = [(now + i, random.randint(1, 4), random.random() * 100.0) for i in range(1000)]

cur.execute("BEGIN;")
cur.executemany("INSERT INTO events(ts_ms, sensor_id, value) VALUES(?, ?, ?);", rows)
cur.execute("COMMIT;")

# Query recent data for one sensor using the index.
sensor = 2
since = now + 800
cur.execute(
    "SELECT ts_ms, value FROM events WHERE sensor_id=? AND ts_ms >= ? ORDER BY ts_ms ASC;",
    (sensor, since)
)
result = cur.fetchall()
print(f"rows={len(result)} first={result[0] if result else None}")

con.close()

Embedded tuning tips: For maximum durability use synchronous=FULL, but expect higher latency. If you use a UPS or supercap, NORMAL is often acceptable. Always test your own power-loss profile.

Side-by-side comparison table

| Dimension | LittleFS | SQLite |
| --- | --- | --- |
| What it is | Filesystem for MCUs, power-loss aware | Embedded relational database in a file |
| Best at | Storing files, configs, simple logs | Queries, indexes, transactions, structured data |
| Typical platform | MCU with NOR flash | Linux embedded, RTOS with enough flash/RAM |
| Power-loss behavior | Filesystem consistency; app data depends on your format | Transaction-level atomicity with journaling/WAL |
| Wear leveling | Yes (designed for raw flash) | No (relies on filesystem or storage layer) |
| Query capability | No (you implement parsing and indexing) | Yes (SQL, query planner, indexes) |
| Implementation complexity | Low to medium | Medium to high (tuning, pragmas, schema design) |
| Footprint | Small | Larger |
| Common failure mode | Application-level log corruption without CRC or atomic update pattern | Corruption if flush semantics are broken or the storage stack is misconfigured |

Recommendations by device class and workload

Battery-powered MCU sensors (tens to hundreds of KB per day)

  • Default: LittleFS with chunked binary logs, CRC per record
  • Why: lowest overhead, predictable RAM usage, you can control flash writes
  • When to consider SQLite: only if you must query local history often (not just upload)

Connected appliances with moderate storage (QSPI flash, external flash)

  • Default: LittleFS for config and OTA, optional SQLite for event store if you need local analytics
  • Key design point: batch writes and use a retention policy (delete old rows or rotate files)
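The retention policy is worth sketching: deleting in bounded batches keeps any single write transaction short, which matters on slow flash. The table name and batch size are illustrative:

```python
# Sketch: purge old rows in bounded batches so one delete cannot hold a long transaction.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events(ts_ms INTEGER, v REAL)")
con.executemany("INSERT INTO events VALUES (?, ?)", [(i, 0.0) for i in range(500)])
con.commit()

cutoff_ms = 400  # keep only rows at or newer than this timestamp
while True:
    cur = con.execute(
        "DELETE FROM events WHERE rowid IN "
        "(SELECT rowid FROM events WHERE ts_ms < ? LIMIT 100)", (cutoff_ms,))
    con.commit()
    if cur.rowcount == 0:
        break

remaining = con.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(remaining)  # 100
```

The file-based equivalent is log rotation: delete whole chunk files older than the cutoff, which is even cheaper.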

Gateways and edge compute (Linux, eMMC/SSD)

  • Default: SQLite (or a heavier time-series DB if needed) on ext4/F2FS
  • Why: queries, aggregation, robust tooling, easy export and debugging
  • LittleFS role: typically none, unless you also have an MCU coprocessor

High write-rate logs (crash logs, metrics, traces)

  • If you mostly append and rarely query: prefer LittleFS-style file logs with rotation.
  • If you must slice and aggregate locally: SQLite with WAL and careful checkpointing.

For most intermediate embedded teams, the practical answer to SQLite vs LittleFS on embedded devices is: start with LittleFS when your data model is file-like, and adopt SQLite once you can show that you need queries and transactional updates and that you have the resources to support them.

Conclusion

LittleFS is the right baseline when you need a robust filesystem on MCU flash for configs, OTA images and simple logs. SQLite becomes the better choice when your device needs structured storage with indexing, filtering and transaction semantics. If you frame the problem as SQLite vs LittleFS on embedded devices and evaluate flash geometry, power-loss behavior and query requirements first, you can avoid both overengineering and brittle file-based “databases”.