OpenCAS Maintainers & Commercial Partners

High-performance block storage caching for modern data centers

We build, maintain, and commercially support OpenCAS — the open-source framework for accelerating block storage with intelligent caching. From kernel-level engineering to enterprise integration.

Since 2016 · Developing CAS technology
BSD-3 · Open-source core license
Global · Customer reach

The team behind OpenCAS

Unvertical is a Poland-based company founded by the core maintainers of the Open Cache Acceleration Software (OpenCAS) project. We've been building CAS technology since 2016 — first inside Intel, then as the lead open-source maintainers and commercial partners for organizations worldwide after Intel CAS reached end-of-life in 2023.

Our expertise sits at the intersection of Linux kernel development, storage systems architecture, and NVM technology. We know this codebase intimately because we wrote it, and we understand the workloads it serves because we've deployed it across production environments worldwide.

We operate on an open-core model: the foundational caching engine remains fully open source under the BSD-3 license, while we offer enterprise tooling, professional services, and custom development to organizations that need production-grade support and integration.

Kernel-Level Expertise

Deep experience with Linux kernel storage subsystem, out-of-tree module development, and performance optimization at the block layer.

🔓

Open Source First

Core caching engine (OCF) and Linux adapters are and will remain open source. We build trust through transparency.

🌍

Global Reach, European Roots

Based in Poland, serving data center operators and enterprise teams across Europe, North America, and Asia-Pacific.

Production Ready · Open Source

OpenCAS

Open Cache Acceleration Software — a mature, battle-tested block-level caching framework that accelerates backend storage using higher-performance local media.

Accelerate your storage, transparently

OpenCAS sits between your applications and backend storage devices, caching frequently accessed data on faster local media (NVMe SSDs, Intel Optane, or any high-performance block device). It operates at the block level as a Linux kernel module, which means it's transparent to applications — no code changes, no API integrations, no filesystem dependencies.
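As a sketch of what that transparent setup looks like with the casadm management CLI (device paths are placeholders for your environment; run `casadm -H` to confirm the exact flags for your installed version):

```shell
# Start a cache instance (id 1) on a fast NVMe partition in write-back mode.
# Cache modes: wt (write-through), wb (write-back), wa (write-around), pt (pass-through).
casadm -S -i 1 -d /dev/nvme0n1p1 -c wb

# Attach the slower backend device as a core device behind cache 1.
casadm -A -i 1 -d /dev/sdb

# The cached volume appears as an ordinary block device (e.g. /dev/cas1-1):
# format and mount it as usual -- applications just see a normal disk.
mkfs.ext4 /dev/cas1-1
mount /dev/cas1-1 /mnt/fast

# Inspect cache/core topology and per-instance statistics.
casadm -L
casadm -P -i 1
```

No application or filesystem changes are involved at any step; removing the cache later returns I/O to the backend device.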

At the core is the Open CAS Framework (OCF), a high-performance caching meta-library written in C. OCF is modular and embeddable: it powers both the Linux kernel adapter (Open CAS Linux) and an SPDK block device adapter for user-space storage applications.

OpenCAS supports multiple cache modes (write-through, write-back, write-around, pass-through), flexible I/O classification by block size, file path, directory, or metadata type, and ships with a comprehensive test framework for validation and CI/CD integration.
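The trade-off between the cache modes can be illustrated with a toy write-latency model. The microsecond figures below are illustrative assumptions, not measurements, and write-through is modeled as issuing cache and backend writes in parallel:

```python
# Toy model of per-write acknowledgement latency under each OpenCAS cache mode.
# All latency figures are illustrative placeholders, not benchmarks.
CACHE_US = 20      # fast local cache device write latency
BACKEND_US = 2000  # slow backend device write latency

def write_latency(mode: str) -> int:
    """Latency until the application sees the write acknowledged."""
    if mode == "wb":   # write-back: ack after the cache write, flush later
        return CACHE_US
    if mode == "wt":   # write-through: ack after cache AND backend complete
        return max(CACHE_US, BACKEND_US)  # modeled as issued in parallel
    if mode == "wa":   # write-around: writes bypass the cache entirely
        return BACKEND_US
    if mode == "pt":   # pass-through: caching disabled
        return BACKEND_US
    raise ValueError(f"unknown mode: {mode}")

for mode in ("wb", "wt", "wa", "pt"):
    print(mode, write_latency(mode), "us")
```

The model captures why write-back favors write-heavy workloads (acknowledgement at cache speed, with the consistency implications that entails) while write-through trades write latency for a cache that never holds the only copy of dirty data.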

The entire project is licensed under BSD-3 — no vendor lock-in, no usage restrictions, no surprises.

Key Capabilities

  • Block-level caching — transparent to all applications and filesystems
  • Multiple cache modes: write-through, write-back, write-around, pass-through
  • Flexible I/O classification by block size, file, directory, or metadata
  • Linux kernel module with active maintenance for current kernel versions
  • SPDK adapter for user-space storage applications
  • Modular OCF core — embeddable in custom storage stacks
  • Comprehensive test framework and CI/CD tooling
  • BSD-3 license — fully open, no restrictions

Open core, honest boundaries

The caching engine is free and open source. Premium tools and professional services are how we sustain the project and deliver enterprise-grade reliability.

Open Source

BSD-3 License · Free forever
  • Open CAS Framework (OCF) — the core caching library
  • Open CAS Linux — kernel adapters and CLI management tools
  • SPDK block device adapter
  • Test framework and CI/CD automation
  • Community documentation and getting-started guides

Premium

Subscription · Per-node licensing
  • Metadata migration tools between OpenCAS versions
  • Integration layers for enterprise storage platforms
  • Advanced telemetry and monitoring dashboards
  • Pre-built Prometheus/Grafana monitoring stack
  • Priority bug fixes and security patches

From evaluation to production

Whether you need help evaluating OpenCAS, integrating it into a custom kernel, or tuning it for your workload — we're the team that wrote it.

01

Consulting

Architecture review and workload analysis for your storage stack. We assess whether caching is the right approach, recommend configuration strategies, and design integration plans for your infrastructure.

02

Configuration Validation

OpenCAS runs as an out-of-tree kernel module. We build, test, and certify releases for your specific kernel version, OS distribution, and hardware — including custom or modified kernels.

03

Custom Development

Need a new integration layer for your storage platform? A custom caching policy? We develop features in the core engine and build dedicated plugins for your environment.

04

Premium Support

SLA-backed technical support with guaranteed response times. Direct access to the engineers who maintain the codebase. Priority issue resolution and root cause analysis.

05

Training

Hands-on programs for your infrastructure team. Covers OpenCAS architecture, deployment best practices, performance tuning, troubleshooting, and operational runbooks. On-site or remote.

06

Benchmarking & PoC

We run OpenCAS on your actual workloads, produce detailed performance reports, and help you make informed decisions with real data — not vendor marketing numbers.

Where OpenCAS makes the difference

OpenCAS serves environments where backend storage latency or throughput directly limits business outcomes.

🖥

Virtualization Platforms

KVM/QEMU, Proxmox, OpenStack — accelerate virtual disk I/O with transparent block-level caching. No guest OS modifications required.

🗄

Database Acceleration

Reduce query latency for databases running on SAN or shared storage. Effective for mixed OLTP/OLAP workloads where the hot dataset fits in cache.

🏢

HCI & Software-Defined Storage

Storage vendors can embed OCF directly into their products. OEM licensing available for ISVs building HCI or SDS solutions.

📊

SAN & Direct-Attached Acceleration

Cache frequently accessed blocks from slower SAN or DAS backends on local NVMe. Transparent to applications, no storage rebuild needed.

Ready to accelerate your storage?

Whether you need a proof of concept, expert guidance on your caching strategy, or ongoing production support — let's talk.

Early Access · Proof of Concept

NVMCC

NVM Cache for Compute — next-generation caching middleware that replaces expensive DRAM page cache with cost-effective NVM in disaggregated storage architectures.

Trade DRAM cost for NVM capacity

Modern data centers separate compute from storage for scalability, but this creates a latency problem. The standard fix — buffering data in DRAM page cache — is expensive and getting more so.

NVMCC takes a fundamentally different approach.

Observe the mismatch

In disaggregated storage, DRAM used for caching is vastly overprovisioned in speed relative to the remote backend it fronts. That performance headroom is wasted money.

Replace with cheaper media

NVM (SLC NAND) is ~25× cheaper per GB than DDR5 but still orders of magnitude faster than remote storage. More capacity means higher hit ratio, which means better aggregate performance.

Use the spare bandwidth

Latency-bound workloads can't saturate network links to storage nodes. NVMCC uses that idle bandwidth for speculative prefetching onto local NVM — trading bandwidth for latency.
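The capacity and bandwidth arguments above can be put in rough numbers. Everything in this sketch is an illustrative assumption (latencies, hit ratios, link speed), not a measurement:

```python
# Toy model of the NVMCC trade-offs. All figures are illustrative
# assumptions, not measurements.

BACKEND_US = 1000.0   # remote storage access latency (~1 ms)

def avg_latency_us(hit_ratio: float, cache_us: float) -> float:
    """Expected access latency given a cache hit ratio and hit latency."""
    return hit_ratio * cache_us + (1 - hit_ratio) * BACKEND_US

# Cheaper media: the same budget buys ~25x more NVM than DRAM. Suppose
# the extra capacity lifts the hit ratio from 60% to 95% (workload-dependent).
dram = avg_latency_us(0.60, 0.1)    # small DRAM cache, ~100 ns hits
nvm = avg_latency_us(0.95, 10.0)    # large NVM cache, ~10 us hits
print(f"DRAM cache: {dram:.1f} us avg   NVM cache: {nvm:.1f} us avg")
assert nvm < dram   # slower medium, better aggregate latency

# Spare bandwidth: a latency-bound workload leaves the storage link
# mostly idle; that headroom can drive speculative prefetch onto NVM.
LINK_GBPS = 25.0      # link to the storage nodes
demand_gbps = 3.0     # bandwidth the foreground workload actually uses
spare_gbps = LINK_GBPS - demand_gbps
blocks_per_sec = spare_gbps * 1e9 / (4096 * 8)   # 4 KiB blocks
print(f"~{blocks_per_sec / 1e3:.0f}K 4KiB blocks/s of prefetch headroom")
```

Under these assumptions the large-but-slower NVM cache wins on average latency despite each hit being ~100× slower than DRAM, and the idle link carries hundreds of thousands of prefetched blocks per second.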

Why this matters now

DRAM costs are rising sharply as manufacturers shift capacity from DDR5 to HBM for AI workloads. A technology that reduces DRAM dependency addresses an increasingly urgent need.

~25×
Cost difference
DDR5 at ~$25/GB vs SLC NAND at ~$1/GB. More cache capacity for less money.
+50%
DRAM price increase
Spot prices rose ~50% in Q4 2025, with contract prices projected to follow at 45-50% QoQ.
30-50%
DRAM share of server CAPEX
Memory is one of the largest line items in server costs. Reducing it has outsized impact on TCO.
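The headline prices translate directly into cache capacity per dollar. Using the per-GB figures quoted above (and a hypothetical per-node budget purely for illustration):

```python
# Cache capacity per budget dollar, using the $/GB figures quoted above.
DDR5_PER_GB = 25.0   # $/GB
SLC_PER_GB = 1.0     # $/GB

budget = 10_000.0    # hypothetical per-node cache budget in dollars
dram_gb = budget / DDR5_PER_GB
nvm_gb = budget / SLC_PER_GB

print(f"${budget:,.0f} buys {dram_gb:.0f} GB of DRAM or {nvm_gb:,.0f} GB of NVM")
```

The same spend yields 25× the cache capacity, which is what makes the higher hit ratio (and thus the better aggregate performance) reachable in the first place.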

The first NVM-first compute-side cache

Every existing caching solution treats NVM as a secondary tier behind DRAM. NVMCC inverts this hierarchy — NVM is the primary caching layer, not an afterthought.

NVM-First Cache Engine

Purpose-built for NVM characteristics: read/write asymmetry, higher latency than DRAM, and coarser I/O granularity. Not a DRAM engine with NVM bolted on.

📡

Bandwidth-Aware Prefetching

Real-time analysis of I/O patterns and network utilization. Proactively fetches data onto local NVM using spare link bandwidth — possible only with NVM's capacity advantage.

🧠

Minimal DRAM Footprint

Redesigned metadata structures to minimize the engine's own DRAM consumption. The whole point is to reduce memory costs — the engine itself can't be the problem.

🔌

Transparent Block-Level Integration

Plugs in as a middleware layer compatible with Ceph, KVM/QEMU, Proxmox, HDFS, and distributed databases. No application changes required.

Built for disaggregated storage

NVMCC is designed for environments where compute and storage are separated — the architecture pattern where its NVM-first approach delivers the most value.

Distributed Storage Clusters

Ceph, NFS-over-RDMA, NVMe-oF — reduce latency from remote storage nodes by caching and prefetching on local NVM at the compute layer.

🖥

Virtualization with Remote Storage

KVM/QEMU, Proxmox, OpenStack — accelerate VM disk I/O transparently while cutting the DRAM budget allocated to host page cache.

🗄

Databases on Shared Storage

Reduce query latency and DRAM costs for databases running against remote or shared storage backends. Particularly effective for read-heavy and mixed workloads.

📊

Big Data & Analytics

HDFS, Spark, data lakes — cache iteratively accessed datasets locally on NVM. Speculative prefetching is especially effective for sequential scan patterns.

Help shape the future of compute-side caching

NVMCC is currently at the proof-of-concept stage. We're looking for 2–3 design partners with disaggregated storage environments who want to validate the technology on real workloads and influence the product roadmap.

The maintainers are the vendor

You're not buying from a company that packages someone else's open-source project. You're working directly with the people who architect, maintain, and evolve the technology.

10+

Years on CAS

Building caching technology since 2016 — from Intel CAS to OpenCAS and now NVMCC.

0

Layers of Indirection

Your support tickets are handled by the people who write the code. No outsourced L1 chains.

BSD-3

No Lock-In

The core is open source. You can fork, audit, or self-support. We earn your business on quality.

2

Products, One Stack

OpenCAS and NVMCC share the OCF foundation. Expertise in one deepens the other.

Let's discuss your infrastructure

We respond to all serious inquiries within one business day. Whether you're exploring OpenCAS for production, applying for the NVMCC early access program, or want to discuss a partnership — reach out and we'll match you with the right engineer.

📍
Poland, EU · Serving customers globally