The Case Against Date Partitioning

Blog kdb+ 17 Oct 2024

Jonny Press

Date partitioning is the usual model used in kdb+ historical (on-disk) databases. We believe that for some setups other options should be considered. This is particularly relevant for cloud-based deployments such as those running under Amazon FinSpace with Managed kdb Insights, where moving data quickly to longer-term storage leads to greater scalability and reduced complexity.

Use Cases

Design choices should be based on use cases. In the kdb+ domain there are two use cases that dominate:

  1. Research / Quant analytics
  2. Real time analytics

(1) usually requires access to historic (T+1) data only.

(2) can be further sub-divided:

  • a. Mission-critical analytics. These must always run on dedicated processes rather than relying on shared resources
  • b. Ad-hoc manual investigations
  • c. Non-critical reporting or visualisation

If we consider different database structures, can we change how we approach these use cases?

Basic Real-Time Capture

[Figure: basic kdb+ tick capture architecture]
kdb+ tick is the de facto standard approach to collecting real-time data into a kdb+ system. The majority of KX customers use this or an extended version (such as TorQ, KX Platform or an in-house framework) that follows largely the same architectural concepts. This entails saving data to a date-partitioned on-disk structure, where new data is added in new directories every 24 hours.

The partitioning scheme should be determined by how data arrives and how it is accessed. kdb+ is flexible: in addition to date, it supports month, year and int partitioning. In a kdb+ tick-esque system, date is usually optimal because the data set is large and grows on a daily basis (so date aligns with how the data arrives) and users usually want to query over a range of dates (so date aligns with how the data is accessed).
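To make this concrete, here is a minimal sketch of a date-partitioned HDB and how the partition column is used at query time; the path /data/hdb is an assumption for illustration, and t is the example table used in the queries later on.

// on disk, each date gets its own directory containing a splayed copy of each table
// /data/hdb/2025.01.01/t/  /data/hdb/2025.01.02/t/  ...
q)\l /data/hdb
q)select count i by sym from t where date=2025.01.02   // the virtual date column restricts the query to a single partition directory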

Intraday Database

[Figure: kdb+ tick capture architecture with intraday database]
A standard kdb+ tick setup holds the current day's worth of data in memory, and persists it to disk on a 24-hour cycle. When kdb+ was released in the early 2000s this was a perfectly sufficient approach, particularly because the majority of the early customers operated in the equities space, which had significant market-close periods. Since then the volume of data captured by the average system has grown faster than the cost of memory has fallen, and systems operating in 24-hour markets are commonplace.

A common approach is to modify the capture architecture to include an "intraday" database. This persists the intraday data to disk and serves it to users, which has the benefit of reducing the system memory requirement. However, the intraday database is usually structured differently for performance reasons and is therefore accessed differently from either the on-disk database or the in-memory database. It introduces additional processes, query complexity, and coordination across processes.

An alternative is to keep a single on-disk database but partition it more granularly than by date, writing data down throughout the day rather than on a 24-hour cycle. This brings several potential benefits.

Memory Usage

Writing data down more frequently could lead to lower memory usage, without extending and complicating the tick capture architecture.

Partition Size

More granular (smaller) partitions lead to easier management, particularly if data needs to be adjusted. Partitions could also become "fixed" in size: during periods of lower data volumes the partitions would simply span longer time ranges. The caveat is that more partitions mean more individual files, and more files to access for a query spanning a particular time range.

Availability of Data in a Single Place

In any installation there will be multiple use cases which don't require real-time data but still require "today's" data. Given that the data will be available directly within the on-disk structure in a more timely manner, it may be possible to serve these from a single, simple component rather than having to join data from multiple processes.

More granular partitioning could be done on a fixed period (such as an hour) or arbitrarily. Both have pros and cons.

The database must be integer partitioned. In the same way that a date partition works, the integer should represent the lower bound of the time period it contains. We could use the integer representation of the timestamp type, which would give us the ability to partition data with full granularity:

q)`long$2025.01.10D09:00:00.000000000
789814800000000000

However, this would mean that we can't have partitions for data prior to 2000, as kdb+ represents these timestamps with negative numbers, which would cause issues with file system representation. Instead we could pick something human-readable, or a different integer time mapping, such as minutes from the Unix epoch.
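As a quick illustration of both points, the sketch below (using arbitrary timestamps) shows a pre-2000 timestamp mapping to a negative long, and the minutes-from-the-Unix-epoch conversion used in the examples later on:

// pre-2000 timestamps cast to negative longs, which can't be used as partition directory names
q)`long$1999.12.31D12:00:00.000000000
-43200000000000

// minutes from the Unix epoch remain positive and reasonably human-readable
q)`int$(2025.01.10D09:00:00.000000000 - 1970.01.01D)%0D00:01
28941660i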

In a similar way that date partitioning assumes one day of data per partition, if we chose a fixed time period for each partition then the partitioning is explicit and it's easy for all external users or applications to determine which partitions are to be accessed for the query they require. If we have variable-sized partitions then which partitions to access must be determined from within the database itself (or, better, via a data access API). Variable-sized partitions bring an extra level of customisability, for example rolling partitions more frequently during high-volume periods to maintain similarly sized datasets per partition, and therefore in memory.
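For the fixed-period case, the mapping from a query window to partitions is simple enough to live in the client or API layer. Below is a minimal sketch assuming fixed hourly partitions named as minutes from the Unix epoch; the helper names hourstart and partsinrange are made up for illustration:

// lower bound of the hour containing a timestamp, expressed in minutes from the Unix epoch
q)tstomin:{`int$(x - 1970.01.01D)%0D00:01}
q)hourstart:{60*floor tstomin[x]%60}

// all hourly int partitions covering the query window [s;e]
q)partsinrange:{[s;e] hourstart[s]+60*til 1+(hourstart[e]-hourstart s) div 60}
q)partsinrange[2025.01.10D09:15;2025.01.10D11:45]
28941660 28941720 28941780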

This is great, but all my data is already date partitioned...

On the plus side, you don't have to re-shuffle the data into more granular partitions; you can just rename the existing partitions with an appropriate lower-bounded time scheme. Once the renaming is done, you can start adding data in the more granular fashion. You will, however, have to re-write queries. It won't be such a big job if all data is accessed via an API!
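As a minimal sketch of the rename, assuming a Unix-like host, an HDB at /data/hdb whose top level contains only date-named partition directories plus the sym file, and the same minutes-from-the-Unix-epoch scheme used in the examples below:

// rename each date directory to its lower-bound int equivalent
q)hdb:`:/data/hdb
q)tstomin:{`int$(x - 1970.01.01D)%0D00:01}
q)ds:asc "D"$string key hdb               // date-named directories; anything else (e.g. sym) parses as null
q)ds:ds where not null ds
q){system "mv ",(1_string ` sv hdb,`$string x)," ",1_string ` sv hdb,`$string tstomin `timestamp$x} each ds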

Some examples showing query equivalence:

// example date partitioned database
q)date
2025.01.01 2025.01.02 2025.01.03 2025.01.04 2025.01.05

q)select count i by sym from t
sym| x 
---| --
a  | 48
b  | 43
c  | 43

q)select count i by date from t
date      | x 
----------| --
2025.01.01| 28
2025.01.02| 27
2025.01.03| 29
2025.01.04| 29
2025.01.05| 21

// reconstruct the database into int partitions
q)int
28928160 28929600 28931040 28932480 28933920

// util functions for mapping between int partitions and timestamps
q)mintots:{1970.01.01D+x*0D00:01}
q)tstomin:{`int$(x - 1970.01.01D)%0D00:01}

q)select count i by sym from t
sym| x 
---| --
a  | 48
b  | 43
c  | 43

q)select count i by date:`date$mintots[last int] from t
date      | x 
----------| --
2025.01.01| 28
2025.01.02| 27
2025.01.03| 29
2025.01.04| 29
2025.01.05| 21

// split the last day of data into two partitions to check aggregations still work as expected
q)int
28928160 28929600 28931040 28932480 28933920 28934640
q)mintots -2#int
2025.01.05D00:00:00.000000000 2025.01.05D12:00:00.000000000

q)select count i by date:`date$mintots[last int] from t
date      | x 
----------| --
2025.01.01| 28
2025.01.02| 27
2025.01.03| 29
2025.01.04| 29
2025.01.05| 21

Use Cases Re-examined

If we consider moving to a more granular time partition, how could it impact the use cases listed at the start?

It improves the research use case. Data is available in a more timely manner.

For real-time analytics, it depends. Mission-critical use cases stay the same, with dedicated processes still required. Ad-hoc investigations and reporting applications may be simplified by having to access data from fewer places, or in the best case may become purely historic-data problems given that the historic data is now more recent.

Concluding

More granular time-based partitioning may reduce both system complexity and resource requirements. Complexity is reduced because there are fewer processes, with less coordination required between them. It is possible to retrofit into existing systems (we've done it previously). It should be a consideration for greenfield installations, particularly those running under Amazon FinSpace with Managed kdb Insights, as it is easy to scale access to the historic database component. The Segmented Tickerplant in TorQ facilitates easy rolling of data on any time period. And we are always willing to help with our Architecture Review service!

For some nitty-gritty detail on hourly partitioning, this paper has some excellent examples. It introduces the concept of “de-fragmenting” the database which might be useful for old datasets. There are also some good thoughts here.
