kdb+ IoT Database Structure

Blog | Data Analytics & Engineering | 20 Mar 2018

Data Intellect

When implementing a kdb+ database, the partitioning structure is a key decision point. kdb+ provides four in-built ways to partition database tables: by year, month, date or integer, and only one partitioning scheme may be used per database. The best option is usually determined by two factors:

  • how data arrives
  • how it is accessed

The partitioning scheme should facilitate easy addition of data (but not necessarily easy modification) and should allow the main query patterns to be executed efficiently.
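
For reference, the partition value becomes a directory name at the root of the database, with one splayed copy of each table per partition directory. Illustrative layouts (db and t are placeholder names):

  • year: db/2017/t/, db/2018/t/
  • month: db/2018.02/t/, db/2018.03/t/
  • date: db/2018.03.19/t/, db/2018.03.20/t/
  • integer: db/0/t/, db/1/t/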

The traditional domain of kdb+ is the storage of financial market data. kdb+ is very flexible and modifications can be, and usually are, made, but the standard partitioning scheme in finance (a minimal example follows the list) is:

  • partition by date
  • apply a parted (p#) index to “sym” (where sym is the ticker/name of the financial instrument). The field selected to index on is usually one which is used extensively for filtering the data, and the column with the parted index applied must have elements of the same value stored contiguously
  • data is time sorted within each sym (i.e. data is not globally chronologically sorted, but is sorted for an individual instrument)
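
As a minimal illustration of this scheme (the directory, table and sym names below are ours, purely for demonstration), .Q.dpft saves an in-memory table splayed into a date partition, enumerating symbols and applying the parted attribute to the chosen column:

// hypothetical example: write one date partition of a trade table to `:db,
// enumerating syms and applying `p# to the sym column
q)trade:([]sym:`ABC`ABC`DEF`DEF;time:.z.p+til 4;price:4?100f)
q).Q.dpft[`:db;2018.03.19;`sym;`trade]
`trade

This creates db/sym and db/2018.03.19/trade/, with the sym column stored contiguously by value and carrying the p# attribute.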

A kdb+ system usually stores intra-day (real-time) data in an in-memory component commonly called the Real Time Database (RDB), and flushes it to the Historical Database (HDB) on a 24-hour cycle. The date/sym partitioning model is a good fit because:

  • date partitions line up with the flush cycle
  • data is usually collected live with sub-second latency, so we do not encounter circumstances where market data from several days earlier arrives and has to be written to “old” partitions
  • the usual query pattern is to access a small set of syms across a small set of dates, or perhaps to access all the data for a single date

We have done several implementations in the IoT space and found that the date/sym approach usually does not apply. In finance, data collection is from a small number of sources (markets) which stream data at low latency. Connectivity failures are relatively rare and can usually be recovered manually (i.e. a date partition which is missing data can be rebuilt). In IoT the number of devices may run into the millions, and connectivity failures to individual devices are common. In addition, IoT data may arrive many days late (e.g. from a transmitting device on a vehicle which goes out of mobile network range for a number of days). IoT therefore necessitates a different structure.

A good choice for an IoT database is usually to partition by integer, where the integer represents the device ID (we wrote a bit more about this here). It is probably reasonable to assume that the data for an individual device arrives in chronological order. If the data is partitioned by integer (device) then date is probably a good choice for the secondary, indexed lookup. This gives similar access characteristics to the finance use case, but in a structure which is better suited to how the data arrives: data for an individual device can arrive many days late, or be recovered independently, and will only affect the partition for that individual device. By way of comparison, if we chose a date-partitioned scheme and half a day of data for a single device arrived three days late, then the whole date partition would have to be re-written (new data appended to the end, sorted on disk, index re-applied).
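
A minimal sketch of that layout (paths and column names here are illustrative, not from a real system): each device gets its own integer partition, with the date column sorted and given the parted attribute so date lookups within a device are efficient.

// hypothetical sketch: write data for device 765 into its own integer partition,
// sorted by date with the parted attribute applied
q)readings:([]date:2018.03.19 2018.03.19 2018.03.20;time:.z.p+til 3;val:3?10f)
q)`:iotdb/765/readings/ set update `p#date from `date xasc readings
`:iotdb/765/readings/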

However, we can take the integer partition scheme one step further and use it to encode more information than just the device ID, essentially adding an extra dimension (or dimensions). Integer partitions are represented with long ints, which are signed 64-bit values. We can’t use negative values for partitions, so we have 63 bits to play with for encoding. Let’s assume for the sake of argument that we want to encode two values into the partition: the timestamp to the nearest hour, and the device ID.

For our example, we need to be able to store 100 years of data, from 2000 to 2100, partitioned at one-hour intervals. 100 years of data, 365 days per year and 24 hours per day means we need to encode time as an int with a range of at least 876000 (24*365*100). This requires 20 bits (2^20 being 1048576), leaving 43 bits for encoding the device ID. We can create encode and decode functions like this (these are just examples, and not particularly well tested!):

// encode: device ID in the upper 43 bits, hours since 2000.01.01 in the lower 20 bits
encode:{[id;timestamp](id*1048576)+sum 24 1*`int$`date`hh$\:timestamp}

q)encode[765;2015.01.01D15:23:36]
802292151

// decode: recover (device ID; timestamp truncated to the hour)
// multiplying the one-hour timestamp 2000.01.01D01 by the hour count scales its underlying nanosecond value
decode:{(x div 1048576;2000.01.01D01*x mod 1048576)}

q)decode[802292151]
765
2015.01.01D15:00:00.000000000

We can create a sample table, save it to disk, and access it as a standard HDB:

// create some sample data
q)n:20;t:([]time:2018.04.01 + asc n?0D03;device:n?873 234 343)

// add the integer encoding
q)update int:encode[device;time] from t
time device int
----------------------------------------------
2018.04.01D00:02:30.344220735 343 359821528
2018.04.01D00:10:21.271677967 873 915566808
2018.04.01D00:15:42.193813249 234 245526744
2018.04.01D00:16:18.374770190 343 359821528
2018.04.01D00:18:26.386787071 234 245526744
2018.04.01D00:19:33.770091868 343 359821528
2018.04.01D00:29:17.875083480 234 245526744
2018.04.01D00:35:02.229912020 873 915566808
2018.04.01D00:41:34.916567020 873 915566808
2018.04.01D00:46:05.510341711 234 245526744
2018.04.01D00:49:26.831604391 343 359821528
2018.04.01D01:20:57.095089182 234 245526745
2018.04.01D01:51:39.255548324 343 359821529
2018.04.01D02:03:55.536688473 873 915566810
2018.04.01D02:08:40.681764744 873 915566810
2018.04.01D02:11:00.809736326 343 359821530
2018.04.01D02:15:21.710380632 873 915566810
2018.04.01D02:27:11.750816199 234 245526746
2018.04.01D02:36:04.784210454 873 915566810
2018.04.01D02:47:52.421385068 873 915566810

// save down based on the int
q){x:.Q.en[`:hdb;update int:encode[first device;0D01 xbar first time] by device,0D01 xbar time from x]; 
 {[x;I] .Q.par[`:hdb;I;`$"t/"] set delete int from select from x where int=I}[x] each exec distinct int from x}[t]
`:hdb/359821528/t/`:hdb/915566808/t/`:hdb/245526744/t/`:hdb/245526745/t/`..

// start an hdb session
q hdb

// count the partitions
q)select count i by int from t
int | x
---------| -
245526744| 4
245526745| 1
245526746| 1
359821528| 4
359821529| 1
359821530| 1
915566808| 3
915566810| 5

// build a mapping of int to device and time
q)intmap:([]int:int),'flip `device`time!decode int
q)intmap
int device time
----------------------------------------------
245526744 234 2018.04.01D00:00:00.000000000
245526745 234 2018.04.01D01:00:00.000000000
245526746 234 2018.04.01D02:00:00.000000000
359821528 343 2018.04.01D00:00:00.000000000
359821529 343 2018.04.01D01:00:00.000000000
359821530 343 2018.04.01D02:00:00.000000000
915566808 873 2018.04.01D00:00:00.000000000
915566810 873 2018.04.01D02:00:00.000000000

// retrieve all data for a given device
q)select from t where int in exec int from intmap where device=234
int time device
----------------------------------------------
245526744 2018.04.01D00:15:42.193813249 234
245526744 2018.04.01D00:18:26.386787071 234
245526744 2018.04.01D00:29:17.875083480 234
245526744 2018.04.01D00:46:05.510341711 234
245526745 2018.04.01D01:20:57.095089182 234
245526746 2018.04.01D02:27:11.750816199 234

// retrieve all data for a given period
q)select from t where int in exec int from intmap where time=2018.04.01D01:00:00.000000000
int time device
----------------------------------------------
245526745 2018.04.01D01:20:57.095089182 234
359821529 2018.04.01D01:51:39.255548324 343
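
The two lookups combine naturally, e.g. restricting to a device and then to a time window inside the matching partitions (an extra illustrative query; the rows shown are simply those from the sample data above that satisfy the filter):

// retrieve data for device 234 between 00:30 and 02:30
q)select from t where int in (exec int from intmap where device=234),time within 2018.04.01D00:30 2018.04.01D02:30
int time device
----------------------------------------------
245526744 2018.04.01D00:46:05.510341711 234
245526745 2018.04.01D01:20:57.095089182 234
245526746 2018.04.01D02:27:11.750816199 234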

If we encode both a timestamp value and the device ID into the partition, we still have the parted index available to apply to another field if required.
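
For example, if the table also carried a hypothetical “sensor” column stored with equal values contiguous within each partition, the attribute could be applied on disk across all partitions (a sketch only, not run against the data above):

// hypothetical: apply the parted attribute to a "sensor" column in every int partition
{@[.Q.par[`:hdb;x;`t];`sensor;`p#]} each exec distinct int from t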

We can partition on time more or less granularly, each choice having its own pros and cons (a more granular 15-minute variant of the encoding is sketched after this list). More granularity means:

  • queries over longer time ranges may execute more slowly
  • lookups to specific points in time are quicker
  • more flexibility as to how the data is indexed (with less granular partitioning, indexing on date is probably necessary)
  • the number of partitions and files to be managed may become unwieldy
  • individual partitions are smaller and quicker to update if necessary (i.e. easier to reset the index)
  • more flexibility around how data is moved from the in-memory component to the on-disk database, i.e. not bound to a daily roll
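
For example, a sketch of a 15-minute variant of the hourly encoding above (same caveats: illustrative and lightly tested at best): 100 years of 15-minute buckets needs up to 3504000 values, i.e. 22 bits (2^22 being 4194304), leaving 41 bits for the device ID.

// hypothetical 15-minute variant of the encode/decode pair
encode15:{[id;ts](id*4194304)+(96*`int$`date$ts)+(`int$`minute$ts) div 15}
decode15:{(x div 4194304;2000.01.01D00:15*x mod 4194304)}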

In summary, there are several factors to consider when choosing the database structure, but integer partitioning is probably the best fit for IoT-type applications. If you are working on an IoT application with kdb+ and would like some help, please get in touch!
