Extending TorQ On Amazon FinSpace with Managed kdb Insights

Blog kdb+ 27 Jun 2024

Andrew Morrison

Through our continued collaboration with AWS and KX, TorQ has been successfully integrated with Amazon FinSpace with Managed kdb Insights.

Our most recent release includes several changes to TorQ that support both new and existing features of Managed kdb Insights. Firstly, a new cluster type has been introduced: the Ticker-Plant (TP) cluster. We have therefore added a TP process to our Managed kdb Insights TorQ configuration. We have also added a Write-Down Database (WDB) process to optimise the save-down process. Lastly, the Historical Database (HDB) cluster configuration has been altered.

In this blog we will explain these changes, and how we (and others) can now leverage even more of the power of TorQ with Managed kdb Insights.

Getting Set Up: The Basics

For a comprehensive explanation of TorQ with Managed kdb Insights, see our previous blog. If you are interested in migrating a tick architecture to Managed kdb Insights, you may find our compatibility scanner useful too.

A standard TorQ installation usually consists of two parts. To launch a TorQ framework on Amazon FinSpace you will need to download the following code repositories: the base TorQ framework and the TorQ Amazon FinSpace Starter Pack.

Full startup instructions can be found in the TorQ Amazon FinSpace Starter Pack documentation.


Scaling Groups 

For a dedicated cluster, each node or kdb process in the cluster runs on its own dedicated compute host. For a cluster on a scaling group, a single set of compute is shared by multiple workloads (clusters). 

With the introduction of scaling groups, the original Amazon FinSpace Managed kdb cluster launch configuration is now referred to as a dedicated cluster. Scaling groups are now the default for all cluster types except the HDB, which can be started either as a dedicated cluster or as part of a scaling group.
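To make the distinction concrete, the sketch below shows the two request shapes side by side. Field names follow the boto3 `finspace` client's `create_kx_cluster` call at the time of writing; the environment ID, cluster names, node type, and sizes are illustrative placeholders, and you should verify the exact parameters against the current AWS documentation before use.

```python
# Sketch: the two capacity models for a Managed kdb cluster.
# All identifiers and sizes below are placeholders, not real resources.

def dedicated_cluster_request(env_id: str, name: str) -> dict:
    """A dedicated cluster gets its own compute via capacityConfiguration."""
    return {
        "environmentId": env_id,
        "clusterName": name,
        "clusterType": "HDB",
        "azMode": "SINGLE",
        "capacityConfiguration": {
            "nodeType": "kx.s.xlarge",  # each node is a dedicated host
            "nodeCount": 1,
        },
    }

def scaling_group_cluster_request(env_id: str, name: str, sg_name: str) -> dict:
    """A scaling-group cluster shares a host pool and reserves memory instead."""
    return {
        "environmentId": env_id,
        "clusterName": name,
        "clusterType": "RDB",
        "azMode": "SINGLE",
        "scalingGroupConfiguration": {
            "scalingGroupName": sg_name,
            "memoryReservation": 6,  # GiB reserved on the shared hosts
            "nodeCount": 1,
        },
    }
```

Either dict would be passed as keyword arguments to the boto3 `finspace` client; the point is that only one of `capacityConfiguration` or `scalingGroupConfiguration` appears, and that choice is what makes a cluster dedicated or shared.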

Shared Storage Volumes  

Volumes are managed storage within your Managed kdb Insights environment that can be linked to clusters to store data such as Ticker-Plant (TP) logs, Real-Time Database (RDB) save-down files, and temporary storage for General Purpose (GP) clusters.

By associating a volume with a cluster, you can leverage the managed storage capabilities of AWS, ensuring reliable, scalable, and high-performance storage for your data needs.


TorQ is currently configured to use a dedicated cluster for the HDB. However, it is possible to include the HDB in a scaling group.

If you are configuring your HDB to be part of a scaling group, you will need to create a data-view instead of a cluster-specific disk cache to store database data for high-performance lookup.

Data-views provide read-only access from HDB and General Purpose (GP) clusters. The data within a data-view is presented to the cluster as a kdb segmented database that is automatically configured when you associate the data-view with the cluster.


The architecture diagram above details the new cluster configuration.

When setting up a TP cluster, you need to specify a volume to hold the TP logs. For an RDB or GP cluster, you can designate a volume for save-down or temporary files. You can simplify management by having multiple clusters share a single volume, or you can configure multiple volumes and associate them with specific clusters for workload isolation. Your RDB will share the volume with the TP and have read-only access to replay the TP logs in the event of disaster recovery.

As stated previously, we have added a Write-Down Database (WDB) process. The WDB lifts the burden of persisting to disk from the RDB, allowing it to focus on answering queries. This is particularly useful in a kdb+ system on Managed kdb Insights, as clusters (processes) are temporarily unavailable during a save-down. Having a separate WDB process carry out this responsibility keeps the RDB available for queries during the save-down. This is entirely optional: you can continue to have the RDB perform the save-down and omit the WDB cluster creation.

The cluster itself is set up much the same as an RDB cluster and should be part of your scaling group and associated with a shared volume.

With dedicated clusters, temporary data is written to the scratch directory, a predefined temporary storage location usually set up in the Amazon FinSpace notebook environment or the underlying Amazon Elastic File System (EFS) that Amazon FinSpace utilises for storage.

When using a shared volume, the WDB (or RDB) uses the volume associated with it during the save-down process to temporarily hold data between the point when the cluster has flushed the data from memory and the point when the data is successfully loaded into a Managed kdb Insights database.

Following the successful creation of a new changeset and loading of the new data into the Managed kdb Insights database, the data held in the volume is deleted. Throughout this process, the volume remains active and usable by other clusters. We have implemented a clean-up function that ensures the temporary data stored in either the scratch directory or the shared volume is automatically deleted post save-down.
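The clean-up step above can be sketched as follows. The directory layout (one temporary subdirectory per date under a root that is either the scratch directory or the shared volume's mount point) is an assumption for illustration, not TorQ's actual layout.

```python
# Sketch: post save-down clean-up of temporary data. The per-date
# subdirectory layout is an illustrative assumption.
import shutil
from pathlib import Path

def cleanup_savedown(tmp_root: str, date_str: str) -> bool:
    """Delete the temporary partition written during save-down, once the
    changeset has been successfully loaded into the Managed kdb database.
    Returns True if anything was deleted, False if there was nothing to do."""
    part = Path(tmp_root) / date_str
    if part.is_dir():
        shutil.rmtree(part)
        return True
    return False
```

Running it after the changeset load succeeds (and only then) is what keeps the shared volume from filling up while never deleting data that has not yet been ingested.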


In the MVP release, the end-of-day process followed a typical on-premises procedure, in which a single HDB reloaded post save-down. Following the successful creation of a changeset containing the day's data, the HDB cluster was reloaded and pointed towards the newly written changeset to ensure the data was not stale.

The HDB would load its files and clear the RDB tables in preparation for the next day. However, it would be temporarily unavailable for querying while in a CREATING state, meaning there could be a period with missing data. By bringing up a second HDB, we maintain the historical data in the old HDB until the new HDB is ready for querying (at which point the old HDB is deleted), ensuring data integrity and continuity. This is detailed in the diagram below. At a high level, the process is as follows:

  1. The WDB (or RDB) logs the completion of the creation of a new changeset to the CloudWatch log.
  2. Metric filters on the CloudWatch logs monitor for the regex of the log message (set up via Terraform).
  3. Upon detection of this regex, a metric counter is incremented; a CloudWatch alarm on this metric triggers an AWS Lambda function, which creates a new HDB cluster with the same properties as the 'old' cluster.
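The steps above can be sketched in Python. The log line format, the `-a`/`-b` cluster naming scheme, and the helper names are illustrative assumptions, not TorQ's exact implementation; the real Lambda would use the boto3 `finspace` client to describe the old HDB cluster and create the new one.

```python
# Sketch of the blue/green HDB rotation triggered by the CloudWatch alarm.
# Log format and naming scheme below are assumptions for illustration.
import re

# Example pattern a metric filter might watch for in the WDB logs.
EOD_DONE = re.compile(r"changeset (?P<id>\S+) created for date (?P<date>\S+)")

def parse_eod_log(line: str):
    """Return (changeset_id, date) if the line signals a completed save-down."""
    m = EOD_DONE.search(line)
    return (m.group("id"), m.group("date")) if m else None

def next_hdb_name(current: str) -> str:
    """Alternate between '<base>-a' and '<base>-b' so both HDBs can coexist
    while the new one starts up."""
    base, _, suffix = current.rpartition("-")
    return f"{base}-b" if suffix == "a" else f"{base}-a"

def lambda_handler(event, context):
    # In the real Lambda this would call the boto3 finspace client:
    # describe the old HDB cluster, then create a new cluster with the
    # same properties under next_hdb_name(...), leaving the old HDB
    # serving queries until the gateway registers the new one.
    ...
```

The alternating name is what lets old and new HDB clusters exist side by side during the handover, rather than the new cluster replacing the old one in place.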

Once the new HDB is successfully started and registered with the gateway (and thus ready for querying), the gateway deregisters the old HDB and calls the AWS Lambda for cluster deletion to delete it.


Looking Forward

The introduction of scaling groups provides a new dimension of customisation to Managed kdb Insights, enabling the fine-tuning of an architecture and even more cost-saving opportunities.

We will be looking at options around the optimal utilisation of burst compute, specifically ways to optimise the end-of-day write whilst minimising cost.

Another new addition to TorQ is support for integer-partitioned systems. Our processes and gateway APIs for client queries have been revamped to fully support querying across both date- and integer-partitioned systems. This opens the door for additional flexibility in TorQ on Amazon FinSpace with Managed kdb Insights. Keep an eye out for a future blog post with more details on date vs integer partitions.
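As a flavour of what integer partitioning means, the sketch below maps timestamps to hourly integer partition values counted from the kdb+ epoch. The hourly scheme and helper name are assumptions for illustration; an integer-partitioned system can use any monotonic integer, and TorQ's query layer handles date and integer partitions generically.

```python
# Sketch: one possible integer-partitioning scheme, mapping a timestamp
# to "hours since the kdb+ epoch". The hourly granularity is an
# illustrative assumption, not a TorQ requirement.
from datetime import datetime, timezone

EPOCH = datetime(2000, 1, 1, tzinfo=timezone.utc)  # kdb+'s epoch

def int_partition(ts: datetime) -> int:
    """Map a timestamp to an hourly integer partition value."""
    return int((ts - EPOCH).total_seconds() // 3600)
```

Whereas a date-partitioned database writes one partition per day, a scheme like this writes one per hour, allowing more frequent intraday writes at the cost of more partitions to manage.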
