ARK 2: Hardware

Blog ARK 27 Mar 2024

Brian Riordan

Making Choices

Following on from last month’s episode in the ARK series, which can be found here, we now move on to the next part of our journey – hardware.

Choosing where you’re going to live is one of the biggest decisions people face in their lifetime. Aside from ticking all the boxes for what a dream home should incorporate, a lot of the decision rests upon individual circumstances. Are you a student looking to live as affordably as possible so as to avoid accruing too much debt? Are you a young, up-and-coming careerist looking to get your foot on the property ladder? Are you and your partner looking to buy your first home together? Are you a husband and wife team with a couple of kids and one more on the way? The choices made in each of the examples above are unlikely to be the same, since the objectives, circumstances and requirements are all different. In a similar fashion, choosing where to build your application requires a fair amount of consideration, and depends much upon the nature of the application in question.

The Simple Model

You’ve built your application and you need to put it somewhere. You decide upon a single server to house all the constituent parts of the application. This is the simplest way of getting your application up and running. Suddenly your application has roared into life; everything is in the one place, so there is no need to worry about inter-server communications between processes, and, arguably best of all, it is relatively cost-effective. Is it all sunshine and rainbows though? Let’s look at the pros and cons of this setup.

Pros
  • Pure and simple – everything in one place, no need to replicate file structures across a range of other servers, no need to deal with firewalls between different servers 
  • Cost-effective – no need to invest financially in a range of servers when 1 is able to do the job 
  • Easy to maintain – whether it be release deployments or monitoring tools, having only 1 host involved means distribution is quick and any work required only needs to be performed once 

Cons
  • Single point of failure – if the server crashes, everything crashes. Feeds, Realtime processes, historical processes will all be unavailable at the same time 
  • Server Specifics – it is often the case that a server will be fitted out with a particular purpose in mind, for example, to have maximum storage on disk. This can come at the cost of relatively reduced capabilities in RAM or CPU. Should the realtime aspect of your application prove to be more in demand than your historical side, you could find that the server does not perform as well as expected 
  • Limited growth prospects – to an extent, this is a limiting factor no matter which setup you adopt, but assuming your application is successful, there is a good chance, if not an expectation, that the application will grow, whether it be in the number of processes, or perhaps the amount of data transitioning through the system. There will likely come a time when 1 server just will not be enough to house everything 

Let’s Split

After some time being live with clients, it may become apparent that certain aspects of your application are utilised more than others. Maybe clients don’t care that much about realtime data and instead are happy to query on a T+1 basis, or perhaps it’s the opposite, with realtime data extraction being the most sought-after feature. If everything is housed in one place, there is a risk that heavy usage will slow the entire application down, by virtue of the fact that the server itself is under pressure. This carries substantial risk, given we have a single point of failure. What can we do about this? Taking into account the cons mentioned above for the 1 server setup, particularly the point about how servers are built for specific purposes, let’s split our application across two hosts – one to house the historical databases and related processes, and the other to house the realtime and feed processes. With that decision made, we can tailor the type of server to each job. For the historical database server, disk storage space is a priority. Conversely, for the realtime database host, disk storage is a less important factor when compared to the necessity of having sufficient memory to cope with the growth of the realtime databases as the current day progresses, something that the historical databases won’t see. There is, of course, a cost to expanding the server field to 2. 
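The point about realtime memory growth can be made concrete with a back-of-envelope calculation. The sketch below is illustrative only – the feed rate, row size and trading-day length are all assumed figures, and real footprints will vary with overheads and data types.

```python
def intraday_memory_bytes(rows_per_sec, bytes_per_row, seconds_elapsed):
    """Rough in-memory footprint of a realtime database that holds the
    whole current day in RAM (ignores overheads and compression)."""
    return rows_per_sec * bytes_per_row * seconds_elapsed

# Hypothetical feed: 50,000 rows/sec at 100 bytes/row.
# After a full 8-hour trading day (28,800 seconds):
footprint = intraday_memory_bytes(50_000, 100, 8 * 60 * 60)
print(f"{footprint / 1e9:.0f} GB")  # prints "144 GB"
```

A historical database serving the same data from disk needs nothing like this much RAM per day, which is exactly why the two workloads suit differently specified hosts.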

Pros
  • Freedom to choose the most appropriate servers for the types of processes they will house 
  • Protection against a single point of failure. 1 server going down will not impact the running of the other, meaning the application can continue at a reduced service level, as opposed to the complete failure seen in the 1 server setup. 
  • Improved efficiency as a result of having different types of processes separated by host. Realtime based queries can all be routed to the realtime server, whereas historical based queries can be routed to the historical host. There is no reason, nor need, for a historical query to go anywhere near the realtime host or vice versa, leading to improved query times. 

Cons
  • Increased financial requirement, to both obtain and then maintain 2 servers instead of 1 
  • Increased release deployment duration is likely, as well as the need for monitoring tools to be set up on 2 servers instead of 1 
  • Increased workload on the people who have to support the running of the application 
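The query-routing idea behind the split can be sketched in a few lines. This is a minimal illustration, not TorQ's actual gateway logic; the host names are hypothetical, and the rule assumed is simply that today's data lives on the realtime host and prior days on the historical host.

```python
from datetime import date

# Hypothetical host names; in practice these come from process discovery.
REALTIME_HOST = "rdb-host"
HISTORICAL_HOST = "hdb-host"

def route_query(start: date, end: date, today: date) -> set:
    """Send a query only to the host(s) that actually hold the data:
    today's data is on the realtime host, prior days on the historical host."""
    hosts = set()
    if end >= today:          # query touches the current day
        hosts.add(REALTIME_HOST)
    if start < today:         # query touches prior days
        hosts.add(HISTORICAL_HOST)
    return hosts

today = date(2024, 3, 27)
print(route_query(date(2024, 3, 27), date(2024, 3, 27), today))  # prints "{'rdb-host'}"
print(route_query(date(2024, 3, 1), date(2024, 3, 26), today))   # prints "{'hdb-host'}"
print(route_query(date(2024, 3, 20), date(2024, 3, 27), today))  # both hosts
```

A query spanning the boundary goes to both hosts, with the gateway left to join the results – the key point is that a purely historical query never touches the realtime host, and vice versa.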

Triple Lock

Have you ever been out at a restaurant and had the misfortune of being seated at a rocking table? With every lean, lifting of cutlery, placing of your glass, the table wobbles on its legs, and you swear it’s going to give way at some point. The source of all your woes in this situation? The table doesn’t have 3 legs. There is a reason why telescopes and cameras are generally set up on tripods, why music stands tend to have 3 legs coming from their central column, why the roofs of buildings tend to have a triangular cross section – stability and strength. We now have an application which is doing very well, has a large client base with a wide client usage profile, and we want to ensure that we can maintain a reliable service, with multiple fallbacks in the event of an issue. So, let’s add a third host into the mix, and do some more splitting of the processes, so that we have one server dedicated to servicing the feed processes, one dedicated to the realtime processes, and one dedicated to the historical processes. In addition, let’s add a mirror realtime database process (known as a Write Database or WDB in our TorQ framework) onto the historical server, whose purpose will be to capture the realtime data and then save it down to disk regularly throughout the day. Its positioning on the historical server makes sense since it has to be able to access the historical disk storage to do its job, so whilst it retains the status of a realtime process, its purpose is entirely directed towards ensuring historical data integrity. 
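The WDB's capture-and-flush behaviour can be sketched in miniature. This is a toy illustration of the general idea – buffer incoming realtime records in memory and persist them to disk at a regular interval – not TorQ's actual WDB implementation, and the class and parameter names are invented for the example.

```python
import time

class WriteDB:
    """Toy sketch of a write database: buffers incoming realtime records
    and flushes them to on-disk storage at a regular interval."""

    def __init__(self, flush_every_s=60, storage=None):
        self.flush_every_s = flush_every_s
        self.buffer = []
        # A list stands in for the on-disk historical partition here.
        self.storage = storage if storage is not None else []
        self.last_flush = time.monotonic()

    def on_record(self, record):
        """Called for every record arriving from the tickerplant."""
        self.buffer.append(record)
        if time.monotonic() - self.last_flush >= self.flush_every_s:
            self.flush()

    def flush(self):
        """Persist buffered records; in reality this appends to disk."""
        self.storage.extend(self.buffer)
        self.buffer.clear()
        self.last_flush = time.monotonic()
```

Because the flush target is the historical disk storage, co-locating this process with the historical databases avoids writing across hosts, which is the placement argument made above.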

The typical constituents of each server would be: 

  • Feed Server – feed handlers, tickerplants 
  • Realtime Server – Realtime databases, client gateways, custom last-value-cached type processes 
  • Historical Server – Historical Databases, Realtime Write Databases 

This setup adds another layer of stability to the application, since we can now suffer a failure of the realtime host without impacting the feed processes, which in turn will continue to supply the write databases, allowing for an uninterrupted record of the data from a historical point of view. A failure of the feed server will cause a loss of flow to both the realtime databases and the write databases, but they themselves will remain functional, allowing for client queries to be executed against the data already captured during the day.  
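The failure analysis above can be checked mechanically from the host-to-process layout. The sketch below uses illustrative host and process names (not TorQ's actual process names) to show which parts of the application survive any single host failure.

```python
# Host-to-process layout from the three-server setup described above;
# names are illustrative only.
LAYOUT = {
    "feed-host": ["feed handler", "tickerplant"],
    "realtime-host": ["realtime database", "gateway"],
    "historical-host": ["historical database", "write database"],
}

def surviving_processes(layout, failed_host):
    """Processes still running after a single host failure."""
    return [proc
            for host, procs in layout.items() if host != failed_host
            for proc in procs]

# Losing the realtime host leaves the feeds and the historical side intact,
# so the write database keeps capturing an uninterrupted record of the day.
print(surviving_processes(LAYOUT, "realtime-host"))
```

Running the same check for the feed host shows the databases themselves surviving, matching the observation that client queries against already-captured data remain possible even when the flow of new data stops.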

Pros
  • Very well protected setup against hardware failure. Any one server failure does not result in the majority of the application being affected 
  • Servers can be individually set up and tweaked to suit their specific purpose. For example, the feed server need not have huge memory or disk storage specifications, the realtime server should have powerful memory but not much need for disk storage, and the historical server likely only needs moderate memory to facilitate client queries, with its primary specification being given to disk storage. 
  • Depending on setup, a server requiring maintenance (including downtime) may not necessitate a full shutdown of the application – the individual processes living on the server in question can be shut down, whilst leaving the others up and available for clients. 

Cons
  • Increased initial costs in procuring suitable servers, and subsequent routine maintenance 
  • More resources required to support the setup 
  • Since feed processes and realtime processes are now separated, should a realtime process need to replay a tickerplant log, for example, the log will have to be copied from the feed host to the realtime host, which will take time depending on how large it is 
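How much time that copy costs is easy to estimate. The figures below – log size, link speed and an efficiency factor discounting protocol overhead and contention – are all assumptions for illustration.

```python
def copy_time_seconds(log_size_bytes, link_speed_bits_per_sec, efficiency=0.7):
    """Rough time to copy a tickerplant log between hosts.
    efficiency discounts protocol overhead and contention (assumed figure)."""
    return log_size_bytes * 8 / (link_speed_bits_per_sec * efficiency)

# Hypothetical: a 20 GB log over a 10 Gb/s link at 70% efficiency
t = copy_time_seconds(20e9, 10e9)
print(f"{t:.0f} s")  # prints "23 s"
```

Even a rough figure like this is useful when weighing the three-host split, since replay time adds directly to recovery time after a realtime process failure.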

The Cloud
Clouds are the proverbial new kids in town. They offer a more flexible alternative to your traditional hardware setup whereby you can, in essence, pick and choose what features you would like to have at will. Unlike purchasing a server which will be somewhat rigid in its configuration from its inception, with a cloud you can add additional features/resources to it as your application expands, or if you find there are features which have been seldom utilised, these can be removed with relative ease. This flexibility saves time, costs, and allows the application owners to quickly react to any given need from their clients, when compared to dealing with traditional hardware. However, cloud setups are not infallible, and there are considerations to be made even with their benefits. 

Pros
  • Very flexible, application owners can modify existing cloud setups quickly and easily, depending on need 
  • Relatively inexpensive – no physical servers need to be purchased; you simply pay for what you use on the overall cloud setup 
  • Cloud setups can automatically back up data online, giving an extra layer of data redundancy 

Cons
  • Relies on an internet connection. Whilst this is something we often take for granted in this day and age, if your internet connection is interrupted, so is your access to the cloud 
  • Whilst you have a lot of choice in how your cloud setup will look, you will be somewhat governed by the cloud provider in terms of security choices, how certain attributes are applied and so on, whereas with traditional hardware, once you own the server, you can configure security as you see fit. 



There is no perfect hardware solution for any given application setup. Much like choosing a place to live, it is about weighing up the pros and the cons, the attributes which are essential and those which are convenient, the necessary costs to be incurred versus the indulgences. The aim is to maximise the essentials, to enable a highly efficient and functioning application setup, whilst being mindful of the costs (both financial and practical) of creating such a setup. For that, one needs to have a vision of what the purpose of the application is going to be, how it will work in the here and now, and then consider how it may evolve in the future. Decisions can then be made as to how the hardware to support the application is arranged and best utilised. 

Here at Data Intellect, we have a wide range of expertise in setups, owing to our vast experience in working with many different projects globally. If you have any questions regarding your setup, or would like our team to assist you in building your perfect setup, don’t hesitate to get in touch here.

Stay tuned for our next episode in the ARK series which will be released next month!
