Data Intellect

It has been quite a puzzle to figure out how much memory a kdb+ object uses within a q session, so we decided to write this utility script and add it to the latest release of our TorQ Framework. My first naïve impression was: why not just use `-22!`? That should be a good estimate. Um, no: `-22!` only returns the byte size of the uncompressed serialized version of the object (pretty much the same as the space it takes when saved to disk without compression).

```
q)-22!til 10000000
80000014
q)`:test set til 10000000
`:test
q)hcount `:test
80000016
```

But how much memory does this object actually take? Let’s see…

```
// this function measures the memory usage before and after
// object creation and returns the difference
q){s:.Q.w[]`used;a:til 10000000;.Q.w[][`used]-s}[]
134217744
```

134MB vs 80MB?? How come there is such a huge difference? Where does this number come from? How is it calculated? What is going on? This is where it gets interesting. kdb+ actually allocates memory to its objects in powers of 2. If we round the serialized size up to the nearest power of 2, we get a very good approximation.

```
q)"j"$2 xexp ceiling 2 xlog -22!til 10000000
134217728
```

This is just a very simple example. Even though it looks like we could just take `-22!` and round up to the closest power of 2 to approximate the memory size of a kdb+ object, that breaks down when the object gets complicated: nested objects, attributes, mixed types, tables, dictionaries, etc.

In this article, I’ll explain the approach bit by bit. Also, please note that the result of the function is not the reclaimable memory size, since that would need to take reference counts into consideration. If you create an object by referencing another object (i.e. `a:1 2;b:a`), deleting that object won’t free the amount of memory reported by the function, since the other object still holds a reference.

Now, let’s talk kdb!

Atoms are allocated a fixed 16 bytes regardless of type, except for GUIDs, which take 32 bytes.

```
q){s:.Q.w[]`used;a:1f,100000#1b;.Q.w[][`used]-s}[]-1048576 // boolean
1600032f
q){s:.Q.w[]`used;a:1f,100000#1j;.Q.w[][`used]-s}[]-1048576 // long
1600032f
q){s:.Q.w[]`used;a:1f,100000#"G"$"1";.Q.w[][`used]-s}[]-1048576 // GUID
3200032f
```

The reason for the extra subtraction is the mixed list I created. I’ll explain further a bit later; for now, we subtract the size of the mixed list’s pointer vector to get to the raw size of the atoms. That pointer-vector size is calculated as `2 xexp ceiling 2 xlog 16+100001*8`.
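As a quick sanity check on that subtraction (a sketch using the same numbers as above), the pointer vector for the 100001-element mixed list rounds up to exactly 1MB, which is the 1048576 being subtracted in each expression:

```
q)16+100001*8            / 16-byte header plus one 8-byte pointer per element
800024
q)`long$2 xexp ceiling 2 xlog 16+100001*8
1048576
```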

For a simple list, the memory size formula is straightforward (we can refer back to the example at the start):

```
// c-count, s-data type size, a-attribute overheads
calcsize:{[c;s;a] `long$2 xexp ceiling 2 xlog 16+a+s*c}
vectorsize:{calcsize[count x;typesize x;attrsize x]}
```

It multiplies the data type size by the count, adds the 16-byte header plus any attribute overhead, and rounds up to the nearest power of 2. The data type sizes come from the kdb+ datatypes reference, and this is what we are going to use:

`typesize:{4^0N 1 16 0N 1 2 4 8 4 8 1 8 8 4 4 8 8 4 4 4 abs type x}`

The `4^` handles enumerated lists, where the pointers are 4 bytes each.
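As a quick check, `calcsize` and `typesize` as defined above reproduce the figure from the opening example: 10 million longs at 8 bytes each, plus the 16-byte header, rounded up to the next power of 2:

```
q)typesize til 10000000    / longs are 8 bytes each
8
q)calcsize[10000000;8;0]
134217728
```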

```
q)sym:-10000?`4
q)sym2:-10000?`4
q)type `sym?1000000?sym
20h
q)type `sym2?1000000?sym2
21h
q){s:.Q.w[]`used;a:`sym?1000000?sym;.Q.w[][`used]-s}[]
4194320
q)calcsize[1000000;4;0]
4194304
q){s:.Q.w[]`used;a:`sym2?1000000?sym2;.Q.w[][`used]-s}[]
4194320
q)calcsize[1000000;4;0]
4194304
```

I’ll explain the attribute overhead later. Let’s see how we can handle a complex list.

A complex list can be a mix of atoms/lists, different data types, and multi-nested objects within a list. To calculate its size, we need to include an 8-byte pointer for each element of the list, plus the memory size of each element itself. Now the formula looks like this:

```
objsize:{
  if[not count x;:0];
  $[0h>t:type x;$[-2h=t;32;16];
    t within 1 76h;vectorsize x;
    76h<t;0;
    0h in t:type each x;calcsize[count x;8;0]+sum .z.s each x;
    (d[0] within 1 76h)&1=count d:distinct t;calcsize[c;8;0]+sum calcsize[count each x;typesize x 0;$[1000<c:count x;0;attrsize each x]];
    calcsize[count x;8;0]+sum .z.s each x]
 };
```

There are some optimizations here which may or may not be applicable. We put the checks for atoms, simple lists and types above 76h at the top, and handle the conditions for mixed-type (0h) lists beneath:

- If a mixed list contains another mixed list, we have to loop through the list and calculate each item individually.
- If every element of the list has the same type and they are all simple lists, we can use the count of each inner list together with the element type to calculate the total size. There is an optimisation here around attributes: if the list has more than 1000 elements (an arbitrary threshold), we don’t bother calculating attribute sizes and assume them to be 0. This accounts for the case where we are sizing a table with many rows and nested (e.g. string) columns; it would be very expensive to check the attribute size of every element. We could perhaps optimize further and not count each element either, but we’ve left the count in for now.
- For any other mixed-type list, we loop through each element.

Let’s play around with this formula without the attribute overhead:

```
q)attrsize:{0}
q){s:.Q.w[]`used;a:0N 2#til 100000;.Q.w[][`used]-s}[]
2124528
q)objsize 0N 2#til 100000
2124288
q){s:.Q.w[]`used;a:til[10000],(1000#1;0N 2#.Q.a);.Q.w[][`used]-s}[]
299824
q)objsize til[10000],(1000#1;0N 2#.Q.a)
299808
```

So far so good! Now let’s look at a more interesting section: attribute overheads.

There is good documentation, at section 42, concerning how much overhead an attribute requires, but those values only apply to kdb+ 2.x. For kdb+ 3.x, we need to double them, so some sort of switch on version is very useful. Translating all this information into a function, this is what we have:

```
version:.5*1+3.0<=.z.K;
attrsize:{version*$[`u=a:attr x;32*count distinct x;
  `p=a;8+48*count distinct x;
  0]};
```

Let’s test this out in 2.x and 3.x to see whether I’m lying!!

```
q)(.z.k;.z.K)
2014.05.03
2.8
q){s:.Q.w[]`used;a:`u#til 100000;.Q.w[][`used]-s}[]
2097168j
q)objsize `u#til 100000
2097152j
q){s:.Q.w[]`used;a:`p#raze 1000#'til 100;.Q.w[][`used]-s}[]
524304j
q)objsize `p#raze 1000#'til 100
524288j
q)(.z.k;.z.K)
2015.07.09
3.3
q){s:.Q.w[]`used;a:`u#til 100000;.Q.w[][`used]-s}[]
4194320
q)objsize `u#til 100000
4194304
q){s:.Q.w[]`used;a:`p#raze 1000#'til 100;.Q.w[][`used]-s}[]
1048592
q)objsize `p#raze 1000#'til 100
1048576
```

See, I’m being very honest 😀 But we are still missing the `` `g# `` attribute. For that, we add together the size of the raw list and the size of a grouped dictionary of the list.

```
q){s:.Q.w[]`used;a:`g#100000#.Q.a;.Q.w[][`used]-s}[]
984432
q)objsize[100000#.Q.a]+objsize group `g#100000#.Q.a
984352
```

Note that the sizes of ``group `g#100000#.Q.a`` and ``group 100000#.Q.a`` are actually quite different, as the former’s key carries a `` `u# `` attribute while the latter’s does not.

```
q)key group `g#100000#.Q.a
`u#"abcdefghijklmnopqrstuvwxyz"
q)key group 100000#.Q.a
"abcdefghijklmnopqrstuvwxyz"
```

Taking this into consideration, the function is modified to:

```
objsize:{
  if[not count x;:0];
  if[`g=attr x;x:(`#x;group x)];
  $[0h>t:type x;$[-2h=t;32;16];
    t within 1 76h;vectorsize x;
    76h<t;0;
    0h in t:type each x;calcsize[count x;8;0]+sum .z.s each x;
    (d[0] within 1 76h)&1=count d:distinct t;calcsize[c;8;0]+sum calcsize[count each x;typesize x 0;$[1000<c:count x;0;attrsize each x]];
    calcsize[count x;8;0]+sum .z.s each x]
 };
```

Now let’s check out tables and dictionaries.

So how do we deal with tables and dictionaries? What structure do they have? They are made up of keys and values, so if we flatten the object into a list containing its keys and values, we can calculate the size quite easily. Whether a table is keyed or unkeyed doesn’t affect the calculation, since we end up flipping it into a dictionary and taking its value.
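To see what that flattening produces, here is a small sketch: a table becomes a two-element list of its column names and column values, which are just the kinds of list `objsize` already handles:

```
q)x:flip 0!([]a:1 2 3;b:`x`y`z)   / unkey and flip the table into a column dictionary
q)key x
`a`b
q)value x
1 2 3
`x`y`z
```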

```
objsize:{
  if[not count x;:0];
  x:$[.Q.qt x;(key x;value x:flip 0!x);
    99h=type x;(key x;value x);
    x];
  if[`g=attr x;x:(`#x;group x)];
  $[0h>t:type x;$[-2h=t;32;16];
    t within 1 76h;vectorsize x;
    76h<t;0;
    0h in t:type each x;calcsize[count x;8;0]+sum .z.s each x;
    (d[0] within 1 76h)&1=count d:distinct t;calcsize[c;8;0]+sum calcsize[count each x;typesize x 0;$[1000<c:count x;0;attrsize each x]];
    calcsize[count x;8;0]+sum .z.s each x]
 };
```

Examples are always good!

```
q){s:.Q.w[]`used;a:`a`b!(til 1000000;1000000?0 1 2);.Q.w[][`used]-s}[]
16777296
q)objsize `a`b!(til 1000000;1000000?0 1 2)
16777312
q){s:.Q.w[]`used;a:([]a:til 1000000;b:1000000?0 1 2);.Q.w[][`used]-s}[]
16777312
q)objsize ([]a:til 1000000;b:1000000?0 1 2)
16777312
q){s:.Q.w[]`used;a:1!([]a:til 1000000;b:1000000?0 1 2);.Q.w[][`used]-s}[]
16777488
q)objsize 1!([]a:til 1000000;b:1000000?0 1 2)
16777312
q){s:.Q.w[]`used;a:([]a:`p#raze 100#'til 10000;b:1000000?0 1 2;c:0N 2#2000000?0 1 2);.Q.w[][`used]-s}[]
65554560
q)objsize ([]a:`p#raze 100#'til 10000;b:1000000?0 1 2;c:0N 2#2000000?0 1 2)
65554592
```

It has been quite a journey to uncover how kdb+ uses memory. The main takeaway is that memory is allocated in powers of 2 (the power of 3 does not set you free here). The calculation is an approximation, and there are probably still some cases not covered by this function, but it should be sufficient for a start 😉 I’m very excited to see what the community will think of it. Any ideas for improvement or comments are welcome! Hope you enjoyed reading this article! Until next time.
