We all know that the complexity of the algorithms we use has a big influence on the performance of our programs. The difference between a linear-time algorithm and a constant-time algorithm shows up quite soon, after the first few hundred items.
So here’s a simple question: what’s the complexity of adding an element at the end of a linked list? The naive answer is usually ‘constant time’. But let’s check the usual implementations.
Let’s assume we have the structure:
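The original listing isn’t preserved here, so here is a minimal sketch of the usual singly linked node together with the naive append the next paragraph talks about (the struct and function names are illustrative, not the original ones):

```c
#include <stdlib.h>

/* A classic singly linked node. */
struct node {
    int value;
    struct node *next;
};

/* Naive append: walk from the head to the last node, then link the
   new one after it. Returns the (possibly new) head. */
struct node *append(struct node *head, int value) {
    struct node *n = malloc(sizeof *n);
    if (!n) return head;          /* allocation failed; list unchanged */
    n->value = value;
    n->next = NULL;
    if (!head) return n;          /* empty list: new node is the head */
    struct node *cur = head;
    while (cur->next)             /* the O(n) walk to the tail */
        cur = cur->next;
    cur->next = n;
    return head;
}
```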
This implementation is clearly NOT constant time because, well, we have to start from the head, walk all the way to the end of the list and add the element there. Time: O(n). At least; more on that later.
The obvious fix would be to keep track of the tail of the list as well. So let’s expand our data structures with a ‘list’ structure that contains both the head and the tail.
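Again, the original code isn’t shown in this copy; a sketch of what such a head-plus-tail list and its append might look like (names are mine):

```c
#include <stdlib.h>

struct node {
    int value;
    struct node *next;
};

/* Keep both ends, so appending no longer needs to walk the list. */
struct list {
    struct node *head;
    struct node *tail;
};

/* Append with O(1) pointer work -- plus whatever malloc costs.
   Returns 0 on success, -1 if allocation fails. */
int list_append(struct list *l, int value) {
    struct node *n = malloc(sizeof *n);
    if (!n) return -1;
    n->value = value;
    n->next = NULL;
    if (l->tail)
        l->tail->next = n;        /* link after the current tail */
    else
        l->head = n;              /* first element: head and tail coincide */
    l->tail = n;
    return 0;
}
```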
Great! Problem solved! We have constant time for adding. Don’t we? Beginner’s luck, we’re safe at bay!
What is the execution time of malloc?
Good, now that we’ve found out that it’s not so easy, we can say goodbye to the idea that a dynamically allocated linked list can add an element in O(1): we still have to allocate the element, and that allocation is not constant time. In academia it’s a good and nice O(1) solution; in RealLife™ it’s not. So what do we do?
The solutions I’ve found so far are far from perfect. The first would be something like a smart pool that gives us the elements in more or less constant time (one idea would be to keep a buffer of node structures and perform a circular search starting from the last ‘allocated’ node). Another would be to not do dynamic allocation at all, but that defeats the purpose of having a linked list in the first place.
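The pool idea above is only sketched in the text; one possible shape for it, a fixed-size pool with a circular scan for a free slot, could look like this (the pool size, names, and bookkeeping are all assumptions of mine):

```c
#include <stddef.h>

#define POOL_SIZE 1024

struct node {
    int value;
    struct node *next;
};

/* A fixed-size pool of nodes. Allocation is a circular scan for a
   free slot, starting just past the slot we handed out last, so in
   the common case it succeeds after one or two probes. */
static struct node pool[POOL_SIZE];
static unsigned char used[POOL_SIZE];
static size_t last = 0;

struct node *pool_alloc(void) {
    for (size_t i = 0; i < POOL_SIZE; i++) {
        size_t slot = (last + i) % POOL_SIZE;
        if (!used[slot]) {
            used[slot] = 1;
            last = slot;
            return &pool[slot];
        }
    }
    return NULL;                  /* pool exhausted */
}

void pool_free(struct node *n) {
    used[n - pool] = 0;           /* pointer arithmetic gives the slot */
}
```

Note that this is only “more or less” constant: the scan is near-O(1) while the pool has free slots close to `last`, but the worst case is still O(POOL_SIZE), and the capacity is fixed up front.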
One reminder. Never forget that just because you have a one-liner doesn’t mean its execution time is constant. And memory allocation can be one of the most performance-critical operations in your program.