
Why is the LRUCache implementation using an Array instead of a Doubly Linked List with Map? #700

Open
kuldeepsingh-d11 opened this issue Feb 18, 2024 · 8 comments


@kuldeepsingh-d11

Is there any specific reason for using an Array instead of a Doubly Linked List implementation?
Right now the time complexity of the get function is O(n), and I'm not sure about the set function's complexity, since unshift can be O(n) or O(1) depending on the JS engine.
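
For context, the array-backed approach looks roughly like the sketch below (a paraphrase for illustration, not the actual reselect source); the linear scan is what makes get O(n), and splice/unshift add more linear work on a hit:

// Rough sketch of an array-backed LRU, for illustration only (not the reselect source).
function createArrayLruCache(maxSize, equals) {
  const NOT_FOUND = Symbol('NOT_FOUND')
  let entries = []

  function get(key) {
    // O(n): scan every cached entry until the comparator matches.
    const index = entries.findIndex(entry => equals(key, entry.key))
    if (index > -1) {
      const entry = entries[index]
      if (index > 0) {
        // O(n): splice + unshift shift the rest of the array to move the hit to the front.
        entries.splice(index, 1)
        entries.unshift(entry)
      }
      return entry.value
    }
    return NOT_FOUND
  }

  function put(key, value) {
    if (get(key) === NOT_FOUND) {
      entries.unshift({ key, value }) // O(n) or O(1) depending on the engine
      if (entries.length > maxSize) entries.pop()
    }
  }

  return { get, put, NOT_FOUND }
}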

@markerikson
Contributor

Because it was copied from another package, relatively simple code, and relatively small bundle size.

@kuldeepsingh-d11
Author

But the doubly linked list code is not very large, and it is also simple:

class LRUCache {
    constructor(capacity) {
        this.capacity = capacity;
        this.cache = new Map(); // key -> node
        // Sentinel head/tail nodes so inserts and removals never need null checks.
        this.head = { key: null, value: null, prev: null, next: null };
        this.tail = { key: null, value: null, prev: this.head, next: null };
        this.head.next = this.tail;
    }

    get(key) {
        if (!this.cache.has(key)) return -1;

        // O(1): look up the node, then move it to the front as most recently used.
        const node = this.cache.get(key);
        this.moveToHead(node);
        return node.value;
    }

    put(key, value) {
        if (this.cache.has(key)) {
            const node = this.cache.get(key);
            node.value = value;
            this.moveToHead(node);
        } else {
            // Insert the new node right after the head sentinel.
            const newNode = { key, value, prev: this.head, next: this.head.next };
            this.head.next.prev = newNode;
            this.head.next = newNode;
            this.cache.set(key, newNode);

            // Evict the least recently used node (the one just before the tail sentinel).
            if (this.cache.size > this.capacity) {
                const tailKey = this.removeTail();
                this.cache.delete(tailKey);
            }
        }
    }

    moveToHead(node) {
        // Unlink the node, then re-link it right after the head sentinel.
        node.prev.next = node.next;
        node.next.prev = node.prev;
        node.prev = this.head;
        node.next = this.head.next;
        this.head.next.prev = node;
        this.head.next = node;
    }

    removeTail() {
        // Unlink the node just before the tail sentinel and return its key.
        const key = this.tail.prev.key;
        this.tail.prev.prev.next = this.tail;
        this.tail.prev = this.tail.prev.prev;
        return key;
    }
}
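
For example, used as a plain key/value cache it behaves like this:

const cache = new LRUCache(2)
cache.put('a', 1)
cache.put('b', 2)
cache.get('a')    // 1 — 'a' becomes most recently used
cache.put('c', 3) // evicts 'b', the least recently used key
cache.get('b')    // -1 (miss)
cache.get('c')    // 3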

@markerikson
Contributor

markerikson commented Feb 18, 2024

Reworking this isn't a priority right now, but if you'd like to submit that as a PR we can take a look.

Note that one issue with your implementation is that it uses a Map and expects a single key, whereas our LRU implementation has an entire set of arguments and does equality checks on that.
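
Roughly, the cache key is the whole arguments list and a hit is decided by the configured equality check; something like this (an illustrative sketch, not a snippet from the source — the comparator is made up and the options shape is assumed):

import { lruMemoize } from 'reselect'

// Illustration only: the comparator is made up, and the options shape is assumed.
const memoized = lruMemoize(
  (a, b) => a.x + b.x,
  { equalityCheck: (prev, next) => prev.x === next.x, maxSize: 3 }
)

memoized({ x: 1 }, { x: 2 }) // computes 3
memoized({ x: 1 }, { x: 2 }) // cache hit: fresh object references, but equalityCheck treats the args as equal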

@kuldeepsinghborana

kuldeepsinghborana commented Feb 19, 2024

@markerikson
The LRU implementation with a doubly linked list and hashmap cannot be achieved here: the library accepts a custom equality comparator, while a hashmap only matches keys exactly (by reference or primitive value).

I tried, but 2 tests are failing because of it:
#701
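
The core mismatch in one snippet: a Map matches object keys by reference (SameValueZero), so a custom comparator never gets a chance to run:

// Why a plain Map can't honor a custom equality check:
// Map.get matches keys with SameValueZero, i.e. reference identity for objects/arrays.
const cache = new Map()

const callA = [{ id: 1 }]
const callB = [{ id: 1 }] // logically equal under a shallow/deep comparator

cache.set(callA, 'cached result')
console.log(cache.get(callA)) // 'cached result'
console.log(cache.get(callB)) // undefined — different reference, so the Map misses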

@markerikson @aryaemami59, do you have any idea if we can achieve this?

@aryaemami59
Contributor

@kuldeepsinghborana I can take a look. Just out of curiosity, are you trying to modify the existing implementation for lruMemoize, or are you trying to create a new standalone memoize function?

@kuldeepsinghborana

@aryaemami59 I am trying to modify the existing implementation

@aryaemami59
Contributor

aryaemami59 commented Feb 19, 2024

@kuldeepsinghborana Ok I'll take a look.

@EskiMojo14
Contributor

fwiw if it's not possible to replicate the existing behaviour with a linked list, I'm sure we'd be open to a new memoiser instead
