You might prefer a fixed-size cache rather than a TTL, so that your cache's 
memory use never grows too large.  The logic is also simpler: just a map to 
provide the cache, and a slice to constrain the cache size.  It's almost too 
short to be worth a library; see below.  Off the top of my head -- not run, 
but you'll get the idea:

type Payload struct {
    Key string // typical, but your key doesn't have to be a string; any
               // valid map key type will work
    pos int    // where we are in Cache.Order
    Val string // change the type from string to store your data; you can
               // add more fields after Val if you desire
}

type Cache struct {
    Map     map[string]*Payload
    Order   []*Payload
    MaxSize int
}

func NewCache(maxSize int) *Cache {
    return &Cache{
        Map: make(map[string]*Payload),
        MaxSize: maxSize,
    }
}

func (c *Cache) Get(key string) *Payload {
    return c.Map[key]
}

func (c *Cache) Set(p *Payload) {
     old, already := c.Map[p.Key]
     if already {
          // Update logic; may not be needed if the key -> value mapping is
          // immutable. Remove the old payload stored under this key and
          // reindex the entries that shift left.
          c.Order = append(c.Order[:old.pos], c.Order[old.pos+1:]...)
          for i := old.pos; i < len(c.Order); i++ {
               c.Order[i].pos = i
          }
     }

     // add the new payload at the end (newest position)
     p.pos = len(c.Order)
     c.Order = append(c.Order, p)
     c.Map[p.Key] = p

     // keep the cache size bounded
     if len(c.Order) > c.MaxSize {
          // delete the oldest
          kill := c.Order[0]
          delete(c.Map, kill.Key)
          c.Order = c.Order[1:]
          // the remaining entries all shifted one slot toward the front
          for i, q := range c.Order {
               q.pos = i
          }
     }
}
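
Usage would look something like this (again untested; assumes fmt is 
imported, and the key/value strings are just placeholders):

c := NewCache(2)
c.Set(&Payload{Key: "a", Val: "1"})
c.Set(&Payload{Key: "b", Val: "2"})
c.Set(&Payload{Key: "c", Val: "3"}) // cache is full, so "a" (the oldest) is evicted
fmt.Println(c.Get("b").Val) // prints 2
fmt.Println(c.Get("a"))     // prints <nil>; "a" was evicted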

If you really need a time to live and are willing to let memory grow 
uncontrolled, then MaxSize would change from an int to a time.Duration, each 
Payload would record when it was added, and the eviction condition would 
change from being size based to being time.Since() based.
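
Roughly like this -- untested, ignoring the duplicate-key update logic, and 
assuming the time package is imported and Payload gains an unexported 
added time.Time field:

type Cache struct {
    Map   map[string]*Payload
    Order []*Payload
    TTL   time.Duration
}

func (c *Cache) Set(p *Payload) {
    p.added = time.Now()
    c.Order = append(c.Order, p)
    c.Map[p.Key] = p

    // Order is in insertion order, so expired entries sit at the front.
    for len(c.Order) > 0 && time.Since(c.Order[0].added) > c.TTL {
        delete(c.Map, c.Order[0].Key)
        c.Order = c.Order[1:]
    }
}

Get would also want to check time.Since(p.added) before returning a hit, 
since an entry can expire between calls to Set.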

Also look at sync.Map if you need goroutine safety. Obviously you can just 
add a sync.Mutex to Cache and lock during Set/Get, but for heavy read 
contention sync.Map can perform better.
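
The mutex version is just this sketch (assumes the sync package is imported; 
Set is guarded the same way):

type Cache struct {
    mu      sync.Mutex
    Map     map[string]*Payload
    Order   []*Payload
    MaxSize int
}

func (c *Cache) Get(key string) *Payload {
    c.mu.Lock()
    defer c.mu.Unlock()
    return c.Map[key]
}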


On Thursday, December 9, 2021 at 9:59:32 AM UTC-6 Rakesh K R wrote:

> Hi,
> In my application I have this necessity of looking into DBs to get the 
> data (read-intensive application), so I am planning to store these data 
> in-memory with some TTL for expiry.
> Can someone suggest some in-memory caching libraries with better 
> performance available to suit my requirement?
>
>
