Use sync.Map when you need a concurrent map for read-heavy workloads: keys that are written once and then read many times (such as a cache), or goroutines that each read and write disjoint sets of keys. Prefer a standard map protected by a sync.Mutex (or sync.RWMutex) for write-heavy or mixed scenarios, or when you need straightforward iteration over all keys. sync.Map optimizes for its patterns by serving most reads from an atomic, lock-free read-only copy and falling back to a mutex-guarded dirty map for writes, so readers avoid contending on a single lock; a mutex-protected map is generally faster when updates are frequent.
Here is a practical example of using sync.Map for a cache where reads dominate:
```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var cache sync.Map
	var wg sync.WaitGroup

	// Writer goroutines: each stores one key.
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			cache.Store(id, fmt.Sprintf("value-%d", id))
		}(i)
	}

	// Reader goroutines: a Load may run before the corresponding Store,
	// so the ok check is required.
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			if val, ok := cache.Load(id); ok {
				_ = val // process val
			}
		}(i)
	}

	wg.Wait()
	fmt.Println("Cache operations completed safely.")
}
```
If your workload involves frequent updates, or you need to delete keys and iterate over the entire map under a consistent view, a standard map guarded by a mutex is often faster and more predictable, and it keeps the static typing that sync.Map's any-typed API gives up:
```go
package main

import (
	"fmt"
	"sync"
)

// SafeMap wraps a plain map with an RWMutex so concurrent
// readers do not block each other.
type SafeMap struct {
	mu sync.RWMutex
	m  map[int]string
}

func NewSafeMap() *SafeMap {
	return &SafeMap{m: make(map[int]string)}
}

func (s *SafeMap) Store(key int, val string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.m[key] = val
}

func (s *SafeMap) Load(key int) (string, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	v, ok := s.m[key]
	return v, ok
}

func main() {
	s := NewSafeMap()
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			s.Store(id, fmt.Sprintf("val-%d", id))
		}(i)
	}
	wg.Wait()
}
```
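For the delete-and-iterate case mentioned above, the mutex approach extends naturally. The Delete and Range methods in this sketch mirror sync.Map's method names but are our own additions to SafeMap; unlike sync.Map.Range, this Range holds the read lock for the whole iteration, so callers see a consistent snapshot (and the callback must not call Store or Delete, which would deadlock):

```go
package main

import (
	"fmt"
	"sync"
)

// SafeMap as before, extended with Delete and Range.
type SafeMap struct {
	mu sync.RWMutex
	m  map[int]string
}

func NewSafeMap() *SafeMap {
	return &SafeMap{m: make(map[int]string)}
}

func (s *SafeMap) Store(key int, val string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.m[key] = val
}

func (s *SafeMap) Delete(key int) {
	s.mu.Lock()
	defer s.mu.Unlock()
	delete(s.m, key)
}

// Range calls f for each entry while holding the read lock, so the
// iteration sees a consistent snapshot. f must not mutate the map.
func (s *SafeMap) Range(f func(key int, val string) bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	for k, v := range s.m {
		if !f(k, v) {
			return
		}
	}
}

func main() {
	s := NewSafeMap()
	s.Store(1, "a")
	s.Store(2, "b")
	s.Delete(1)

	s.Range(func(k int, v string) bool {
		fmt.Println(k, v)
		return true
	})
}
```

Because the whole iteration runs under one RLock, other writers simply wait until Range returns; with sync.Map you would instead have to tolerate an inconsistent view or add your own external locking.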
Choose sync.Map only when your access pattern matches what it is built for: high concurrency with mostly reads, or keys that are written once and then read many times with few deletions. For a general-purpose thread-safe map, the mutex approach is usually simpler to reason about, easier to profile, and faster under mixed read/write workloads.