Go's built-in slice and map operations generally offer O(1) performance for basic access and modification, but slice growth and map resizing can trigger O(n) costs due to underlying memory reallocation. Understanding these hidden costs is critical when processing large datasets or writing performance-sensitive code.
For slices, indexing (s[i]), appending within pre-allocated capacity, and reading the length are all O(1). append is amortized O(1), but an individual call can cost O(n) when the underlying array is full and must be reallocated and copied. To avoid this, pre-allocate slices when you know the size.
// O(n) risk: Repeated appends without capacity trigger reallocations
items := make([]string, 0)
for i := 0; i < 10000; i++ {
    items = append(items, fmt.Sprintf("item-%d", i))
}

// O(1) amortized: Pre-allocate to avoid reallocation spikes
items = make([]string, 0, 10000) // Capacity set upfront
for i := 0; i < 10000; i++ {
    items = append(items, fmt.Sprintf("item-%d", i))
}
For maps, lookups (m[key]), insertions, and deletions are O(1) on average. In the worst case (heavy hash collisions) these degrade toward O(n), though Go's randomized hash function makes this rare in practice. Growing the map during insertion costs O(n) in total, but Go spreads that work across subsequent operations, so insertion remains amortized O(1).
m := make(map[string]int)
// O(1) average: Insertion and lookup
m["key"] = 42
val, ok := m["key"]
// O(1) average: Deletion. Go maps never shrink, so delete does not
// trigger a resize (and does not return bucket memory to the runtime)
delete(m, "key")
Key Takeaways:
- Slice Access: O(1).
- Slice Append: Amortized O(1); O(n) on reallocation.
- Map Lookup/Insert/Delete: Amortized O(1); O(n) worst-case (collisions/resizing).
- Slice Copy: O(n) in the number of elements copied (e.g., s = s[:n] is O(1) because it only creates a new slice header, but copy(dst, src) copies min(len(dst), len(src)) elements).
Always pre-allocate slices when the size is known to prevent O(n) reallocation spikes in loops. For maps, ensure your keys have good distribution to minimize collision risks, though Go's standard library handles this well for most use cases.