Performance
53 articles
Algorithmic Complexity (Big O) of Go Built-In Operations (slice, map)
Go's built-in slice and map operations generally offer O(1) performance for basic access and modification, but slice growth and map resizing can trigger O(n) costs due to underlying memory reallocation.
Benchmark code
Write benchmark functions with the `Benchmark` prefix and the signature `func BenchmarkXxx(b *testing.B)`, then run them using `go test -bench=.`
How Devirtualization Works in Go
Go devirtualization optimizes interface calls by replacing indirect dispatch with direct function calls using static analysis or runtime profiling data.
How Fast Is Go Compared to Other Languages
Go offers near-C++ execution speed with faster compilation than C++ and significantly outperforms interpreted languages like Python and Ruby.
How Inlining Works in the Go Compiler
Inlining replaces function calls with their code bodies to improve performance, controlled by compiler heuristics and disabled per-build with -gcflags=-l.
How Link-Time Optimization Works in Go
Go does not support traditional Link-Time Optimization; use -ldflags and inlining directives for performance tuning.
How the Go Garbage Collector Works
Go uses a concurrent tri-color mark-and-sweep garbage collector, with only brief stop-the-world phases, to automatically manage memory and reclaim unused objects.
How the Go Garbage Collector Works (Tricolor Mark and Sweep)
Go uses a concurrent tricolor mark-and-sweep garbage collector to automatically reclaim unused memory with minimal pause times.
How to Analyze CPU Profiles in Go
Generate a CPU profile with go test -cpuprofile and analyze it using go tool pprof to identify performance bottlenecks.
How to Analyze Memory (Heap) Profiles in Go
Generate and analyze Go heap profiles using go tool pprof to identify memory allocation hotspots and leaks.
How to Avoid Common Performance Pitfalls in Go
Avoid common Go performance pitfalls by checking GODEBUG runtime settings and configuring HTTP transport options for connection reuse.
How to Detect and Fix Memory Leaks in Go
Detect Go memory leaks by enabling the Address Sanitizer with -asan or analyzing heap profiles using go tool pprof.
How to Inspect Compiler Optimizations in Go
Inspect Go compiler optimizations by running go tool compile with the -m flag to see optimization decisions or -S for assembly output.
How to Monitor GC Performance in Go
Monitor Go GC performance by setting GODEBUG=gctrace=1 or using the runtime/metrics package to track pause times and heap usage.
How to Optimize Concurrent Go Programs
Reduce lock contention and garbage collection overhead in Go by using sync.Mutex for critical sections and sync.Pool for object reuse.
How to Optimize Database Access in Go
Optimize Go database access by configuring connection pools, using prepared statements, and batching queries to reduce latency.
How to Optimize JSON Encoding/Decoding in Go
Speed up Go JSON processing by reusing buffers and pre-allocating memory to reduce runtime allocations.
How to Optimize String Concatenation in Go
Use strings.Builder instead of the + operator to efficiently concatenate strings in Go without excessive memory allocation.
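A minimal sketch of the technique (`concatWords` is a hypothetical helper): the Builder grows one buffer instead of allocating a new string per `+`:

```go
package main

import (
	"fmt"
	"strings"
)

// concatWords builds one string from many pieces with a single
// growing buffer instead of one allocation per + operation.
func concatWords(words []string) string {
	var b strings.Builder
	// Optionally reserve capacity up front when the total size is known.
	n := 0
	for _, w := range words {
		n += len(w)
	}
	b.Grow(n)
	for _, w := range words {
		b.WriteString(w)
	}
	return b.String()
}

func main() {
	fmt.Println(concatWords([]string{"go", "pher"})) // gopher
}
```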
How to Preallocate Slices for Performance in Go
Preallocate Go slices using make with a capacity argument to reduce memory reallocations and improve performance.
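A small sketch of the pattern (`collectSquares` is a hypothetical example): passing a capacity to `make` means `append` never has to reallocate and copy the backing array:

```go
package main

import "fmt"

// collectSquares preallocates the result slice so append never
// has to grow the backing array mid-loop.
func collectSquares(n int) []int {
	out := make([]int, 0, n) // length 0, capacity n
	for i := 0; i < n; i++ {
		out = append(out, i*i)
	}
	return out
}

func main() {
	fmt.Println(collectSquares(5)) // [0 1 4 9 16]
}
```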
How to Profile a Go Program with pprof
Profile a Go program by recording CPU usage to a file with runtime/pprof and analyzing it with go tool pprof.
How to Profile and Optimize HTTP Servers in Go
Enable Go HTTP server profiling by importing net/http/pprof and exposing endpoints to capture and analyze CPU usage.
How to Profile Compile Times in Go
Profile Go compile times by running go build with -gcflags to see timing breakdowns for each compilation phase.
How to Reduce Binary Size of Go Programs
Use the -ldflags="-s -w" and -trimpath flags with go build to strip debug info and reduce binary size.
How to Reduce GC Pressure in Go
Reduce Go GC pressure by minimizing heap allocations and reusing objects via sync.Pool.
How to Reduce Go Binary Size (ldflags, UPX, strip)
Shrink Go binaries by stripping debug symbols with ldflags and compressing with UPX.
How to Reduce Memory Allocations in Go
Reduce Go memory allocations by reusing buffers with sync.Pool, avoiding repeated string concatenation, and (behind GOEXPERIMENT=arenas) using the experimental arena package for bulk memory management.
How to Tune the Go Garbage Collector (GOGC, GOMEMLIMIT)
Set GOGC to adjust GC frequency or GOMEMLIMIT to cap memory usage in Go applications.
How to Tune the Go Garbage Collector with GOGC
Tune Go garbage collection frequency by setting the GOGC environment variable to a percentage target.
How to Use benchstat to Compare Benchmark Results
Use benchstat to statistically compare Go benchmark results by running tests multiple times and piping the output files into the tool for analysis.
How to Use Compiler Directives (//go:noinline, //go:nosplit)
Use //go:noinline and //go:nosplit directives before a function to control compiler inlining and stack growth behavior.
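A minimal sketch of the directive's placement, on the line immediately before the function (`add` is a hypothetical example):

```go
package main

import "fmt"

// add stays an out-of-line call; without the directive the compiler
// would normally inline such a trivial function.
//
//go:noinline
func add(a, b int) int {
	return a + b
}

func main() {
	fmt.Println(add(2, 3)) // 5
}
```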
How to Use CPU and Memory Profiles in CI
Use the runtime/pprof package to generate CPU and memory profiles in Go, then analyze them with go tool pprof in your CI pipeline.
How to Use Escape Analysis in Go (go build -gcflags="-m")
Run `go build -gcflags="-m"` to enable escape analysis output, which tells you whether variables are allocated on the stack or heap.
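A small sketch of what the output distinguishes (function names are hypothetical): building this with `-gcflags="-m"` reports that the pointer-returning variable is moved to the heap:

```go
package main

import "fmt"

// onStack: x is returned by value, so it can live in the frame.
func onStack() int {
	x := 42
	return x
}

// onHeap: the returned pointer outlives the frame, so escape
// analysis reports "moved to heap: x" under -gcflags="-m".
func onHeap() *int {
	x := 42
	return &x
}

func main() {
	fmt.Println(onStack(), *onHeap())
}
```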
How to Use GOMEMLIMIT (Go 1.19+)
Set GOMEMLIMIT via environment variable or runtime/debug.SetMemoryLimit to cap Go application memory usage.
How to Use go tool pprof for Performance Profiling
Use go tool pprof to analyze CPU and memory profiles and identify performance bottlenecks in Go applications.
How to Use go tool trace for Execution Tracing
Use go tool trace to visualize execution traces generated by runtime/trace or go test -trace.
How to Use go tool trace for Goroutine and Latency Analysis
Use go tool trace with a trace file to visualize goroutine scheduling and latency for performance debugging.
How to Use net/http/pprof for Live Profiling
Import net/http/pprof to expose profiling endpoints at /debug/pprof/ for live performance analysis.
How to Use runtime.ReadMemStats in Go
Use runtime.ReadMemStats to get current memory allocation stats like total allocs and GC counts.
How to Use sync.Pool to Reduce GC Pressure
Reduce GC pressure by caching reusable objects in a sync.Pool and retrieving them with Get() instead of allocating new instances.
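A small sketch of the pattern with reusable byte buffers (`bufPool` and `render` are hypothetical names); note the Reset, since a pooled object keeps its old contents:

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool hands out reusable buffers; New runs only when the pool is empty.
var bufPool = sync.Pool{
	New: func() any { return new(bytes.Buffer) },
}

func render(name string) string {
	buf := bufPool.Get().(*bytes.Buffer)
	buf.Reset() // clear leftover contents from a previous use
	defer bufPool.Put(buf)
	fmt.Fprintf(buf, "hello, %s", name)
	return buf.String()
}

func main() {
	fmt.Println(render("gopher"))
}
```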
How to Write Effective Benchmarks in Go
Write effective Go benchmarks by ensuring your `Benchmark` functions run the target code inside a loop controlled by `b.N`, avoiding premature optimization, and using `b.ResetTimer()` to exclude setup costs.
Interface Performance in Go: Cost of Dynamic Dispatch
Dynamic dispatch in Go adds runtime overhead due to method lookup and prevents inlining, making concrete types faster for performance-critical code.
JSON Performance in Go: encoding/json vs json-iterator vs sonic
Compare Go JSON libraries: use encoding/json for safety, json-iterator for easy speed gains, or sonic for maximum performance.
Performance Comparison: net/http vs Gin vs Echo vs Fiber
net/http is the standard baseline, Fiber typically benchmarks fastest thanks to fasthttp, and Gin/Echo offer balanced performance; benchmark your specific workload to decide.
Performance Cost of Reflection in Go
Reflection in Go is significantly slower than static code because it requires runtime type inspection, so avoid it in performance-critical paths.
Performance Implications of Cgo
Cgo slows down Go programs due to function call overhead and garbage collection complexity, requiring minimized cross-boundary calls for performance.
Performance of Generics in Go: Monomorphization vs Dictionary Passing
Go implements generics with GC-shape stenciling plus dictionaries, a hybrid of monomorphization and dictionary passing: types sharing a memory shape share generated code and receive type details through a run-time dictionary, so some generic calls carry overhead that full monomorphization would avoid.
Performance of Iterators vs Slices in Go
Slices outperform iterators in Go due to lower overhead, making them the default choice for simple iteration tasks.
Profile with pprof
Profile Go programs by capturing CPU or memory data with runtime/pprof and analyzing it using the pprof tool.
Reduce GC pressure
GODEBUG settings toggle runtime compatibility behavior and are not a GC tuning knob; reduce GC pressure by cutting heap allocations and reusing objects instead.
Reduce memory allocations
Reduce memory allocations by using the experimental arena package (behind GOEXPERIMENT=arenas) to allocate and free memory in bulk, bypassing the garbage collector.
Stack vs Heap in Go: How Allocation Decisions Are Made
Go uses escape analysis to automatically allocate variables to the stack or heap based on their lifetime.
sync Pool for performance
Use sync.Pool to cache and reuse objects, reducing allocation overhead and garbage collection pressure.
What Is PGO (Profile-Guided Optimization) in Go and How to Use It
PGO in Go optimizes code by using runtime profiling data to devirtualize calls and inline hot functions, requiring a profile generation step followed by a rebuild with the `-pgo` flag.