Write effective Go benchmarks by running the target code inside a loop controlled by b.N, using b.ResetTimer() to exclude setup costs, and measuring before you optimize rather than guessing. Always verify that your benchmark measures what you intend by checking for compiler optimizations that might eliminate your code entirely.
The most common mistake is benchmarking code that the compiler optimizes away. If your benchmark performs a calculation but never uses the result, the compiler may eliminate the call entirely, producing artificially fast times. To prevent this, use the result in a way the compiler cannot remove: assign it to a package-level variable (a "sink") or branch on it. Note that Go has no volatile qualifier, and b.ReportAllocs() only enables allocation reporting; it does nothing to prevent dead-code elimination.
Here is a practical example of a correct benchmark that calculates a factorial and ensures the result is used:
func BenchmarkFactorial(b *testing.B) {
	// Setup runs before the timed loop. Note the benchmark function
	// itself may be invoked several times with increasing b.N.
	input := 20
	for i := 0; i < b.N; i++ {
		// The code to measure
		result := factorial(input)
		// Use the result so the compiler cannot eliminate the call
		if result == 0 {
			panic("unexpected")
		}
	}
}
func factorial(n int) int {
	if n <= 1 {
		return 1
	}
	return n * factorial(n-1)
}
If your benchmark requires significant setup (like loading a large file or initializing a database connection) that you don't want to include in the timing, use b.ResetTimer() after the setup and before the loop. This tells the benchmark runner to ignore the time elapsed so far.
func BenchmarkDBQuery(b *testing.B) {
	// Setup: connect to the DB (excluded from timing)
	db := connectToDB()
	defer db.Close()
	b.ResetTimer() // Start timing here
	for i := 0; i < b.N; i++ {
		// Actual work to measure
		rows, err := db.Query("SELECT * FROM users WHERE id = ?", i)
		if err != nil {
			b.Fatal(err)
		}
		rows.Close() // Release the connection back to the pool
	}
}
Run your benchmarks with the -benchmem flag to see memory allocation statistics, which are crucial for identifying allocation-heavy code paths. Use the -benchtime flag to control how long each benchmark runs: the default is 1s, longer durations such as -benchtime=5s give more stable results on noisy hardware, and a value like -benchtime=100x pins an exact iteration count instead of a duration.
go test -bench=. -benchmem -benchtime=1s ./...
Finally, always compare benchmarks against a baseline: a single number is meaningless without context. Capture the go test -bench=BenchmarkFactorial -benchmem output before and after refactoring, and run each benchmark several times (the -count flag) so that system noise doesn't masquerade as a speedup.
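One widely used tool for this comparison is benchstat from golang.org/x/perf, which summarizes repeated runs and reports the delta with a statistical significance test (install it with go install golang.org/x/perf/cmd/benchstat@latest). A typical before/after workflow looks like this; the old.txt and new.txt filenames are arbitrary:

```shell
# Capture a baseline: 10 runs gives benchstat enough samples
go test -bench=BenchmarkFactorial -benchmem -count=10 ./... > old.txt

# ...refactor the code under test...

# Capture the new numbers and compare against the baseline
go test -bench=BenchmarkFactorial -benchmem -count=10 ./... > new.txt
benchstat old.txt new.txt
```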