How to Use Channels as Semaphores in Go

Use a buffered channel with a capacity equal to the desired concurrency limit, where acquiring a permit means sending a value into the channel and releasing it means receiving a value. This pattern blocks goroutines when the channel is full, effectively limiting how many operations run simultaneously without needing explicit locks.

Here is a practical example that runs ten tasks but allows at most five to execute at once:

package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	// Create a semaphore with a limit of 5
	sem := make(chan struct{}, 5)
	var wg sync.WaitGroup

	for i := 1; i <= 10; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()

			// Acquire permit (blocks if 5 are already in use)
			sem <- struct{}{}
			defer func() { <-sem }() // Release permit

			fmt.Printf("Task %d started\n", id)
			time.Sleep(2 * time.Second) // Simulate work
			fmt.Printf("Task %d finished\n", id)
		}(i)
	}

	wg.Wait()
}

In this code, sem acts as a bucket holding up to 5 tokens. When a goroutine sends struct{}{} to sem, it consumes a token. If the bucket is full, the send operation blocks until another goroutine releases a token by receiving from the channel. The defer statement ensures the permit is always returned, even if the function panics.
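Because acquiring is just a channel send, it also composes with select. If you would rather skip work than wait when all permits are taken, a default case gives you a non-blocking acquire. A minimal sketch (tryAcquire is an illustrative helper name, not a standard library function):

```go
package main

import "fmt"

// tryAcquire attempts to take a permit without blocking.
// It reports whether a permit was obtained.
func tryAcquire(sem chan struct{}) bool {
	select {
	case sem <- struct{}{}:
		return true
	default:
		return false
	}
}

func main() {
	sem := make(chan struct{}, 2)

	fmt.Println(tryAcquire(sem)) // first permit: true
	fmt.Println(tryAcquire(sem)) // second permit: true
	fmt.Println(tryAcquire(sem)) // bucket full: false

	<-sem                        // release one permit
	fmt.Println(tryAcquire(sem)) // a permit is free again: true
}
```

Callers that get false back can drop the task, queue it elsewhere, or retry later instead of blocking.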

You can also wrap this logic in a reusable function for cleaner code:

func limitConcurrency(limit int, jobs []func()) {
	sem := make(chan struct{}, limit)
	var wg sync.WaitGroup

	for _, job := range jobs {
		wg.Add(1)
		go func(f func()) {
			defer wg.Done()
			sem <- struct{}{} // Acquire
			defer func() { <-sem }() // Release
			f()
		}(job)
	}
	wg.Wait()
}

This approach is preferred over sync.Mutex for limiting concurrency because a mutex only permits one holder at a time, while a buffered channel allows up to N, and the blocking behavior falls out of the channel semantics rather than manual bookkeeping. Paired with defer for the release, it is also less prone to leaked permits. It fits Go's concurrency model well, using a channel for synchronization rather than just data transfer. Two caveats: the channel capacity must be set at creation time, since a channel cannot be resized, and if you need weighted permits, the golang.org/x/sync/semaphore package provides a Weighted type built for that purpose.