Badger is a fast, embedded key-value store written in Go. You open a database instance, perform writes inside transactions, and read or iterate over keys, all within your own process. It handles concurrency and persistence automatically, making it a good fit for caching, session storage, or local data persistence without running a separate server.
First, install the library using go get:
go get github.com/dgraph-io/badger/v4
Here is a practical example showing how to open a database, write data within a transaction, and read it back:
package main

import (
	"fmt"
	"log"

	"github.com/dgraph-io/badger/v4"
)

func main() {
	// Open the database with default options.
	db, err := badger.Open(badger.DefaultOptions("/tmp/badgerDB"))
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Write data inside a read-write transaction.
	err = db.Update(func(txn *badger.Txn) error {
		k1 := []byte("name")
		v1 := []byte("Alice")
		return txn.Set(k1, v1)
	})
	if err != nil {
		log.Fatal(err)
	}

	// Read data inside a read-only transaction.
	err = db.View(func(txn *badger.Txn) error {
		item, err := txn.Get([]byte("name"))
		if err != nil {
			return err
		}
		val, err := item.ValueCopy(nil)
		if err != nil {
			return err
		}
		fmt.Printf("Retrieved value: %s\n", val)
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
Key things to remember: always wrap writes in db.Update and reads in db.View to ensure transactional safety. Badger stores data on disk by default, so it persists across restarts. For high-performance scenarios, you can tune options such as MemTableSize or ValueLogFileSize using the builder methods on badger.Options (for example, badger.DefaultOptions(path).WithValueLogFileSize(...)) before opening the database; note that in v4 the older MaxTableSize option no longer exists under that name. If you need to traverse a range of keys, use txn.NewIterator with badger.DefaultIteratorOptions.