Use the io.Copy function from the standard library to copy a file by reading from the source and writing to the destination in chunks, which is memory-efficient for large files. You must explicitly open both files, handle errors, and ensure the destination file is closed properly after the copy completes.
Here is a robust implementation using io.Copy, which manages its own internal buffer:
package main

import (
	"fmt"
	"io"
	"os"
)

func copyFile(src, dst string) error {
	sourceFile, err := os.Open(src)
	if err != nil {
		return fmt.Errorf("failed to open source: %w", err)
	}
	defer sourceFile.Close()

	// Note: os.Create truncates the destination if it already exists.
	// If overwriting is not acceptable, check with os.Stat first and
	// return an error when the file is present.
	destFile, err := os.Create(dst)
	if err != nil {
		return fmt.Errorf("failed to create destination: %w", err)
	}
	defer destFile.Close()

	// io.Copy streams in chunks (32 KB internal buffer by default).
	if _, err := io.Copy(destFile, sourceFile); err != nil {
		return fmt.Errorf("failed to copy data: %w", err)
	}

	// Sync to ensure data is written to disk before closing.
	if err := destFile.Sync(); err != nil {
		return fmt.Errorf("failed to sync destination: %w", err)
	}
	return nil
}

func main() {
	if err := copyFile("source.txt", "destination.txt"); err != nil {
		fmt.Println("Error:", err)
	} else {
		fmt.Println("File copied successfully")
	}
}
For a quick one-liner in a script where error handling is minimal, you can use os.WriteFile combined with os.ReadFile, but this loads the entire file into memory, making it unsuitable for large files:
data, err := os.ReadFile("source.txt")
if err != nil {
	// handle error
}
err = os.WriteFile("destination.txt", data, 0644)
if err != nil {
	// handle error
}
Always prefer io.Copy for production code: it streams data, keeping memory usage constant regardless of file size. Check the error returned by io.Copy (a non-nil error means the copy did not complete), and use defer for file closures so resources are released even when an error occurs.