This article is split into two parts so you can use it as either a reference or a hands-on tutorial.
Part 1 is a tour of Go's language fundamentals - everything from := to
goroutines and context.Context. Part 2 takes those building blocks and walks
you through creating a small product-management microservice using
hexagonal architecture, CQRS, interfaces,
gin, zerolog, google/uuid, and uber/fx for
dependency injection.
By the end you will understand both the what (Go syntax and stdlib) and the why (how to structure a Go service so its core business logic stays independent of the web framework, the database, the logger, or any other infrastructure choice).
go commands
- go mod init - initialises a new module (creates go.mod).
- go get - adds or updates a dependency, e.g. go get github.com/sirupsen/logrus@v1.10.0.
- go mod download - downloads everything listed in go.mod into the module cache
($GOPATH/pkg/mod) without changing go.mod or go.sum. It verifies
downloads against go.sum and fails on a checksum mismatch.
- go mod tidy - synchronises module files with your code: adds missing requirements to
go.mod; removes requirements from go.mod that the code no longer uses; updates
go.sum checksums accordingly.
- go build - compiles your code into an executable named after the folder
(or use -o to choose).
- go run main.go - compiles AND runs immediately (no binary kept around).
- go test ./... - runs every *_test.go in the module. go test -v = verbose, go test -cover = coverage.
- go env - shows Go env vars (GOPATH, GOROOT, etc.).
- go list -m all - lists every module the project depends on.
go build on Windows produces app.exe; on macOS / Linux it produces an
extension-less binary like ./app. The binary is self-contained - it can be shipped to a machine
that doesn't even have Go installed.
The main package is special: it defines the program's entry point and must contain
func main(). You cannot have two main functions in the same package.
Every module has a go.mod file at its root:
// go.mod
module myapp
go 1.20
require github.com/sirupsen/logrus v1.10.0
go.mod vs go.sum
- go.mod - contains which versions of dependencies your module uses (similar to
pom.xml in Maven projects).
- go.sum - contains checksums for all dependencies listed in go.mod.
A checksum is a hash of a file or data that ensures data integrity; in Go modules,
it verifies that downloaded dependencies haven't been tampered with.
The init() function
Runs automatically before main(); useful for one-off setup.
func init() {
fmt.Println("initialized")
}
Variables, constants, and :=
var a int = 10
var b = 20 // type inferred
c := 30 // short declaration, only inside functions
const pi = 3.14
const MAX int = 100
var investmentAmount, years float64 = 1000, 10
var x, name = 1000, "ten" // multiple inferred types in one line
Every uninitialised variable gets a default zero value:
- int -> 0
- float -> 0.0
- bool -> false
- string -> ""
- pointer, slice, map -> nil
var amount float64
fmt.Print("Investment Amount: ")
fmt.Scan(&amount) // & passes a pointer so Scan can write into 'amount'
fmt.Println("Hello", "World") // Hello World\n
fmt.Printf("%s is %d years old\n", n, a) // Alice is 25 years old
msg := fmt.Sprintf("%s is %d", n, a) // returns formatted string
Go has only for - no while.
// classic for
for i := 0; i < 2; i++ { /* ... */ }
// "while"
for someBool { /* ... */ }
// infinite loop
for { /* ... */ }
break exits the loop; continue jumps to the next iteration.
switch
You don't need break between cases - only one case ever runs. Note: break
inside a switch only exits the switch, not any enclosing loop. Use an
if if you need to break out of a loop from inside a switch.
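Both behaviours can be seen in one place. This sketch is illustrative (classify and the loop are not from the article); it also shows a labeled break, which is the other idiomatic way, besides an if, to exit an enclosing loop from inside a switch:

```go
package main

import "fmt"

// classify shows that only the first matching case runs - no break needed.
func classify(n int) string {
	switch {
	case n < 0:
		return "negative"
	case n == 0:
		return "zero"
	default:
		return "positive"
	}
}

func main() {
loop:
	for i := 0; ; i++ {
		switch i {
		case 3:
			break loop // exits the for loop, not just the switch
		default:
			fmt.Println("checking", i)
		}
	}
	fmt.Println(classify(-5))
}
```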
Go avoids exceptions. Functions that can fail return an extra error value:
import "errors"
func getBalanceFromFile() (string, error) {
data, err := os.ReadFile("balance.txt")
if err != nil {
return "", errors.New("failed to find balance file")
}
return string(data), nil
}
range
for i, v := range slice { /* ... */ }
for k, v := range mapVar { /* ... */ }
for i, r := range "hello" { /* r is a rune */ }
The blank identifier _
Used to ignore values you don't need:
_, err := someFunc()
age := 32
agePtr := &age // type: *int
fmt.Println(*agePtr) // 32 - dereference to read
*agePtr = 20 // write through the pointer
Pointers avoid copying large structs when passing them as arguments.
| Feature | Array | Slice |
|---|---|---|
| Size | Fixed | Dynamic |
| Type | [5]int | []int |
arr := [3]int{1, 2, 3} // type [3]int
sli := []int{1, 2, 3} // type []int
When loading data from a database you can't know the count upfront, so you almost always use slices. Internally a slice is backed by an array; when you append past its capacity, Go allocates a new (larger) array and copies into it.
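You can watch that reallocation happen by checking the address of the first element before and after an append that exceeds capacity. A small sketch (grow is an illustrative helper; the exact grown capacity is an implementation detail, so the code only reports it):

```go
package main

import "fmt"

// grow returns len, cap, and whether an over-capacity append
// moved the slice to a new backing array.
func grow() (int, int, bool) {
	s := make([]int, 0, 2)
	s = append(s, 1, 2) // fills the backing array exactly (len 2, cap 2)
	before := &s[0]
	s = append(s, 3) // past capacity: Go allocates a bigger array and copies
	return len(s), cap(s), &s[0] != before
}

func main() {
	l, c, moved := grow()
	fmt.Println("len:", l, "cap:", c, "reallocated:", moved)
}
```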
package main
import "fmt"
type User struct {
firstName string
lastName string
}
// Constructor convention: NewUser (capitalised) is exported, newUser is package-private.
func newUser(firstName, lastName string) User {
return User{firstName, lastName}
}
// "(u User)" is a value receiver - the method gets a COPY.
func (u User) outputUserDetails() {
fmt.Println(u.firstName, u.lastName)
}
// "(u *User)" is a pointer receiver - the method can MUTATE the original.
func (u *User) clearFirstName() { u.firstName = "" }
func main() {
appUser := User{firstName: "Ilman", lastName: "Iqbal"}
appUser.outputUserDetails() // prints original
appUser.clearFirstName() // mutates original
appUser.outputUserDetails() // first name is empty now
}
Structs have a fixed set of fields known at compile time. Maps have arbitrary keys you can add at runtime.
m := make(map[string]int)
m["a"] = 1
websites := map[string]string{
"Google": "https://google.com",
"AWS": "https://aws.com",
}
fmt.Println(websites["Google"])
websites["Linkedin"] = "https://linkedin.com"
delete(websites, "Google")
Note: built-in maps are not safe for concurrent use. For shared maps
between goroutines use sync.Mutex / sync.RWMutex or sync.Map.
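One idiom worth adding here is the comma-ok lookup: reading a missing key silently returns the zero value, so the second return value is the only way to tell "absent" from "present but empty" (lookup is an illustrative wrapper):

```go
package main

import "fmt"

// lookup wraps the comma-ok idiom for map access.
func lookup(m map[string]string, k string) (string, bool) {
	v, ok := m[k] // ok is false when the key is absent
	return v, ok
}

func main() {
	websites := map[string]string{"Google": "https://google.com"}
	if url, ok := lookup(websites, "Google"); ok {
		fmt.Println("found:", url)
	}
	if _, ok := lookup(websites, "Bing"); !ok {
		fmt.Println("Bing not present")
	}
}
```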
Basic types:
- int, int32, int64
- float32, float64
- string, bool
- byte -> alias for uint8 (raw 8-bit)
- rune -> alias for int32 (a Unicode code point)
byte vs rune (a critical difference)
var word string = "Toñito"
for _, r := range word {
fmt.Printf("rune: %v, string: %s\n", r, string(r))
}
fmt.Println("len(word):", len(word)) // 7 - bytes
fmt.Println("len([]rune(word)):", len([]rune(word))) // 6 - runes
Use rune when working with Unicode text, counting characters, validating input, or
processing user-entered text.
import "unicode"
for _, r := range s {
if unicode.IsDigit(r) {
fmt.Println("Digit")
}
}
make vs new
- new - allocates and returns a pointer to a zero value.
- make - initialises a slice, map, or channel (cannot be used for arrays).
p := new(int) // *int, pointing at 0
s := make([]int, 5) // []int{0,0,0,0,0}
make([]int, 0, 10) // empty, capacity 10 - good when you'll append a known max
make([]int, 0) // empty, no capacity hint
make([]int, 10) // ten zero values - good when you'll assign by index
make(map[int]bool) // empty, no size hint
make(map[int]bool, 5) // empty, capacity HINT 5 (perf, not a fixed limit)
make(chan int) // unbuffered. Send blocks until a receiver is ready.
make(chan int, 2) // buffered, capacity 2. Send blocks only when buffer is full.
func add(a, b int) int { return a + b }
// multiple return values - extremely common with errors
func divide(a, b int) (int, error) {
if b == 0 {
return 0, errors.New("division by zero")
}
return a / b, nil
}
defer, panic, and recover
- defer - executes the call when the surrounding function returns. LIFO order.
- panic - aborts the normal flow and starts unwinding.
- recover - regains control of a panicking goroutine; only works inside a deferred function.
func processRequest() {
defer fmt.Println("Cleanup: closing DB connection")
defer func() {
if r := recover(); r != nil {
fmt.Println("Recovered from panic:", r)
}
}()
fmt.Println("Processing request...")
panic("database connection lost")
fmt.Println("Never executes")
}
func main() {
processRequest()
fmt.Println("Service continues running")
}
/* Output:
Processing request...
Recovered from panic: database connection lost
Cleanup: closing DB connection
Service continues running
*/
Goroutines are scheduled across all CPU cores by default (the runtime uses the equivalent of runtime.GOMAXPROCS(runtime.NumCPU())). A worker pool reuses a fixed set of goroutines to process a stream of jobs - this caps CPU and memory usage.
func worker(id int, jobs <-chan int) {
for job := range jobs {
fmt.Printf("Worker %d processing job %d\n", id, job)
time.Sleep(time.Second)
}
}
func main() {
const numWorkers, numJobs = 3, 10
jobs := make(chan int)
for i := 1; i <= numWorkers; i++ {
go worker(i, jobs)
}
for j := 1; j <= numJobs; j++ {
jobs <- j
}
close(jobs)
time.Sleep(5 * time.Second)
}
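The time.Sleep(5 * time.Second) at the end is only a guess at how long the jobs take. A sync.WaitGroup (covered below) makes completion explicit and deterministic. A sketch of the same pool under that change (run is an illustrative wrapper that counts completed jobs):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// run processes numJobs jobs with a fixed pool of numWorkers goroutines
// and returns how many jobs were completed.
func run(numWorkers, numJobs int) int64 {
	var processed int64
	jobs := make(chan int)
	var wg sync.WaitGroup
	for i := 0; i < numWorkers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for range jobs { // loop ends when jobs is closed and drained
				atomic.AddInt64(&processed, 1)
			}
		}()
	}
	for j := 1; j <= numJobs; j++ {
		jobs <- j
	}
	close(jobs)
	wg.Wait() // returns once every worker has exited
	return processed
}

func main() {
	fmt.Println("processed:", run(3, 10)) // processed: 10
}
```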
sync.Mutex
Only one goroutine can hold the lock at any time. Anyone else calling Lock() blocks until
the holder calls Unlock().
var mu sync.Mutex
counter := 0
go func() {
mu.Lock()
counter++
mu.Unlock()
}()
go func() {
mu.Lock()
fmt.Println(counter)
mu.Unlock()
}()
sync.RWMutex - reader/writer locks
RWMutex distinguishes between readers and writers. Multiple readers can hold the lock
at the same time; writers are exclusive.
rw.Lock() // write lock - blocks all reads & other writes
rw.Unlock()
rw.RLock() // read lock - allows other readers, blocks writers
rw.RUnlock()
With a plain Mutex, three goroutines reading the same map serialise:
Goroutine 1: LOCK -> READ -> UNLOCK
Goroutine 2: WAIT -> LOCK -> READ -> UNLOCK
Goroutine 3: WAIT -> LOCK -> READ -> UNLOCK
With RWMutex they all read concurrently:
Goroutine 1: RLOCK -> READ -> RUNLOCK
Goroutine 2: RLOCK -> READ -> RUNLOCK
Goroutine 3: RLOCK -> READ -> RUNLOCK
Rule of thumb: mostly reads -> RWMutex; many writes or simple code
-> plain Mutex. RLock() must only wrap reads -
writing while holding an RLock is a race condition.
Go's RWMutex gives writer priority: if a writer is waiting, new readers
are blocked until that writer finishes. This avoids writer starvation.
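Putting those rules together, here is a sketch of a read-mostly cache guarded by an RWMutex (cache, get, and set are illustrative names, not from the article):

```go
package main

import (
	"fmt"
	"sync"
)

// cache is a read-mostly map guarded by an RWMutex.
type cache struct {
	mu   sync.RWMutex
	data map[string]string
}

func (c *cache) get(k string) (string, bool) {
	c.mu.RLock() // many readers may hold this simultaneously
	defer c.mu.RUnlock()
	v, ok := c.data[k]
	return v, ok
}

func (c *cache) set(k, v string) {
	c.mu.Lock() // exclusive: blocks all readers and other writers
	defer c.mu.Unlock()
	c.data[k] = v
}

func main() {
	c := &cache{data: make(map[string]string)}
	c.set("region", "eu-west-1")
	v, ok := c.get("region")
	fmt.Println(v, ok)
}
```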
sync.Once
Ensures a piece of code runs exactly once - perfect for lazy initialisation.
var (
prom *ginprometheus.Prometheus
once sync.Once
)
func middleware() {
once.Do(func() {
prom = ginprometheus.NewPrometheus("gin")
})
}
sync.Map
A concurrent map optimised for read-heavy workloads (does not use an RWMutex internally).
var m sync.Map
m.Store("key", "value")
v, ok := m.Load("key")
sync/atomic
Atomic operations complete in a single CPU instruction - no other goroutine can ever see a half-written value, and there's no lock to deadlock.
import "sync/atomic"
var counter int64
go func() { atomic.AddInt64(&counter, 1) }()
Don't use multiple atomic ops to keep two related variables in sync - they aren't a single instruction together.
time.Ticker
ticker := time.NewTicker(5 * time.Second)
defer ticker.Stop()
for i := 1; i <= 10; i++ {
<-ticker.C
fmt.Println("Processing request", i, "at", time.Now())
}
For a rate limiter at N requests per second:
rate := 5
ticker := time.NewTicker(time.Second / time.Duration(rate))
context.Context
context.Context carries cancellation signals, timeouts, and request-scoped values.
It is heavily used in HTTP handlers, DB drivers, and any RPC client.
// 1. Timeout - auto-cancels after 2s
ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
defer cancel() // always release resources
go func(ctx context.Context) {
select {
case <-time.After(3 * time.Second):
fmt.Println("Work completed")
case <-ctx.Done():
fmt.Println("Context cancelled:", ctx.Err())
}
}(ctx)
// 2. Request-scoped values
ctx = context.WithValue(context.Background(), "userID", 12345)
v := ctx.Value("userID")
// 3. Manual cancellation
ctx2, cancel2 := context.WithCancel(context.Background())
cancel2() // triggers ctx2.Done()
The net/http package
http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
w.Write([]byte("OK"))
})
http.ListenAndServe(":8080", nil)
(The default router is ServeMux; Go 1.22+ supports method-prefixed patterns like
"GET /products".)
A channel is a typed pipe used by goroutines to send and receive values safely. There are two categories: unbuffered and buffered.
Unbuffered channels (make(chan T))
Capacity 0 - sender and receiver must "shake hands" at the same instant.
ch := make(chan int)
go func() {
time.Sleep(time.Second)
fmt.Println("Received:", <-ch)
}()
ch <- 1 // blocks until the goroutine reads
select {
case ch <- 3:
fmt.Println("Sent")
default:
fmt.Println("Send would block (no receiver)")
}
Buffered channels (make(chan T, n))
ch := make(chan int, 1)
ch <- 1 // buffer was empty -> succeeds
go func() {
time.Sleep(time.Second)
fmt.Println("Received:", <-ch)
}()
ch <- 2 // blocks until the receiver frees a slot
Signal channels (chan struct{})
A channel of empty struct carries no data, consumes zero bytes per send, and is the idiomatic way to notify completion or coordinate shutdowns.
done := make(chan struct{})
go func() {
time.Sleep(time.Second)
done <- struct{}{}
}()
<-done
fmt.Println("Main received signal")
sync.WaitGroup
Without it, main() can finish before its child goroutines run. WaitGroup is
a counter: tell it how many goroutines to wait for, mark each as done, and Wait() until
the counter reaches zero.
var wg sync.WaitGroup
wg.Add(3)
for i := 1; i <= 3; i++ {
go func(n int) {
defer wg.Done()
fmt.Println("Worker", n)
}(i)
}
wg.Wait()
A deadlock is "all goroutines are asleep, waiting for something that will never happen." The most
common cause is forgetting an Unlock. The fix is to always pair them with
defer:
mu.Lock()
defer mu.Unlock()
Two or more goroutines touch the same memory and at least one writes, without synchronisation. Go ships a built-in race detector:
go run -race main.go
If it prints WARNING: DATA RACE, you forgot a lock or a channel.
var mu sync.Mutex
var wg sync.WaitGroup
counter := 0
for i := 0; i < 1000; i++ {
wg.Add(1)
go func() {
defer wg.Done()
mu.Lock()
counter++
mu.Unlock()
}()
}
wg.Wait()
fmt.Println(counter) // 1000
counter := 0
ch := make(chan int)
done := make(chan struct{})
var wg sync.WaitGroup
go func() {
    for v := range ch { counter += v } // single goroutine owns counter
    close(done)
}()
for i := 0; i < 1000; i++ {
    wg.Add(1)
    go func() { defer wg.Done(); ch <- 1 }()
}
wg.Wait(); close(ch)
<-done // wait for the consumer to drain the channel before reading counter
fmt.Println(counter) // 1000
| Mutex | Channel |
|---|---|
| Protects shared memory | Avoids shared memory |
| Faster for simple cases | Safer & clearer for workflows |
| Easy to misuse (deadlocks) | Can block if misused |
| Lower overhead | More expressive |
Go's idiomatic DI is plain constructor injection - no framework required:
type Service struct { repo Repo }
func NewService(r Repo) *Service { return &Service{repo: r} }
(In Part 2 we'll graduate to uber/fx for larger graphs.)
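Fleshing that out, a service depending on a Repo port can take a real adapter in production and a fake in tests, with no framework involved (Describe and fakeRepo are illustrative names, not from the article):

```go
package main

import "fmt"

// Repo is the port: the service depends on behaviour, not a concrete store.
type Repo interface {
	Get(id string) (string, bool)
}

type Service struct{ repo Repo }

func NewService(r Repo) *Service { return &Service{repo: r} }

func (s *Service) Describe(id string) string {
	if name, ok := s.repo.Get(id); ok {
		return "product: " + name
	}
	return "unknown product"
}

// fakeRepo is an in-memory stand-in; constructor injection makes swapping it in trivial.
type fakeRepo map[string]string

func (f fakeRepo) Get(id string) (string, bool) { v, ok := f[id]; return v, ok }

func main() {
	svc := NewService(fakeRepo{"p1": "Car Wipers"})
	fmt.Println(svc.Describe("p1"))
}
```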
A few parting habits:
- Profile with pprof.
- Reuse allocations with sync.Pool.
- Accept a context.Context everywhere.
That covers the language. In Part 2 we put these primitives to work in a small but production-shaped microservice.
product-management microservice
We will build a tiny Go service called product-management. It exposes two HTTP endpoints:
- POST /products - add a product.
- GET /products - list products.
The functionality is trivial on purpose. The point is the shape of the code:
hexagonal architecture, a CQRS-style command/query split,
interfaces for ports, and uber/fx for dependency injection.
We'll walk through it step by step, starting from the smallest possible main.go.
Hexagonal architecture - also called ports and adapters - is a way of organising code so the core business logic doesn't depend on the framework, the database, or any other external concern. Instead, the core defines ports (interfaces) that describe what it needs from the outside world, and adapters are the concrete things that plug into those ports.
Three layers, in order from inside to outside:
- domain - entities and pure business rules, no I/O;
- application - use cases that orchestrate the domain, plus the ports it needs;
- adapters - everything that talks to the outside world (HTTP, databases, loggers).
The dependency direction always points inward: adapters depend on application, application depends on domain, and the domain depends on nothing. Swap MongoDB for Postgres? Write a new outbound adapter - nothing else changes.
Command Query Responsibility Segregation: split your code into a "write side" that
executes commands (e.g. AddProduct) and a "read side" that runs queries
(e.g. ListProducts). The two sides can use different code paths and even different
data stores. In our example we keep the same in-memory store but expose it under two separate
interfaces (ProductWriter and ProductReader) so each side
only sees what it needs.
Interfaces are how Go expresses "I depend on something that can do X, but I don't care exactly
who or how." They make the inversion of dependencies in hex architecture possible. The application
doesn't import a Mongo driver - it imports its own ProductWriter interface, which
happens to be implemented by an outbound adapter that uses Mongo (or Postgres, or just a
map in memory).
We begin with the simplest possible main.go - just enough to serve a single endpoint:
package main
import (
"fmt"
"github.com/gin-gonic/gin"
)
func main() {
fmt.Println("Hello, World!")
server := gin.Default()
server.GET("/products", getProducts)
server.Run(":8080")
}
func getProducts(c *gin.Context) {
c.JSON(200, gin.H{"name": "Car Wipers"})
}
That works, but everything is mixed together: routing, business logic, and "data" (a hard-coded string). We can't test the logic without spinning up a server, and there's no place to plug in a real product store. Time to refactor.
product-management/
|-- main.go <- composition root
|-- go.mod
|-- domain/product/ <- pure business rules, no I/O
| |-- product.go <- Product entity + value types
| `-- errors.go
|-- application/ <- use cases (drives the domain)
| |-- ports.go <- ProductWriter, ProductReader, IDGenerator
| |-- command_service.go <- write side of CQRS (AddProduct)
| `-- query_service.go <- read side of CQRS (ListProducts)
`-- adapters/ <- all I/O lives here
|-- in/http/ <- inbound: HTTP endpoints
`-- out/memory/ <- outbound: in-memory map (the "database")
The Product entity exposes a constructor that enforces invariants. Fields are
unexported so an invalid Product can't be created with a struct literal.
// domain/product/product.go
package product
import "strings"
type ID string
type Name string
type Price int64 // minor units (e.g. cents)
type Product struct {
id ID
name Name
price Price
}
func New(id ID, name Name, price Price) (Product, error) {
if strings.TrimSpace(string(name)) == "" {
return Product{}, ErrEmptyName
}
if price <= 0 {
return Product{}, ErrInvalidPrice
}
return Product{id: id, name: name, price: price}, nil
}
func (p Product) ID() ID { return p.id }
func (p Product) Name() Name { return p.name }
func (p Product) Price() Price { return p.price }
// domain/product/errors.go
package product
import "errors"
var (
ErrEmptyName = errors.New("product name is empty")
ErrInvalidPrice = errors.New("product price must be positive")
ErrAlreadyExists = errors.New("product already exists")
)
Ports are tiny interfaces that describe what the application needs. The split between
ProductWriter and ProductReader is the CQRS pattern at the type level.
// application/ports.go
package application
import (
"context"
"cmd/product-management/domain/product"
)
type ProductWriter interface {
Save(ctx context.Context, p product.Product) error
}
type ProductReader interface {
List(ctx context.Context) ([]product.Product, error)
}
type IDGenerator interface { NewID() product.ID }
type ProductView struct {
ID string `json:"id"`
Name string `json:"name"`
Price int64 `json:"price"`
}
func viewOf(p product.Product) ProductView {
return ProductView{ID: string(p.ID()), Name: string(p.Name()), Price: int64(p.Price())}
}
// application/command_service.go
package application
import (
"context"
"fmt"
"cmd/product-management/domain/product"
)
type CommandService struct {
writer ProductWriter
ids IDGenerator
}
func NewCommandService(w ProductWriter, ids IDGenerator) *CommandService {
return &CommandService{writer: w, ids: ids}
}
type AddProductCommand struct {
Name string
Price int64
}
type AddProductResult struct{ ID string }
func (s *CommandService) AddProduct(ctx context.Context, cmd AddProductCommand) (AddProductResult, error) {
id := s.ids.NewID()
p, err := product.New(id, product.Name(cmd.Name), product.Price(cmd.Price))
if err != nil {
return AddProductResult{}, fmt.Errorf("add product: %w", err)
}
if err := s.writer.Save(ctx, p); err != nil {
return AddProductResult{}, fmt.Errorf("save product: %w", err)
}
return AddProductResult{ID: string(id)}, nil
}
// application/query_service.go
package application
import "context"
type QueryService struct{ reader ProductReader }
func NewQueryService(r ProductReader) *QueryService { return &QueryService{reader: r} }
func (s *QueryService) ListProducts(ctx context.Context) ([]ProductView, error) {
products, err := s.reader.List(ctx)
if err != nil { return nil, err }
views := make([]ProductView, 0, len(products))
for _, p := range products { views = append(views, viewOf(p)) }
return views, nil
}
One map[ID]Product behind a sync.RWMutex. The same instance will be
wrapped under both ProductWriter and ProductReader in main.go.
// adapters/out/memory/products.go
package memory
import (
"context"
"sort"
"sync"
"cmd/product-management/domain/product"
)
type ProductRepository struct {
mu sync.RWMutex
store map[product.ID]product.Product
}
func NewProductRepository() *ProductRepository {
return &ProductRepository{store: make(map[product.ID]product.Product)}
}
func (r *ProductRepository) Save(ctx context.Context, p product.Product) error {
r.mu.Lock()
defer r.mu.Unlock()
if _, exists := r.store[p.ID()]; exists {
return product.ErrAlreadyExists
}
r.store[p.ID()] = p
return nil
}
func (r *ProductRepository) List(ctx context.Context) ([]product.Product, error) {
r.mu.RLock()
out := make([]product.Product, 0, len(r.store))
for _, p := range r.store { out = append(out, p) }
r.mu.RUnlock()
sort.Slice(out, func(i, j int) bool { return out[i].Name() < out[j].Name() })
return out, nil
}
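A cheap safety net the article's code doesn't show is a compile-time assertion that a concrete type still satisfies its port; if a method signature drifts, the build fails immediately. In the real project it would read `var _ application.ProductWriter = (*memory.ProductRepository)(nil)`. A self-contained sketch of the idiom (Writer and memStore are illustrative):

```go
package main

import "fmt"

type Writer interface {
	Save(s string) error
}

type memStore struct{ data []string }

func (m *memStore) Save(s string) error {
	m.data = append(m.data, s)
	return nil
}

// Compile-time check: this line refuses to build
// if *memStore stops satisfying Writer.
var _ Writer = (*memStore)(nil)

func main() {
	m := &memStore{}
	_ = m.Save("x")
	fmt.Println("stored:", len(m.data))
}
```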
Two handlers that decode JSON and call the application services. We use net/http's
built-in ServeMux here (it supports method-prefixed patterns since Go 1.22).
// adapters/in/http/handler.go
package http
import (
"log/slog"
"net/http"
"cmd/product-management/application"
)
type Handler struct {
commands *application.CommandService
queries *application.QueryService
logger *slog.Logger
}
func NewHandler(c *application.CommandService, q *application.QueryService, l *slog.Logger) *Handler {
return &Handler{commands: c, queries: q, logger: l}
}
func (h *Handler) Routes() http.Handler {
mux := http.NewServeMux()
mux.HandleFunc("GET /products", h.listProducts)
mux.HandleFunc("POST /products", h.addProduct)
return mux
}
// adapters/in/http/product_handler.go
package http
import (
"encoding/json"
"errors"
"log/slog"
"net/http"
"cmd/product-management/application"
"cmd/product-management/domain/product"
)
func (h *Handler) listProducts(w http.ResponseWriter, r *http.Request) {
views, err := h.queries.ListProducts(r.Context())
if err != nil {
h.logger.ErrorContext(r.Context(), "list products failed", slog.Any("error", err))
writeError(w, http.StatusInternalServerError, "failed to list products")
return
}
writeJSON(w, http.StatusOK, map[string]any{"products": views})
}
type addReq struct { Name string `json:"name"`; Price int64 `json:"price"` }
func (h *Handler) addProduct(w http.ResponseWriter, r *http.Request) {
var req addReq
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
writeError(w, http.StatusBadRequest, "invalid JSON body")
return
}
res, err := h.commands.AddProduct(r.Context(), application.AddProductCommand{Name: req.Name, Price: req.Price})
if err != nil {
switch {
case errors.Is(err, product.ErrEmptyName),
errors.Is(err, product.ErrInvalidPrice),
errors.Is(err, product.ErrAlreadyExists):
writeError(w, http.StatusBadRequest, err.Error())
default:
writeError(w, http.StatusInternalServerError, "failed to add product")
}
return
}
writeJSON(w, http.StatusCreated, map[string]string{"id": res.ID})
}
The composition root (main.go)
main.go is the only place that knows about both interfaces and concrete adapters.
It picks the implementations and wires them together.
package main
import (
"context"
"crypto/rand"
"encoding/hex"
"errors"
"log/slog"
"net/http"
"os"
"os/signal"
"syscall"
"time"
httpadapter "cmd/product-management/adapters/in/http"
"cmd/product-management/adapters/out/memory"
"cmd/product-management/application"
"cmd/product-management/domain/product"
)
type randomIDs struct{}
func (randomIDs) NewID() product.ID {
var b [16]byte
if _, err := rand.Read(b[:]); err != nil {
return product.ID(time.Now().UTC().Format("20060102T150405.000000000"))
}
return product.ID(hex.EncodeToString(b[:]))
}
func main() {
logger := slog.New(slog.NewTextHandler(os.Stdout, nil))
repo := memory.NewProductRepository()
commands := application.NewCommandService(repo, randomIDs{})
queries := application.NewQueryService(repo)
handler := httpadapter.NewHandler(commands, queries, logger).Routes()
server := &http.Server{Addr: ":8080", Handler: handler, ReadHeaderTimeout: 5 * time.Second}
// graceful shutdown on SIGINT/SIGTERM
stop := make(chan os.Signal, 1)
signal.Notify(stop, os.Interrupt, syscall.SIGTERM)
go func() {
if err := server.ListenAndServe(); err != nil && !errors.Is(err, http.ErrServerClosed) {
logger.Error("http server error", slog.Any("error", err))
os.Exit(1)
}
}()
<-stop
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
_ = server.Shutdown(ctx)
}
go build ./...
go vet ./...
go run .
# in another terminal
curl -X POST -H 'Content-Type: application/json' \
-d '{"name":"Car Wipers","price":1999}' \
http://localhost:8080/products
# => 201 {"id":"..."}
curl http://localhost:8080/products
# => 200 {"products":[{"id":"...","name":"Car Wipers","price":1999}]}
go.mod
At this point the entire go.mod is just three lines:
module cmd/product-management
go 1.22
Why? Because the project doesn't import anything outside the Go standard library,
so there's nothing for go.mod to declare. Every import statement in the
project resolves to either (a) a stdlib package, or (b) a package inside this same module.
Stdlib-only (no require line, ever)
| Import | Used by | Why |
|---|---|---|
| net/http | HTTP adapter, main.go | web server + the ServeMux router |
| encoding/json | HTTP adapter | request/response (de)serialisation |
| log/slog | HTTP adapter, main.go | structured logging |
| context | everywhere | request-scoped cancellation |
| errors | HTTP adapter, domain, main.go | errors.Is, sentinel errors |
| fmt | application, adapters | fmt.Errorf wrapping |
| strings | domain | TrimSpace for name validation |
| sort | memory adapter | stable list ordering |
| sync | memory adapter | RWMutex around the map |
| crypto/rand, encoding/hex | main.go | random hex IDs |
| time | main.go | shutdown timeout, ID fallback |
| os, os/signal, syscall | main.go | graceful shutdown on SIGINT/SIGTERM |
The module's own internal packages:
- cmd/product-management/domain/product
- cmd/product-management/application
- cmd/product-management/adapters/in/http
- cmd/product-management/adapters/out/memory
The original go.mod we started with had gin plus a long list of
// indirect entries (bytedance/sonic, quic-go,
mongo-driver, etc. - all of them gin's transitive deps). Once we replaced gin with
net/http from the standard library, every one of those became unreachable from the
import graph. go mod tidy would have removed them automatically.
For each of the things you'd typically reach for in a real project, the standard library has a credible answer:
| What you'd typically reach for | Stdlib equivalent we used instead |
|---|---|
| gin / chi / echo for routing | net/http ServeMux (Go 1.22+ supports "GET /products" patterns) |
| zerolog / zap for logging | log/slog (added in Go 1.21) |
| google/uuid for IDs | crypto/rand + encoding/hex |
This is why the go.mod stayed empty - by design, not by accident.
The moment we want a real DB or a richer router, we'd add a require line and run
go mod tidy.
Stdlib is great for clarity, but most real services prefer ecosystem libraries:
- net/http ServeMux -> github.com/gin-gonic/gin
- log/slog -> github.com/rs/zerolog
- crypto/rand + encoding/hex -> github.com/google/uuid
The beautiful thing is that only the HTTP adapter and main.go change.
Domain, application ports, and the in-memory repo are byte-for-byte identical - that's the architecture
earning its keep.
go get github.com/gin-gonic/gin@latest
go get github.com/google/uuid@latest
go get github.com/rs/zerolog@latest
go mod tidy
The HTTP handler now uses gin and zerolog:
// adapters/in/http/handler.go (after migration)
func (h *Handler) Routes() http.Handler {
engine := gin.New()
engine.Use(gin.Recovery())
engine.HandleMethodNotAllowed = true // gin returns 404 by default for wrong methods
engine.GET("/products", h.listProducts)
engine.POST("/products", h.addProduct)
return engine
}
main.go swaps in uuid and zerolog:
type uuidIDs struct{}
func (uuidIDs) NewID() product.ID { return product.ID(uuid.NewString()) }
func main() {
gin.SetMode(gin.ReleaseMode)
logger := zerolog.New(os.Stdout).Level(zerolog.InfoLevel).With().Timestamp().Logger()
// ...
}
After running go mod tidy, go.mod now declares gin,
google/uuid, and rs/zerolog as direct dependencies, plus the transitive
ones gin pulls in (bytedance/sonic, go-playground/validator, etc.) as
// indirect.
When you run go run, go build, or go get, the Go command
looks at the go directive at the top of go.mod:
module cmd/product-management
go 1.22
That line says: the minimum Go version this module needs is 1.22. Since Go 1.21, the
toolchain has a feature called automatic toolchain switching: if your locally
installed Go is older than what go.mod requires, Go will quietly download a newer
toolchain on the fly.
That convenience can bite you in two situations:
- Sandboxed / offline environments: if go.mod says go 1.25.0
but you have Go 1.24.5 installed and no network access, the build fails with a cryptic error
rather than just using your local Go.
- Reproducibility: different machines can silently compile the same code with different toolchains.
I hit the first case while writing this post. The original go.mod had:
go 1.25.0
but the available Go was 1.24.5, so the build failed with:
go: toolchain upgrade needed to resolve github.com/gin-gonic/gin
go: github.com/gin-gonic/gin@v1.12.0 requires go >= 1.25.0
GOTOOLCHAIN environment variable
Setting GOTOOLCHAIN=local tells Go to only use whatever is installed locally
and never to download a newer toolchain:
GOTOOLCHAIN=local go build ./...
GOTOOLCHAIN=local go run .
If a dependency requires a Go version newer than your local one, the build will simply fail rather than auto-fetching - which is exactly what you want in CI, sandboxes, or when you're debugging "why did this build differently on my machine?".
toolchain directive in go.mod
When go get pulls in a dependency that needs a newer Go than your current
go directive allows, it does two things to go.mod:
- bumps the go directive up to the dependency's minimum;
- adds a toolchain line pinning the toolchain that was used.
For example, after go get go.uber.org/fx@latest on our project,
go.mod went from this:
module cmd/product-management
go 1.22
...to this:
module cmd/product-management
go 1.23
toolchain go1.24.5
What do those two lines actually mean?
- go 1.23 - the minimum Go version anyone needs to compile this
module. Some dependency we just pulled in requires at least 1.23.
- toolchain go1.24.5 - the suggested toolchain that was used when
we last built. If a teammate has only Go 1.21 installed, Go will try to fetch
go1.24.5 for them automatically (unless they set GOTOOLCHAIN=local).
Why gin v1.10.1 instead of @latest
At the time of writing, the latest gin was v1.12.0, and it requires
Go >= 1.25.0. Our local Go was 1.24.5 and we wanted to keep the local toolchain. So instead of
triggering an auto-download, I pinned to gin v1.10.1, which supports Go >= 1.20:
go get github.com/gin-gonic/gin@v1.10.1
Once your local Go is upgraded to 1.25+, you can switch back to
go get github.com/gin-gonic/gin@latest and go.mod will auto-bump.
| Situation | What to do |
|---|---|
| Building in a sandboxed / offline / CI environment | GOTOOLCHAIN=local |
| You want strict reproducibility over which Go is used | GOTOOLCHAIN=local |
| You want Go to fetch the right version automatically | leave GOTOOLCHAIN unset (default is auto) |
| Adding a dep that requires a newer Go than yours | pin to an older version of the dep, or upgrade your local Go |
| go.mod says e.g. go 1.25.0 but your local Go is older | upgrade Go, lower the go directive, or set GOTOOLCHAIN=local and pick deps that don't need newer Go |
TL;DR: the go line in go.mod is the
minimum Go version your module needs; the toolchain line is the
preferred toolchain to build it with. GOTOOLCHAIN=local opts you out
of automatic toolchain downloads - useful in CI, sandboxes, or anywhere you want full control
over which Go binary actually compiles your code.
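For reference, these are the commands involved (assuming a Unix-like shell; go env -w persists the setting for all future go invocations):

```shell
# One-off: use only the locally installed Go; the build fails instead of
# auto-downloading the toolchain named in go.mod.
GOTOOLCHAIN=local go build ./...

# Persist the choice:
go env -w GOTOOLCHAIN=local

# Restore the default auto-download behaviour:
go env -w GOTOOLCHAIN=auto
```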
uber/fx for dependency injection
Up to now, our main.go manually wired every constructor:
repo := memory.NewProductRepository()
commands := application.NewCommandService(repo, uuidIDs{})
queries := application.NewQueryService(repo)
handler := httpadapter.NewHandler(commands, queries, logger).Routes()
server := &http.Server{Addr: ":8080", Handler: handler}
This works fine for four components, but as the graph grows you have to keep track of construction
order by hand, signal handling and graceful shutdown end up sprinkled across main.go,
and writing integration tests means re-implementing the wiring with stubs.
What is uber/fx?
uber/fx is a dependency injection framework. You hand it a bag of constructors via
fx.Provide(...); fx introspects each constructor's parameter and return
types, builds the dependency graph, and runs constructors in the right order. Each value is built
at most once and reused everywhere it's needed. On top of that, fx provides a
Lifecycle that lets you register OnStart and OnStop hooks
for clean startup and graceful shutdown, an fx.Shutdowner for triggering shutdown
programmatically, and automatic SIGINT / SIGTERM handling via app.Run().
- fx.Provide(...) - register one or more constructors. Purely declarative; nothing runs yet.
- fx.Invoke(...) - register a function whose execution is the goal. fx will build everything that function needs (transitively), then call it.
- fx.Lifecycle - injected automatically. Use lc.Append(fx.Hook{OnStart, OnStop}) to plug into the app's start / stop sequence.
- fx.Shutdowner - injected automatically. Call shutdowner.Shutdown(fx.ExitCode(1)) to trigger graceful shutdown from anywhere in the graph.
- fx.WithLogger(...) - lets you replace fx's default startup logger.
- fx.Replace / fx.Decorate - swap or wrap a provided value (the testing seam).

The appOptions / newApp / main pattern
This is the same pattern used in larger Uber-style services and matches what we use in production
on similar projects. main() shrinks to a single line:
// main.go - with uber/fx
package main
import (
"context"
"errors"
"net/http"
"os"
"time"
httpadapter "cmd/product-management/adapters/in/http"
"cmd/product-management/adapters/out/memory"
"cmd/product-management/application"
"cmd/product-management/domain/product"
"github.com/gin-gonic/gin"
"github.com/google/uuid"
"github.com/rs/zerolog"
"go.uber.org/fx"
"go.uber.org/fx/fxevent"
)
const (
httpAddr = ":8080"
shutdownTimeout = 5 * time.Second
)
type uuidIDs struct{}
func (uuidIDs) NewID() product.ID { return product.ID(uuid.NewString()) }
func newLogger() zerolog.Logger {
return zerolog.New(os.Stdout).Level(zerolog.InfoLevel).With().Timestamp().Logger()
}
func asIDGenerator() application.IDGenerator { return uuidIDs{} }
func asWriter(r *memory.ProductRepository) application.ProductWriter { return r }
func asReader(r *memory.ProductRepository) application.ProductReader { return r }
func newHTTPHandler(h *httpadapter.Handler) http.Handler { return h.Routes() }
func newHTTPServer(handler http.Handler) *http.Server {
return &http.Server{Addr: httpAddr, Handler: handler, ReadHeaderTimeout: 5 * time.Second}
}
// registerHTTPServer wires the *http.Server into fx's lifecycle.
func registerHTTPServer(
lc fx.Lifecycle,
logger zerolog.Logger,
server *http.Server,
shutdowner fx.Shutdowner,
) {
lc.Append(fx.Hook{
OnStart: func(ctx context.Context) error {
go func() {
logger.Info().Str("addr", server.Addr).Msg("http server starting")
if err := server.ListenAndServe(); err != nil && !errors.Is(err, http.ErrServerClosed) {
logger.Error().Err(err).Msg("http server error")
_ = shutdowner.Shutdown(fx.ExitCode(1))
}
}()
return nil
},
OnStop: func(ctx context.Context) error {
shutdownCtx, cancel := context.WithTimeout(ctx, shutdownTimeout)
defer cancel()
return server.Shutdown(shutdownCtx)
},
})
}
// appOptions returns the fx options describing the application graph.
// Splitting it out lets tests inject overrides via fx.Replace / fx.Decorate
// without redefining the whole graph.
func appOptions(overrides ...fx.Option) []fx.Option {
gin.SetMode(gin.ReleaseMode)
opts := []fx.Option{
// Silence fx's own startup chatter; in production a service
// would bridge fxevent into its main logger.
fx.WithLogger(func() fxevent.Logger { return fxevent.NopLogger }),
fx.Provide(
// Infrastructure
newLogger,
asIDGenerator,
// Outbound adapter: in-memory repository, exposed under both ports.
memory.NewProductRepository,
asWriter,
asReader,
// Application services (use cases)
application.NewCommandService,
application.NewQueryService,
// Inbound HTTP adapter
httpadapter.NewHandler,
newHTTPHandler,
newHTTPServer,
),
fx.Invoke(registerHTTPServer),
}
return append(opts, overrides...)
}
func newApp(overrides ...fx.Option) *fx.App { return fx.New(appOptions(overrides...)...) }
func main() { newApp().Run() }
The fx.Provide(...) block lists every constructor. fx looks at each one's
signature: application.NewCommandService(ProductWriter, IDGenerator) tells fx
"to build a *CommandService, first build a ProductWriter and an
IDGenerator." It then walks the graph until everything is satisfied.
The two small adapter functions, asWriter and asReader,
expose the same *memory.ProductRepository instance under the two CQRS interfaces.
fx caches by exact type, so without these helpers it wouldn't know that the repo also
satisfies ProductWriter and ProductReader.
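The underlying Go fact is easy to demonstrate: one concrete type can satisfy several interfaces, and each adapter function gives the same value a distinct static type for fx to cache. A self-contained sketch with simplified ports (not the project's real definitions):

```go
package main

import "fmt"

// Simplified versions of the two CQRS ports.
type ProductWriter interface{ Save(name string) }
type ProductReader interface{ List() []string }

// One concrete repository satisfies both ports.
type ProductRepository struct{ items []string }

func (r *ProductRepository) Save(name string) { r.items = append(r.items, name) }
func (r *ProductRepository) List() []string   { return r.items }

// The adapter functions: same instance, two static types for fx's cache.
func asWriter(r *ProductRepository) ProductWriter { return r }
func asReader(r *ProductRepository) ProductReader { return r }

func main() {
	repo := &ProductRepository{}
	w, rd := asWriter(repo), asReader(repo)
	w.Save("Car Wipers")
	fmt.Println(rd.List()) // the reader sees the writer's data: same instance
}
```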
fx.Invoke(registerHTTPServer) kicks the whole graph off: nothing actually runs
until something is invoked. fx supplies fx.Lifecycle and
fx.Shutdowner automatically; everything else (logger, server) comes from the
providers above.
app.Run() builds the graph, runs every OnStart hook in dependency
order, blocks on SIGINT / SIGTERM, and finally runs every OnStop hook in
reverse order.
About fx.WithLogger(... fxevent.NopLogger): by default fx prints
its own startup banner -
[Fx] PROVIDE memory.NewProductRepository,
[Fx] INVOKE registerHTTPServer, and so on. Useful when you're learning fx, but
noisy in production. Setting the logger to fxevent.NopLogger silences that stream.
A real production service usually writes a small fxevent.Logger adapter that
converts fx's structured events into calls on the main logger (e.g. zerolog), so they appear
in the same JSON log stream as everything else.
For four components, manual wiring is fine. The argument for fx grows with the graph:
- func main() { newApp().Run() } is the whole entry point. Add a Mongo client, a Kafka producer, an OpenTelemetry exporter, a reconciliation worker, a gRPC server - they're each one or two lines added to fx.Provide, and fx figures out construction order.
- Integration tests can call newApp(fx.Replace(myFakeClock, myInProcListener)).Start(ctx). Same graph, swapped pieces, no parallel wiring code.
- Each resource registers its own start / stop steps via fx.Hook. OnStop runs in reverse order automatically, so a Mongo client constructed before the worker is also disconnected after the worker stops.
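That reverse ordering is worth internalising; this stdlib-only toy (not fx's actual code) shows why stacking hooks and unwinding them backwards keeps a dependency alive until its dependents are gone:

```go
package main

import "fmt"

type hook struct {
	name   string
	onStop func()
}

// stopOrder mimics fx: OnStop hooks run in reverse registration order,
// so whatever was built first is torn down last.
func stopOrder(hooks []hook) []string {
	var order []string
	for i := len(hooks) - 1; i >= 0; i-- {
		hooks[i].onStop()
		order = append(order, hooks[i].name)
	}
	return order
}

func main() {
	var hooks []hook
	for _, name := range []string{"mongo", "worker", "http"} { // registration order
		n := name
		hooks = append(hooks, hook{name: n, onStop: func() { fmt.Println("stop", n) }})
	}
	fmt.Println(stopOrder(hooks)) // prints: [http worker mongo]
}
```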
The endpoints behave the same as before, but the wiring is now declarative:
# start
go run .
# {"level":"info","addr":":8080","time":"...","message":"http server starting"}
# add a product
curl -X POST -H 'Content-Type: application/json' \
-d '{"name":"Car Wipers","price":1999}' \
http://localhost:8080/products
# 201 {"id":"7d8d17cf-909c-4173-bd90-4e777acc60aa"} <- a real UUID
# list products
curl http://localhost:8080/products
# 200 {"products":[{"id":"...","name":"Car Wipers","price":1999}]}
# graceful shutdown
# Ctrl+C
# {"level":"info","time":"...","message":"http server stopped"}
Starting from a 22-line main.go with hard-coded data, we ended up with a service that
is:
- loosely coupled to its infrastructure: swapping net/http for gin and log/slog for zerolog only touched two files;
- declaratively wired: adding a new dependency is one or two lines in fx.Provide.

That's the whole point of investing in this kind of structure: the small example you saw here is the same shape as a much larger production service. The cost is a bit more boilerplate up front; the payoff is that your business logic stays clean as the surrounding infrastructure inevitably grows.