Building Resilient Microservices with Go
A deep dive into creating fault-tolerant, scalable microservices using Go, Docker, and Kubernetes
After years of working with monolithic applications, I decided to take the plunge into microservices architecture. The journey has been both challenging and rewarding, teaching me valuable lessons about distributed systems, fault tolerance, and the importance of proper observability.
Why Microservices?
The decision to break down our monolithic application wasn’t taken lightly. We were facing several challenges:
- Scalability bottlenecks: different parts of our app had different scaling requirements
- Technology diversity: we wanted to use the right tool for each job
- Team autonomy: multiple teams needed to work independently
- Deployment complexity: a single bug could bring down the entire system
The Go Advantage
Go has become my language of choice for microservices due to several key advantages:
Performance & Concurrency
package main

import (
    "context"
    "net/http"
    "sync"
)

// Request and Result are simplified placeholders so the example compiles;
// the real payload types live elsewhere in the service.
type Request struct {
    ID string
}

type Result struct {
    Error error
}

type Service struct {
    client *http.Client
    wg     sync.WaitGroup
}

// processRequest does the actual work for a single request (details elided).
func (s *Service) processRequest(ctx context.Context, r Request) Result {
    return Result{}
}

func (s *Service) ProcessRequests(ctx context.Context, requests []Request) error {
    // Buffered channel so workers never block, even if we return early.
    results := make(chan Result, len(requests))

    for _, req := range requests {
        s.wg.Add(1)
        go func(r Request) {
            defer s.wg.Done()
            result := s.processRequest(ctx, r)
            select {
            case results <- result:
            case <-ctx.Done():
                return
            }
        }(req)
    }

    // Close the results channel once all workers have finished.
    go func() {
        s.wg.Wait()
        close(results)
    }()

    // Collect results, failing fast on the first error.
    for result := range results {
        if result.Error != nil {
            return result.Error
        }
    }
    return nil
}
Built-in HTTP Server
package main

import (
    "log"
    "net/http"
    "time"

    "github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
    mux := http.NewServeMux()

    // Health check endpoint
    mux.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
        w.WriteHeader(http.StatusOK)
        w.Write([]byte("OK"))
    })

    // Metrics endpoint
    mux.Handle("/metrics", promhttp.Handler())

    // API endpoints (NewAPIHandler returns the service's routes, defined elsewhere)
    api := NewAPIHandler()
    mux.Handle("/api/", http.StripPrefix("/api", api))

    server := &http.Server{
        Addr:         ":8080",
        Handler:      mux,
        ReadTimeout:  15 * time.Second,
        WriteTimeout: 15 * time.Second,
        IdleTimeout:  60 * time.Second,
    }

    log.Fatal(server.ListenAndServe())
}
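log.Fatal(server.ListenAndServe()) is fine for a demo, but in Kubernetes every rollout sends the pod a SIGTERM, and a resilient service should drain in-flight requests before exiting. Here's a minimal graceful-shutdown sketch; the 10-second drain window is an assumption you'd tune per service:

package main

import (
    "context"
    "log"
    "net/http"
    "os"
    "os/signal"
    "syscall"
    "time"
)

func main() {
    server := &http.Server{Addr: ":8080", Handler: http.NewServeMux()}

    // Start serving in the background so main can wait for a shutdown signal.
    go func() {
        if err := server.ListenAndServe(); err != nil && err != http.ErrServerClosed {
            log.Fatalf("server error: %v", err)
        }
    }()

    // Kubernetes sends SIGTERM before killing the pod; SIGINT covers local Ctrl-C.
    stop := make(chan os.Signal, 1)
    signal.Notify(stop, os.Interrupt, syscall.SIGTERM)
    <-stop

    // Give in-flight requests a bounded window to finish (10s is illustrative).
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()
    if err := server.Shutdown(ctx); err != nil {
        log.Printf("graceful shutdown failed: %v", err)
    }
}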
Docker & Containerization
Containerizing Go applications is straightforward, but there are some best practices to follow:
Multi-stage Dockerfile
# Build stage
FROM golang:1.21-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o main .
# Runtime stage
FROM alpine:latest
RUN apk --no-cache add ca-certificates tzdata
WORKDIR /root/
COPY --from=builder /app/main .
COPY --from=builder /app/config ./config
EXPOSE 8080
CMD ["./main"]
Docker Compose for Local Development
version: '3.8'

services:
  user-service:
    build: ./user-service
    ports:
      - "8081:8080"
    environment:
      - DATABASE_URL=postgres://user:pass@db:5432/users
      - REDIS_URL=redis://redis:6379
    depends_on:
      - db
      - redis

  order-service:
    build: ./order-service
    ports:
      - "8082:8080"
    environment:
      - DATABASE_URL=postgres://user:pass@db:5432/orders
      - USER_SERVICE_URL=http://user-service:8080
    depends_on:
      - db
      - user-service

  db:
    image: postgres:15
    environment:
      POSTGRES_DB: microservices
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
    volumes:
      - postgres_data:/var/lib/postgresql/data

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"

volumes:
  postgres_data:
Service Communication Patterns
Synchronous Communication with Circuit Breakers
package main

import (
    "context"
    "fmt"
    "net/http"
    "time"

    "github.com/sony/gobreaker"
)

type UserServiceClient struct {
    client  *http.Client
    breaker *gobreaker.CircuitBreaker
    baseURL string
}

func NewUserServiceClient(baseURL string) *UserServiceClient {
    return &UserServiceClient{
        client: &http.Client{
            Timeout: 5 * time.Second,
        },
        baseURL: baseURL,
        breaker: gobreaker.NewCircuitBreaker(gobreaker.Settings{
            Name:        "user-service",
            MaxRequests: 3,                // requests allowed through while half-open
            Interval:    10 * time.Second, // how often closed-state counts are reset
            Timeout:     30 * time.Second, // how long the breaker stays open before probing
            ReadyToTrip: func(counts gobreaker.Counts) bool {
                return counts.ConsecutiveFailures >= 3
            },
        }),
    }
}

func (c *UserServiceClient) GetUser(ctx context.Context, userID string) (*User, error) {
    result, err := c.breaker.Execute(func() (interface{}, error) {
        req, err := http.NewRequestWithContext(ctx, "GET",
            fmt.Sprintf("%s/users/%s", c.baseURL, userID), nil)
        if err != nil {
            return nil, err
        }

        resp, err := c.client.Do(req)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()

        if resp.StatusCode != http.StatusOK {
            return nil, fmt.Errorf("user service returned status %d", resp.StatusCode)
        }

        // Parse response...
        return &User{}, nil
    })
    if err != nil {
        return nil, err
    }
    return result.(*User), nil
}
Asynchronous Communication with Message Queues
package main

import (
    "context"
    "encoding/json"
    "time"

    "github.com/rabbitmq/amqp091-go"
)

// UserCreatedEvent is the payload published when a new user is created.
type UserCreatedEvent struct {
    UserID    string    `json:"user_id"`
    Email     string    `json:"email"`
    CreatedAt time.Time `json:"created_at"`
}

type EventPublisher struct {
    conn    *amqp091.Connection
    channel *amqp091.Channel
}

func NewEventPublisher(amqpURL string) (*EventPublisher, error) {
    conn, err := amqp091.Dial(amqpURL)
    if err != nil {
        return nil, err
    }

    ch, err := conn.Channel()
    if err != nil {
        return nil, err
    }

    return &EventPublisher{
        conn:    conn,
        channel: ch,
    }, nil
}

func (ep *EventPublisher) PublishUserCreated(ctx context.Context, user *User) error {
    event := UserCreatedEvent{
        UserID:    user.ID,
        Email:     user.Email,
        CreatedAt: user.CreatedAt,
    }

    body, err := json.Marshal(event)
    if err != nil {
        return err
    }

    // Publish to the "user.events" exchange with routing key "user.created".
    return ep.channel.PublishWithContext(
        ctx,
        "user.events",  // exchange
        "user.created", // routing key
        false,          // mandatory
        false,          // immediate
        amqp091.Publishing{
            ContentType: "application/json",
            Body:        body,
        },
    )
}
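Publishing is only half of the pattern; some other service has to consume these events. Here's a minimal consumer sketch using the same amqp091 library and reusing the UserCreatedEvent type from above. The queue name and the "create a local record" handler are assumptions for illustration, not part of the original design:

package main

import (
    "encoding/json"
    "log"

    "github.com/rabbitmq/amqp091-go"
)

// consumeUserCreated drains a queue assumed to be bound to the "user.events" exchange.
// The queue name "order-service.user-created" is purely illustrative.
func consumeUserCreated(ch *amqp091.Channel) error {
    msgs, err := ch.Consume(
        "order-service.user-created", // queue
        "",                           // consumer tag (auto-generated)
        false,                        // autoAck: false so we can ack/nack explicitly
        false,                        // exclusive
        false,                        // noLocal
        false,                        // noWait
        nil,                          // args
    )
    if err != nil {
        return err
    }

    for msg := range msgs {
        var event UserCreatedEvent
        if err := json.Unmarshal(msg.Body, &event); err != nil {
            log.Printf("dropping malformed event: %v", err)
            msg.Nack(false, false) // don't requeue unparseable messages
            continue
        }

        // Handle the event (e.g., create a local customer record), then ack.
        log.Printf("user created: %s (%s)", event.UserID, event.Email)
        msg.Ack(false)
    }
    return nil
}

Explicit acks matter here: if the handler crashes mid-message, the broker redelivers instead of silently losing the event.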
Observability & Monitoring
Structured Logging
package main

import (
    "context"
    "log/slog"
    "os"
)

func main() {
    logger := slog.New(slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
        Level: slog.LevelInfo,
    }))
    slog.SetDefault(logger)

    // Usage
    ctx := context.Background()
    slog.InfoContext(ctx, "user created",
        "user_id", "123",
        "email", "user@example.com",
        "duration_ms", 45,
    )
}
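In practice most of this logging happens per request, so it's worth emitting one structured line per request from a small middleware. The sketch below, including its field names, is an assumption about layout rather than a prescribed format:

package main

import (
    "log/slog"
    "net/http"
    "time"
)

// loggingMiddleware emits one structured log line per handled request.
// Field names (method, path, duration_ms) are illustrative choices.
func loggingMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        start := time.Now()
        next.ServeHTTP(w, r)
        slog.InfoContext(r.Context(), "request handled",
            "method", r.Method,
            "path", r.URL.Path,
            "duration_ms", time.Since(start).Milliseconds(),
        )
    })
}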
Metrics Collection
package main

import (
    "net/http"
    "strconv"
    "time"

    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promauto"
)

var (
    httpRequestsTotal = promauto.NewCounterVec(
        prometheus.CounterOpts{
            Name: "http_requests_total",
            Help: "Total number of HTTP requests",
        },
        []string{"method", "endpoint", "status_code"},
    )

    httpRequestDuration = promauto.NewHistogramVec(
        prometheus.HistogramOpts{
            Name:    "http_request_duration_seconds",
            Help:    "Duration of HTTP requests in seconds",
            Buckets: prometheus.DefBuckets,
        },
        []string{"method", "endpoint"},
    )
)

// responseWriter wraps http.ResponseWriter so the middleware can record the status code.
type responseWriter struct {
    http.ResponseWriter
    statusCode int
}

func (rw *responseWriter) WriteHeader(code int) {
    rw.statusCode = code
    rw.ResponseWriter.WriteHeader(code)
}

func metricsMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        start := time.Now()

        // Wrap ResponseWriter to capture the status code
        wrapped := &responseWriter{ResponseWriter: w, statusCode: http.StatusOK}
        next.ServeHTTP(wrapped, r)

        duration := time.Since(start).Seconds()

        httpRequestsTotal.WithLabelValues(
            r.Method,
            r.URL.Path,
            strconv.Itoa(wrapped.statusCode),
        ).Inc()

        httpRequestDuration.WithLabelValues(
            r.Method,
            r.URL.Path,
        ).Observe(duration)
    })
}
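For these counters to actually move, the middleware has to wrap the router from the HTTP server section. A minimal wiring sketch, reusing metricsMiddleware from above with the API handler registration elided:

package main

import (
    "log"
    "net/http"

    "github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
    mux := http.NewServeMux()

    // Expose the Prometheus scrape endpoint alongside the API routes.
    mux.Handle("/metrics", promhttp.Handler())
    // ... register API handlers on mux ...

    server := &http.Server{
        Addr: ":8080",
        // Wrap the whole router so every request is counted and timed.
        Handler: metricsMiddleware(mux),
    }
    log.Fatal(server.ListenAndServe())
}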
Testing Strategies
Unit Testing
package main

import (
    "context"
    "testing"

    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/mock"
)

type MockUserService struct {
    mock.Mock
}

func (m *MockUserService) GetUser(ctx context.Context, userID string) (*User, error) {
    args := m.Called(ctx, userID)
    return args.Get(0).(*User), args.Error(1)
}

func TestOrderService_CreateOrder(t *testing.T) {
    mockUserService := new(MockUserService)
    orderService := NewOrderService(mockUserService)

    // Setup mock expectations
    mockUserService.On("GetUser", mock.Anything, "123").
        Return(&User{ID: "123", Email: "test@example.com"}, nil)

    // Test
    order, err := orderService.CreateOrder(context.Background(), "123", OrderRequest{
        Items: []OrderItem{{ProductID: "prod1", Quantity: 2}},
    })

    // Assertions
    assert.NoError(t, err)
    assert.NotNil(t, order)
    assert.Equal(t, "123", order.UserID)
    mockUserService.AssertExpectations(t)
}
Integration Testing
func TestUserServiceIntegration(t *testing.T) {
    // Setup test database
    db := setupTestDB(t)
    defer cleanupTestDB(t, db)

    // Setup test server
    server := httptest.NewServer(setupTestServer(db))
    defer server.Close()

    // Test user creation
    userData := map[string]interface{}{
        "email":    "test@example.com",
        "name":     "Test User",
        "password": "password123",
    }

    resp, err := http.Post(server.URL+"/users", "application/json",
        strings.NewReader(marshalJSON(t, userData)))
    require.NoError(t, err)
    require.Equal(t, http.StatusCreated, resp.StatusCode)

    // Verify user was created in database
    var user User
    err = db.QueryRow("SELECT id, email, name FROM users WHERE email = $1",
        "test@example.com").Scan(&user.ID, &user.Email, &user.Name)
    require.NoError(t, err)
    assert.Equal(t, "test@example.com", user.Email)
}
Lessons Learned
1. Start Simple
Don’t over-engineer from the beginning. Start with a monolith and extract services as needed.
2. Data Consistency
Choose the right consistency model for each use case. Not everything needs to be strongly consistent.
3. Observability First
Invest in monitoring, logging, and tracing from day one. You’ll thank yourself later.
4. Failure is Inevitable
Design for failure. Circuit breakers, retries, and graceful degradation are essential.
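The circuit breaker earlier covers the "stop hammering a failing dependency" side; retries with backoff cover transient failures. Here's a minimal sketch of the pattern; the attempt count and delays are assumptions, and in production you'd add jitter and only retry idempotent or safely repeatable calls:

package main

import (
    "context"
    "fmt"
    "time"
)

// retry runs fn up to maxAttempts times, doubling the delay between attempts.
// It stops early if the context is cancelled. The defaults callers pass in
// are illustrative, not tuned recommendations.
func retry(ctx context.Context, maxAttempts int, baseDelay time.Duration, fn func() error) error {
    var lastErr error
    delay := baseDelay
    for attempt := 1; attempt <= maxAttempts; attempt++ {
        if lastErr = fn(); lastErr == nil {
            return nil
        }
        if attempt == maxAttempts {
            break
        }
        select {
        case <-time.After(delay):
            delay *= 2 // exponential backoff
        case <-ctx.Done():
            return ctx.Err()
        }
    }
    return fmt.Errorf("after %d attempts: %w", maxAttempts, lastErr)
}

Combined with the circuit breaker, this keeps transient blips from becoming user-facing failures while still backing off when a dependency is genuinely down.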
5. Team Communication
Microservices require excellent communication between teams. Invest in documentation and API contracts.
The Results
After 6 months of microservices development:
- 50% reduction in deployment time
- 99.9% uptime across all services
- Independent scaling of different components
- Faster feature development with team autonomy
The journey to microservices isn’t easy, but with the right tools, patterns, and mindset, it can transform your application architecture for the better.
What’s your experience with microservices? Share your war stories in the comments or reach out on GitHub!