12/04/2026 18:16

EP.102 Using Goroutines and Worker Pool for Managing Concurrent Connections
#WebSocket
#Worker Pool
#Golang
#Go
#Goroutines
In a WebSocket system that needs to support a large number of users, effectively managing Goroutines and a Worker Pool is key to keeping your system fast, stable, and resilient under heavy load.
This article will walk you through the core concepts along with practical Go code examples you can apply in production.
🌀 1. Handling Massive Connections with Goroutines
When users connect to your WebSocket server, each connection should be handled in its own Goroutine to:
- Prevent blocking the main thread
- Improve support for concurrent connections
✅ Example: Using a Goroutine per WebSocket connection
package main

import (
	"fmt"
	"net/http"

	"github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{
	CheckOrigin: func(r *http.Request) bool { return true },
}

func handleConnection(w http.ResponseWriter, r *http.Request) {
	conn, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		// Upgrade has already written an HTTP error response, so just log.
		fmt.Println("Upgrade error:", err)
		return
	}
	defer conn.Close()

	// net/http already runs every handler in its own Goroutine, so this
	// read loop serves one connection without blocking any other.
	for {
		_, msg, err := conn.ReadMessage()
		if err != nil {
			fmt.Println("Connection closed:", err)
			break
		}
		fmt.Println("Received:", string(msg))
	}
}

func main() {
	http.HandleFunc("/ws", handleConnection)
	fmt.Println("WebSocket Server running on :8080")
	if err := http.ListenAndServe(":8080", nil); err != nil {
		fmt.Println("Server error:", err)
	}
}
🧱 2. Use a Worker Pool to Control Goroutines
While Goroutines are lightweight, spawning an unbounded number of them can exhaust memory and crash the server.
Using a Worker Pool helps you:
- Save system resources
- Control the exact number of Goroutines
- Process many tasks in batch fashion
✅ Example: Worker Pool consuming messages from a job queue
package main

import "fmt"

type Job struct {
	Message string
}

func worker(id int, jobs <-chan Job, results chan<- string) {
	for j := range jobs {
		fmt.Printf("Worker %d processing: %s\n", id, j.Message)
		results <- fmt.Sprintf("Done: %s", j.Message)
	}
}

func main() {
	jobs := make(chan Job, 100)
	results := make(chan string, 100)

	// Start 5 workers
	for w := 1; w <= 5; w++ {
		go worker(w, jobs, results)
	}

	// Push 20 jobs
	for i := 0; i < 20; i++ {
		jobs <- Job{Message: fmt.Sprintf("Message %d", i)}
	}
	close(jobs)

	// Collect results
	for i := 0; i < 20; i++ {
		fmt.Println(<-results)
	}
}
⚙️ 3. Best Practices
✅ How to Combine a Worker Pool with WebSocket
- One Goroutine per WebSocket connection
- When a message is received → push it into a jobs channel
- Let a worker handle the processing and respond to the client
🛡️ Things to Watch Out For
- Don’t let Goroutines leak → use recover() to survive panics, and make sure every Goroutine has a clear exit path
- Limit the number of Goroutines using semaphores or worker pools
- Use buffered channels to handle traffic spikes
🔍 Monitoring Tips
- Use runtime.NumGoroutine() to track the current Goroutine count
- Use pprof to profile memory and performance
- Monitor key metrics: connection count, job queue length, success/failure rate
🎯 Challenge for You
Try building a WebSocket Server that:
✅ Supports 1,000 concurrent users
✅ Uses a Worker Pool for processing messages
✅ Monitors Goroutines, memory, and throughput
✅ Is tested with tools like Locust or Artillery
🔚 Summary
Using Goroutines + Worker Pool:
- Improves concurrency handling
- Prevents resource exhaustion
- Is a proven, production-ready pattern
Your WebSocket Server will be stable, efficient, and scalable under heavy load.
🔜 Next EP:
EP.103 – Reducing Latency with Binary Protocol and Protobuf
Learn how to optimize data transfer in WebSocket communication with binary protocols and Google's Protobuf for faster real-time performance. See you soon! 🚀
Read more
🔵 Facebook: Superdev Academy
🔴 YouTube: Superdev Academy
📸 Instagram: Superdev Academy
🎬 TikTok: https://www.tiktok.com/@superdevacademy?lang=th-TH
🌐 Website: https://www.superdevacademy.com/en