[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"academy-blogs-en-1-1-all-golang-ai-lab-docker-setup-guide-all--*":3,"academy-blog-translations-v8fnxcrvfcvy974":84},{"data":4,"page":71,"perPage":71,"totalItems":71,"totalPages":71},[5],{"alt":6,"collectionId":7,"collectionName":8,"content":9,"cover_image":10,"cover_image_path":11,"created":12,"created_by":13,"expand":14,"id":79,"keywords":80,"locale":52,"published_at":13,"scheduled_at":13,"school_blog":75,"short_description":81,"status":73,"title":82,"updated":83,"updated_by":13,"slug":76,"views":78},"How to set up Docker for a Golang AI Microservice using a Multi-stage Build","sclblg987654321","school_blog_translations","\u003Cp>Welcome back to \u003Cstrong>Golang The Series\u003C\u002Fstrong>! After shifting our mindset toward an AI-First architecture in the previous episode, it is now time to get our Infrastructure ready. Our goal today is to build a stable and scalable environment for the AI projects we will be diving into throughout this season.\u003C\u002Fp>\u003Cp>Having a solid infrastructure is like laying a strong foundation for a skyscraper. Since AI workloads are resource-intensive and libraries evolve rapidly, establishing a standardized toolset from the very beginning is crucial.\u003C\u002Fp>\u003Ch2>Why Go 1.2x + Docker?\u003C\u002Fh2>\u003Cp>In the world of AI development, where models and libraries change on a weekly basis, running code directly on your local machine (Native Host) often leads to the classic \u003Cstrong>\"It works on my machine\"\u003C\u002Fstrong> problem. This is why the pairing of Go and Docker has become our go-to standard.\u003C\u002Fp>\u003Cul>\u003Cli>\u003Cp>\u003Cstrong>Docker (Consistency &amp; Portability):\u003C\u002Fstrong> Docker ensures that your Development (Dev) and Production (Prod) environments are 100% identical. 
Whether your teammates are using a Mac (M4), Windows, or Linux, everyone works on the same containerized environment, effectively eliminating dependency conflicts.\u003C\u002Fp>\u003C\u002Fli>\u003Cli>\u003Cp>\u003Cstrong>Go 1.2x (Modern &amp; Secure):\u003C\u002Fstrong> I highly recommend using Go 1.22 or higher. It introduces a new, safer way of handling Loop Variables, which significantly reduces common bugs when using Goroutines for Data Processing. Additionally, it brings performance enhancements that are vital for memory management in complex AI tasks.\u003C\u002Fp>\u003C\u002Fli>\u003C\u002Ful>\u003Ch2>Preparing a Dockerfile for AI Microservices\u003C\u002Fh2>\u003Cp>We will utilize the Multi-stage Build technique to achieve the smallest possible Docker Image (Small &amp; Lean). This method separates the compilation environment from the runtime environment, making your cloud deployments faster and more secure than using bulky, single-stage images.\u003C\u002Fp>\u003Cp>\u003Cstrong>File: Dockerfile\u003C\u002Fstrong>\u003C\u002Fp>\u003Cpre>\u003Ccode># Stage 1: Build the Go binary (Equipped with full tools for compilation)\nFROM golang:1.22-alpine AS builder\n\nWORKDIR \u002Fapp\n\n# Copy dependency files (Using * in case go.sum hasn't been generated yet)\nCOPY go.mod go.sum* .\u002F\nRUN go mod download\n\nCOPY . 
.\n\n# Compile into a single static binary that runs independently of the OS\nRUN CGO_ENABLED=0 GOOS=linux go build -o ai-service main.go\n\n# Stage 2: Final lightweight image (Focusing on minimal size and security)\nFROM alpine:latest\nWORKDIR \u002Froot\u002F\n\n# Copy only the finished binary from the builder stage\nCOPY --from=builder \u002Fapp\u002Fai-service .\n\n# Create a folder for storing AI models or temporary data\nRUN mkdir data\n\nEXPOSE 8080\nCMD [\".\u002Fai-service\"]\n\u003C\u002Fcode>\u003C\u002Fpre>\u003Ch2>Go Code Example: Simple Health Check &amp; Runtime Info\u003C\u002Fh2>\u003Cp>When building an AI system, knowing your available Resources is just as important as the model itself. We will write code to verify if our environment is running the correct Go version and determine how many CPUs the system can access. This data is critical for Parallel Processing and managing task queues for your AI models.\u003C\u002Fp>\u003Cp>\u003Cstrong>File: main.go\u003C\u002Fstrong>\u003C\u002Fp>\u003Cpre>\u003Ccode>package main\n\nimport (\n\t\"fmt\"\n\t\"net\u002Fhttp\"\n\t\"runtime\"\n)\n\nfunc main() {\n\t\u002F\u002F Create a route for system status monitoring (Health Check)\n\thttp.HandleFunc(\"\u002F\", func(w http.ResponseWriter, r *http.Request) {\n\t\t\u002F\u002F Fetch system runtime information\n\t\tgoVersion := runtime.Version() \u002F\u002F Check Go version\n\t\tnumCPU := runtime.NumCPU()    \u002F\u002F Check available Logical CPUs\n\n\t\t\u002F\u002F Format the response for display\n\t\tresponse := fmt.Sprintf(\n\t\t\t\"Welcome to AI Lab!\\n\"+\n\t\t\t\"-------------------\\n\"+\n\t\t\t\"Go Version: %s\\n\"+\n\t\t\t\"Available CPUs: %d\\n\"+\n\t\t\t\"System Status: Online\",\n\t\t\tgoVersion, numCPU,\n\t\t)\n\t\t\n\t\tfmt.Fprint(w, response)\n\t})\n\n\tfmt.Println(\"🚀 AI Lab Server is running on port 8080...\")\n\t\n\t\u002F\u002F Start the server and check for initial errors\n\tif err := 
http.ListenAndServe(\":8080\", nil); err != nil {\n\t\tfmt.Printf(\"Failed to start server: %v\\n\", err)\n\t}\n}\n\u003C\u002Fcode>\u003C\u002Fpre>\u003Ch2>🎯 Challenge: Daily Mission\u003C\u002Fh2>\u003Cp>To ensure your AI Lab is truly production-ready, I want everyone to take the code above and run it through Docker yourself. Here are the short steps to transition from being a reader to a doer:\u003C\u002Fp>\u003Col>\u003Cli>\u003Cp>\u003Cstrong>Prepare Files:\u003C\u002Fstrong> Create \u003Ccode>main.go\u003C\u002Fcode> and \u003Ccode>Dockerfile\u003C\u002Fcode> in the same folder.\u003C\u002Fp>\u003C\u002Fli>\u003Cli>\u003Cp>\u003Cstrong>Build Image:\u003C\u002Fstrong> Open your Terminal and run:\u003C\u002Fp>\u003Cp>\u003Ccode>docker build -t ai-lab-test .\u003C\u002Fcode>\u003C\u002Fp>\u003C\u002Fli>\u003Cli>\u003Cp>\u003Cstrong>Run Container:\u003C\u002Fstrong> Start it up using:\u003C\u002Fp>\u003Cp>\u003Ccode>docker run -p 8080:8080 ai-lab-test\u003C\u002Fcode>\u003C\u002Fp>\u003C\u002Fli>\u003Cli>\u003Cp>\u003Cstrong>Verify:\u003C\u002Fstrong> Navigate to \u003Ccode>localhost:8080\u003C\u002Fcode> in your browser to see your AI Lab in action!\u003C\u002Fp>\u003C\u002Fli>\u003C\u002Fol>\u003Ch3>🔥 Level Up! (Bonus Assignment)\u003C\u002Fh3>\u003Cp>For those who want to go the extra mile, try adding an \u003Ccode>\u002Fenv\u003C\u002Fcode> route to your Go code to display an Environment Variable passed from Docker.\u003C\u002Fp>\u003Cul>\u003Cli>\u003Cp>\u003Cstrong>Hint:\u003C\u002Fstrong> Use the \u003Ccode>os.Getenv(\"APP_NAME\")\u003C\u002Fcode> function in Go. 
When running your Docker container, add the flag \u003Ccode>-e APP_NAME=MyAILab\u003C\u002Fcode> to see it work!\u003C\u002Fp>\u003C\u002Fli>\u003C\u002Ful>\u003Cp>\u003C\u002Fp>\u003Cdiv data-type=\"horizontalRule\">\u003Chr>\u003C\u002Fdiv>\u003Ch2>Conclusion: The First Step Toward Production Standards\u003C\u002Fh2>\u003Cp>Setting up your environment with Docker and Go 1.2x today might seem like basic backend work. However, for AI-First applications, this is your defense shield against system discrepancies that occur when integrating complex AI models in the future.\u003C\u002Fp>\u003Cp>Now that our AI Lab is stable and containerized, the next critical decision is: \u003Cstrong>\"How do we move data in and out of this lab with maximum efficiency?\"\u003C\u002Fstrong>\u003C\u002Fp>\u003Ch3>Coming Up Next | EP.143: RESTful vs. RPC: The Battle for AI Communication Supremacy\u003C\u002Fh3>\u003Cp>When handling massive amounts of data—such as Vector Data or Long Context—choosing a communication protocol isn't just about preference; it’s about Performance and Scalability.\u003C\u002Fp>\u003Cp>\u003Cstrong>What we’ll explore in EP.143:\u003C\u002Fstrong>\u003C\u002Fp>\u003Cul>\u003Cli>\u003Cp>\u003Cstrong>RESTful API:\u003C\u002Fstrong> Is it still king when handling long-running AI streams?\u003C\u002Fp>\u003C\u002Fli>\u003Cli>\u003Cp>\u003Cstrong>gRPC \u002F ConnectRPC:\u003C\u002Fstrong> Why AI Engineers are shifting toward Binary protocols.\u003C\u002Fp>\u003C\u002Fli>\u003Cli>\u003Cp>\u003Cstrong>Latency Matters:\u003C\u002Fstrong> A comparison between JSON and Protobuf speeds when fetching AI model responses.\u003C\u002Fp>\u003C\u002Fli>\u003Cli>\u003Cp>\u003Cstrong>Implementation:\u003C\u002Fstrong> How to write clean, manageable Go gateways for AI communication.\u003C\u002Fp>\u003C\u002Fli>\u003C\u002Ful>\u003Cp>Get your tools ready, and let's upgrade your AI system's communication channels in the next episode!\u003C\u002Fp>\u003Cp>\u003Cstrong>Follow Superdev 
Academy on all platforms:\u003C\u002Fstrong>\u003C\u002Fp>\u003Cul>\u003Cli>\u003Cp>\u003Cstrong>🔵 Facebook: \u003C\u002Fstrong>\u003Ca target=\"_blank\" rel=\"noopener\" class=\"ng-star-inserted\" href=\"https:\u002F\u002Fwww.facebook.com\u002Fsuperdev.academy.th\">\u003Cstrong>Superdev Academy Thailand\u003C\u002Fstrong>\u003C\u002Fa>\u003C\u002Fp>\u003C\u002Fli>\u003Cli>\u003Cp>\u003Cstrong>🎬 YouTube: \u003C\u002Fstrong>\u003Ca target=\"_blank\" rel=\"noopener\" class=\"ng-star-inserted\" href=\"https:\u002F\u002Fwww.youtube.com\u002F@SuperdevAcademy\">\u003Cstrong>Superdev Academy Channel\u003C\u002Fstrong>\u003C\u002Fa>\u003C\u002Fp>\u003C\u002Fli>\u003Cli>\u003Cp>\u003Cstrong>📸 Instagram: \u003C\u002Fstrong>\u003Ca target=\"_blank\" rel=\"noopener\" class=\"ng-star-inserted\" href=\"https:\u002F\u002Fwww.instagram.com\u002Fsuperdevacademy\u002F\">\u003Cstrong>@superdevacademy\u003C\u002Fstrong>\u003C\u002Fa>\u003C\u002Fp>\u003C\u002Fli>\u003Cli>\u003Cp>\u003Cstrong>🎬 TikTok: \u003C\u002Fstrong>\u003Ca target=\"_blank\" rel=\"noopener\" class=\"ng-star-inserted\" href=\"https:\u002F\u002Fwww.tiktok.com\u002F@superdevacademy?lang=th-TH\">\u003Cstrong>@superdevacademy\u003C\u002Fstrong>\u003C\u002Fa>\u003C\u002Fp>\u003C\u002Fli>\u003Cli>\u003Cp>\u003Cstrong>🌐 Website: \u003C\u002Fstrong>\u003Ca target=\"_blank\" rel=\"noopener noreferrer\" href=\"http:\u002F\u002Fsuperdevacademy.com\">\u003Cstrong>superdevacademy.com\u003C\u002Fstrong>\u003C\u002Fa>\u003C\u002Fp>\u003C\u002Fli>\u003C\u002Ful>\u003Cp>\u003C\u002Fp>","4lxzxfx8bu6_3jwg44vq6x.png","https:\u002F\u002Ftwsme-r2.tumwebsme.com\u002Fsclblg987654321\u002Ftzjc8kzpdifuql6\u002F4lxzxfx8bu6_3jwg44vq6x.png","2026-05-11 05:06:01.510Z","",{"keywords":15,"locale":46,"school_blog":56},[16,23,28,33,37,41],{"collectionId":17,"collectionName":18,"created":19,"created_by":13,"id":20,"name":21,"updated":22,"updated_by":13},"sclkey987654321","school_keywords","2026-03-04 
08:20:14.253Z","ah6lvy4x8qe08l5","Golang","2026-04-10 16:07:26.172Z",{"collectionId":17,"collectionName":18,"created":24,"created_by":13,"id":25,"name":26,"updated":27,"updated_by":13},"2026-03-04 08:20:11.547Z","ey3puyme01a9bsw","Go","2026-04-10 16:07:25.893Z",{"collectionId":17,"collectionName":18,"created":29,"created_by":13,"id":30,"name":31,"updated":32,"updated_by":13},"2026-03-04 08:44:18.652Z","jr5zczy6qrxmd88","Docker","2026-04-10 16:12:43.264Z",{"collectionId":17,"collectionName":18,"created":34,"created_by":13,"id":35,"name":36,"updated":34,"updated_by":13},"2026-05-11 04:57:35.566Z","phwca73gad24kb4","AI Microservice",{"collectionId":17,"collectionName":18,"created":38,"created_by":13,"id":39,"name":40,"updated":38,"updated_by":13},"2026-05-11 04:57:42.175Z","xpsrw991lozzu5h","Multi-stage Build",{"collectionId":17,"collectionName":18,"created":42,"created_by":13,"id":43,"name":44,"updated":45,"updated_by":13},"2026-03-04 08:31:29.142Z","hrqdq7kjl5lzjmi","AI","2026-04-10 16:07:41.358Z",{"code":47,"collectionId":48,"collectionName":49,"created":50,"flag":51,"id":52,"is_default":53,"label":54,"updated":55},"en","pbc_1989393366","locales","2026-01-22 11:00:02.726Z","twemoji:flag-united-states","qv9c1llfov2d88z",false,"English","2026-04-10 15:42:46.825Z",{"category":57,"collectionId":58,"collectionName":59,"created":60,"expand":61,"id":75,"slug":76,"updated":77,"views":78},"wqxt7ag2gn7xcmk","pbc_2105096300","school_blogs","2026-05-11 04:57:58.796Z",{"category":62},{"blogIds":63,"collectionId":64,"collectionName":65,"created":66,"created_by":13,"id":57,"image":67,"image_alt":13,"image_path":68,"label":69,"name":70,"priority":71,"publish_at":72,"scheduled_at":13,"status":73,"updated":74,"updated_by":13},[],"sclcatblg987654321","school_category_blogs","2026-03-04 08:33:53.210Z","59ty92ns80w_15oc1implw.png","https:\u002F\u002Ftwsme-r2.tumwebsme.com\u002Fsclcatblg987654321\u002Fwqxt7ag2gn7xcmk\u002F59ty92ns80w_15oc1implw.png",{"en":70,"th":70},"Golang The 
Series",1,"2026-03-16 04:39:38.440Z","published","2026-04-25 02:32:15.470Z","v8fnxcrvfcvy974","golang-ai-lab-docker-setup-guide","2026-05-11 16:33:38.967Z",124,"tzjc8kzpdifuql6",[20,25,30,35,39,43],"Get your AI infrastructure ready! Learn how to create lean Docker images with Multi-stage builds and leverage Go 1.22+ features for a stable, production-ready AI environment.","Golang The Series EP.142: Setting up the AI Lab: Managing Environments with Docker and Go 1.2x","2026-05-11 11:02:36.310Z",{"th":76,"en":76}]