Compiling for Linux
Welcome to the year 2026! You are a happy-go-lucky full-stack developer, and like me - surprise, this story is about me - you thought you'd never see the day when C/C++ would be your thing…
ehhhhhhhh sike! WRONG!

Your team is working on deploying AI models on the edge - i.e. some poor person's device - and you are asked to start working with Llama.cpp… Yup, that's my life right now. I never really expected to start compiling C/C++ software - but here I am… learning!
Speaking of which, I recently took some training on Go, and I have this Golang overview post to show for it. You could say I was aching for more exposure to lower-level, compiled languages… so it's kind of funny how this got dropped into my lap all of a sudden. God - you sure do have a good sense of timing, don't you?
Anyhow, if you're reading, maybe you're in a similar boat and you've never had to compile software from source for deployment. So, let me break it down for you. We'll take a look at what this looks like on Linux because - yeah, that's what I'm working on. And here's the thing: if you're like me, you've probably taken deploying your code for granted! By the end of this post, you'll have a good idea of what to expect when compiling code for deployment on Linux.
Interpreted vs Compiled Languages
The distinction between "compiled code" and "interpreted code" lies in the timing and method of translation from human-readable source code into machine-executable instructions.
- Compiled code is translated into machine code before it is executed, resulting in a standalone executable file that generally runs faster.
- Interpreted code is translated line-by-line, at runtime, by another program called an interpreter.
That's right - remember all those words you typed into your editor? They have to be converted into 1s and 0s that your CPU can understand. Got an LLM? You'll likely want to run that on a GPU… Notice that we're talking about real hardware here. The circuits in your mom's computer are different from the ones in your gaming PC. Converting code into electrical signals isn't trivial. You're just used to a high-level, interpreted language that handles all of that for you!
Don't worry - breeeeaaathe…

I was also scared at first, but it's not so bad!
Compilation on Linux
Let's take a look at what it takes to compile C/C++ code on Linux. Here's the breakdown; take your time to soak it in:
| Stage | Tool | Input | Output | What's Happening |
|---|---|---|---|---|
| Preprocessing | cpp | .c / .h | .i | Expands macros (#define) and includes headers (#include). |
| Compilation | gcc / icx / cc1 | .i | .s | Converts C/C++ code into CPU-specific assembly instructions. |
| Assembly | as | .s | .o | Turns assembly into machine code (object files). |
| Linking | ld | .o / .a / .so | Executable | Connects object files and libraries into a final program. |
Remember - we're in a new world here, so there are some new tools to be aware of. You'll hear about things like make or gcc, and architecture-specific tools like the Intel oneAPI compiler I had to work with! There are all kinds of compilers, but the roadmap is always the same. We're 1) preprocessing the code, 2) compiling it into assembly for the target CPU, 3) assembling that into machine code - the stuff your CPU can actually understand - and then 4) linking the resulting object files and libraries into a final binary. Voila!
Congrats - we've got raw machine instructions for the CPU. The catch? Each compilation target is different. The Intel oneAPI compiler? It's built and optimized for Intel hardware. It knows all the Intel-isms and shortcuts. Your compiler matters, the compilation flags matter, your target's architecture matters, and so does the OS.
So how do we deal with all these differences? What does dealing with compiled code across different Linux distributions look like? Enter the world of portability.
Portability
The Dependency Trap
If you've ever tried to compile C/C++ code on one Linux system and execute it on another, you've probably hit something I did:
version `GLIBC_2.34' not found
This happens because Linux distributions ship different versions of system libraries - especially glibc (the GNU C Library) - the things your executable depends on. You can check a binary's dependencies on Linux with ldd, like so:
# Even something simple like the `ls` binary has dependencies
$ ldd /bin/ls
linux-vdso.so.1 (0x00007ffe683f5000)
libselinux.so.1 => /lib/x86_64-linux-gnu/libselinux.so.1 (0x00007febe22f9000)
libcap.so.2 => /lib/x86_64-linux-gnu/libcap.so.2 (0x00007febe22ed000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007febe20f7000)
libpcre2-8.so.0 => /lib/x86_64-linux-gnu/libpcre2-8.so.0 (0x00007febe2048000)
/lib64/ld-linux-x86-64.so.2 (0x00007febe235a000)
The rule? Build on the oldest system you want to support. Linux distributions are generally backwards compatible - but not forwards compatible. If you compile on a newer system, your binary may require a newer glibc version than an older system provides. The OS version effectively determines the system library versions you'll have available. (In theory!)
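You can see this requirement directly: every versioned symbol a dynamically linked binary imports is recorded in its dynamic symbol table, and the highest GLIBC_x.y version listed is the minimum glibc your target system must ship. A sketch using objdump (from binutils):

```shell
# List the glibc symbol versions /bin/ls depends on, highest last
objdump -T /bin/ls | grep -o 'GLIBC_[0-9.]*' | sort -Vu
```

If the last line prints GLIBC_2.34, a system shipping glibc 2.31 will refuse to run the binary with exactly the error shown above.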
Static vs Dynamic Linking
Remember that thing about linking, the last stage of the compilation process? Well, that's where dependencies are formed. By now, you know dependencies may not always play nice across different Linux distributions. So, wouldn't it be nice to avoid dependencies altogether? Enter statically vs dynamically linked binaries.
Static Linking
Static linking embeds all necessary library code into the final executable at compile time. You get a highly portable executable that runs without external dependencies, but you also get a larger file. Why? Because the binary is self-contained: all the necessary library code is baked in, so there's nothing to search for at runtime.
Dynamic Linking
Dynamic linking keeps library code in separate files (like .dll on Windows or .so on Linux). Dynamic linking is used to optimize system resources by loading shared libraries into memory only when needed, allowing multiple applications to share a single copy of a library. It reduces executable file sizes on disk, saves RAM, and simplifies software updates by allowing library updates without recompiling dependent applications.
Remember the ldd command we used earlier? Point it at anything that isn't a dynamically linked ELF binary - a statically linked program, for instance - and it prints a telling message:

$ ldd /bin/ldd
not a dynamic executable

Fun fact: on glibc systems, ldd itself is actually a shell script that wraps the dynamic loader, which is why it reports this message about itself. Cool, right?
glibc vs musl: Choosing Your Foundation
Okay, so we talked about static vs dynamic linking, but achieving these different compilation types in practice typically means compiling your code against a different implementation of the C standard library. On Linux there are two major implementations: glibc (which we've discussed) and musl. Here are some takeaways on the two:
glibc
- Default on most major distributions
- Optimized for dynamic linking
- Large and feature-rich
- Can be problematic when statically linked
musl
- Designed for static linking
- Smaller and simpler
- Ideal for portable binaries
- Common in Alpine Linux
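A common way to get a musl-linked static binary without changing your own system is to build inside Alpine Linux. Here's a sketch as a multi-stage Dockerfile (the image tag and file names are illustrative, and it assumes a hello.c next to the Dockerfile):

```dockerfile
# Stage 1: compile statically against musl inside Alpine
FROM alpine:3.19 AS build
RUN apk add --no-cache gcc musl-dev
COPY hello.c /src/hello.c
RUN gcc -static -Os /src/hello.c -o /hello

# Stage 2: ship only the binary; it needs nothing else to run
FROM scratch
COPY --from=build /hello /hello
ENTRYPOINT ["/hello"]
```

Because the final image starts FROM scratch - no libc, no shell, nothing - the container only works if the binary is genuinely self-contained, which makes it a nice smoke test for portability.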
Putting it all together
We talked about how compiling C/C++ code for Linux works and why portability can become a real constraint. Now it comes down to a practical decision: what are you optimizing for - portability or performance?
If you don't control the deployment environment, you need to assume the worst. Different distros. Different library versions. Different system configurations. In that case, optimize for maximum portability. Ship a single, self-contained binary with no external dependencies. In practice, that often means compiling against musl to reduce runtime surprises.
If you do control the deployment target - your own servers, fixed infrastructure, predictable environments - then performance and ecosystem integration matter more. Compiling against glibc typically gives you better compatibility with system tooling and access to a richer feature set. You trade some portability for tighter integration and potential performance benefits.
Here's the bottom line: understanding the build pipeline is not optional when you're deploying compiled applications on Linux. If you ignore compilation requirements, you'll ship something fragile. If you know your build target, you'll ship something robust.