Compiling for Linux

Welcome to the year 2026! You are a happy-go-lucky fullstack developer, and like me - surprise, this story is about me - you think you’d never see the day where C/C++ is your thing…

ehhhhhhhh sike! WRONG!

The Kool-Aid mascot breaks into your living room - you are not pleased

Your team is working on deploying AI models on the edge - i.e. some poor person's device - and you are asked to start working with Llama.cpp… Yup, that's my life right now. I never really expected to start compiling C/C++ software - but here I am… learning! 🤓

Speaking of which, I recently took some training on Go and I have this Golang overview post to show for it. You could say I was kind of aching for more exposure to lower-level / compiled languages… So, it's kind of funny how this got dropped onto my lap all of a sudden. God - you sure do have a good sense of timing, don't you? 😅

Anyhow, if you're reading - maybe you're in a similar boat and you've never had to compile a program from source for deployment. So, let me break it down for you. We'll take a look at what this looks like on Linux because - yeah, that's what I'm working on. And here's the thing: if you're like me, you've probably taken deploying your code for granted! By the end of this post, you'll have a good idea of what to expect when compiling code for deployment on Linux.

Interpreted vs Compiled Languages

The distinction between “compiled code” and “interpreted code” lies in the timing and method of translation from human-readable source code into machine-executable instructions.

  • Compiled code is translated into machine code before it is executed, resulting in a standalone executable file that generally runs faster.
  • Interpreted code is translated line-by-line, at runtime, by another program called an interpreter.
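To make the distinction concrete, here's a quick sketch on a typical Linux box (assuming gcc and python3 are installed - the file names are just examples): the C program has to be translated up front, while the Python one is translated on the fly by the interpreter.

```shell
# Compiled: translate first, then run the standalone binary
cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { printf("hello from C\n"); return 0; }
EOF
gcc hello.c -o hello    # translation happens here, ahead of time
./hello                 # the CPU now runs machine code directly

# Interpreted: the python3 interpreter translates as it executes
python3 -c 'print("hello from Python")'
```

Notice that `hello` is a self-sufficient file you could hand to someone (on a compatible system), while the Python one-liner needs a python3 install wherever it runs.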

That's right - remember all those words you typed into your editor? They have to be converted into 1s and 0s that your CPU can understand. Got an LLM? You'll likely want to run that in a GPU environment… Notice that we're talking about real hardware here. The circuits in your mom's computer are different from the ones in your gaming PC. Converting code into electrical signals isn't trivial. You're just used to a high-level, interpreted language that handles all that stuff for you!

Don't worry - breeeeaaathe…

Sheldon from The Big Bang Theory hyperventilating into a brown paper bag

I was also scared at first, but it’s not so bad! 😁


Compilation on Linux

Let’s take a look at what it looks like to compile C/C++ code on Linux. Here’s the breakdown, take your time to soak it in:

Stage          Tool              Input           Output       What's Happening
Preprocessing  cpp               .c / .h         .i           Expands macros (#define) and includes headers (#include).
Compilation    gcc / icx / cc1   .i              .s           Converts C/C++ code into CPU-specific assembly instructions.
Assembly       as                .s              .o           Turns assembly into machine code (object files).
Linking        ld                .o / .a / .so   Executable   Connects object files and libraries into a final program.

Remember - we're in a new world here, so there are some new tools to be aware of. You'll hear about things like make or gcc, and architecture-specific tools like the Intel oneAPI compiler I had to work with! There are all kinds of compilers, but the roadmap is still the same. We're 1) preprocessing the code, 2) compiling it into assembly-level instructions, which get converted into 3) machine code - the thing your CPU can actually understand - and then 4) linking those object files into a final binary. Voila! 🤖

Congrats, we've got raw machine instructions for the CPU. The catch? Each compilation target is different. The Intel oneAPI compiler? It's built and optimized for Intel hardware. It knows all the Intel-isms and shortcuts. Your compiler matters, the compilation instructions matter, your target's architecture matters, and so does the OS.

So how do we deal with all these differences? What does dealing with compiled code across different Linux distributions look like? Enter the world of portability.

Portability

The Dependency Trap

If you've ever tried to compile C/C++ code on one Linux system and execute it on another, you've probably hit something I did:

GLIBC_2.34 not found

This happens because Linux distributions ship different versions of system libraries — especially glibc (the GNU C Library) — the libraries your executable depends on. You can check an executable's dependencies on Linux with ldd like so:

# Even something simple like the `ls` binary has dependencies
$ ldd /bin/ls
    linux-vdso.so.1 (0x00007ffe683f5000)
    libselinux.so.1 => /lib/x86_64-linux-gnu/libselinux.so.1 (0x00007febe22f9000)
    libcap.so.2 => /lib/x86_64-linux-gnu/libcap.so.2 (0x00007febe22ed000)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007febe20f7000)
    libpcre2-8.so.0 => /lib/x86_64-linux-gnu/libpcre2-8.so.0 (0x00007febe2048000)
    /lib64/ld-linux-x86-64.so.2 (0x00007febe235a000)

The rule? Build on the oldest system you want to support. Linux distributions are generally backwards compatible — but not forwards compatible. If you compile on a newer system, your binary may require a newer glibc version than an older system provides. The OS version effectively determines the system library versions you’ll have available. (In theory! 😉)
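If you want to know ahead of time whether a binary will hit that error, you can inspect which glibc symbol versions it actually demands with objdump (part of binutils). A sketch, using /bin/ls as the guinea pig:

```shell
# List every versioned glibc symbol the binary requires.
# If the highest version shown is newer than the glibc on your
# target system, you'll get "GLIBC_X.YZ not found" at runtime.
objdump -T /bin/ls | grep -o 'GLIBC_[0-9.]*' | sort -Vu
```

Run that against your own executable before shipping it, and compare the highest version against the oldest distro you need to support.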

Static vs Dynamic Linking

Remember that thing about linking, the last stage of the compilation process? Well, that's where these dependencies get wired in. By now, you know dependencies may not always play nice across different Linux distributions. So, wouldn't it be nice to avoid dependencies altogether? Enter statically vs dynamically linked binaries.

Static Linking

Static linking embeds all necessary library code into the final executable file at compile time. You’ll get a highly portable executable that runs without external dependencies, but you’ll also get a larger file. Why? Because the binary is self-contained. All the necessary libraries are included in the binary, so there’s no need to search for them at runtime.

Dynamic Linking

Dynamic linking keeps library code in separate files (like .dll on Windows or .so on Linux). Dynamic linking is used to optimize system resources by loading shared libraries into memory only when needed, allowing multiple applications to share a single copy of a library. It reduces executable file sizes on disk, saves RAM, and simplifies software updates by allowing library updates without recompiling dependent applications.

Remember the ldd command we used earlier? Interestingly enough, you can run ldd on itself. And guess what… it reports no dynamic dependencies! (Fun fact: on most systems ldd is actually a shell script that wraps the dynamic loader, so there are no shared libraries for it to list.) Here's the result… cool right?

# ldd has no dynamic dependencies! (★ ω ★)
ldd /bin/ldd
    not a dynamic executable

glibc vs musl : Choosing Your Foundation

Okay, so we talked about static vs dynamic linking, but achieving these different linking strategies in practice typically means choosing which C standard library you compile against. On Linux there are primarily two implementations of the C standard library: glibc (which we've discussed) and musl. Here are some takeaways for each:

glibc

  • Default on most major distributions
  • Optimized for dynamic linking
  • Large and feature-rich
  • Can be problematic when statically linked

musl

  • Designed for static linking
  • Smaller and simpler
  • Ideal for portable binaries
  • Common in Alpine Linux

Putting it all together

We talked about how compiling C/C++ code for Linux works and why portability can become a real constraint. Now it comes down to a practical decision: what are you optimizing for — portability or performance?

If you don’t control the deployment environment, you need to assume the worst. Different distros. Different library versions. Different system configurations. In that case, optimize for maximum portability. Ship a single, self-contained binary with no external dependencies. In practice, that often means compiling against musl to reduce runtime surprises.

If you do control the deployment target — your own servers, fixed infrastructure, predictable environments — then performance and ecosystem integration matter more. Compiling against glibc typically gives you better compatibility with system tooling and access to a richer feature set. You trade some portability for tighter integration and potential performance benefits.

Here’s the bottom line: understanding the build pipeline is not optional when you’re deploying compiled applications on Linux. If you ignore compilation requirements, you’ll ship something fragile. If you know your build target, you’ll ship something robust.