Beyond %d: The Hidden World of printf Formats That Most Devs Never Use
You know what's embarrassing? Copy-pasting chmod 755 from Stack Overflow for the 47th time without knowing what those numbers actually mean. Or printing memory addresses as decimals and wondering why 140734799802384 doesn't tell you jack shit about where your program crashed.
We're all guilty of printf laziness. We learn %d and %s in our CS class and call it a day, while other format specifiers sit there like neglected tools in a garage. Meanwhile, your CPU is literally thinking in binary and hex, and you're forcing it to translate everything to decimal like some kind of numerical middleman.
Time to stop being that person who debugs in the wrong number base. Your sanity depends on it.

The Problem: Debugging in the Wrong Base
So I was working on this bloom filter implementation (yeah, I know, everyone does bloom filters). My hash functions were spitting out numbers and I had no clue what was happening with the collision rates.
// What I was doing - completely useless
printf("Hash value: %d\n", hash);
// Output: 2847592847
// Me: "What the hell does this even mean?"
Then my friend introduced me to hexadecimal printing (ChatGPT, they call him 🥴):
// The game changer
printf("Hash value: 0x%x\n", hash);
// Output: 0xa9bad18f
// Me: "Oh! I can see the bit patterns now!"
Suddenly, I could see the actual bit distribution, spot the collision patterns, and fix my hash function. This is why number bases matter. (Also why I should've paid more attention in systems programming 🙃)
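To make the "spot the collision patterns" bit concrete, here's roughly the kind of thing I mean - a toy sketch (made-up hash values, and assuming a power-of-two table size) where printing in hex makes it obvious that everything is piling into the same bucket:

#include <stdio.h>
#include <stdint.h>

#define TABLE_SIZE 256  // power of two, so the bucket index is just the low bits

int main(void) {
    // Made-up hash values for illustration - notice they share the same low byte
    uint32_t hashes[] = { 0xa9bad18f, 0x5512d18f, 0x00c3d18f };
    for (size_t i = 0; i < sizeof hashes / sizeof hashes[0]; i++) {
        printf("hash 0x%08x -> bucket 0x%02x\n",
               hashes[i], hashes[i] & (TABLE_SIZE - 1));
    }
    return 0;
}

All three land in bucket 0x8f - the kind of clustering you'd never spot from the decimal values.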
The Cast of Characters
%x and %X - Hexadecimal (Base 16)
Not gonna lie, this one changed my debugging game completely. I used to stare at memory addresses like 140734799802384 and think "well, that's a number I guess." Then someone showed me hex (again, the same guy) and suddenly 0x7fff5fbff010 actually tells a story.
You'll use this for:
- Memory addresses (finally readable)
- Bit manipulation stuff
- Color codes (now I know why CSS uses #ff0000)
- Hash values that don't look like phone numbers
int value = 255;
printf("Hex (lowercase): %x\n", value); // ff
printf("Hex (uppercase): %X\n", value); // FF
printf("With prefix: 0x%x\n", value);   // 0xff
Pro tip: Use %#x for an automatic 0x prefix. Saves you from typing it every damn time. (Learned this embarrassingly late in my coding journey 😅)
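Quick sanity check of what the # flag actually prints:

int value = 255;
printf("%#x\n", value); // 0xff
printf("%#X\n", value); // 0XFF (the prefix goes uppercase too)
printf("%#o\n", value); // 0377 (same flag works for octal)

One gotcha: with %#x, a value of zero prints as plain 0 with no prefix.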
%o and %#o - Octal (Base 8)
Okay, real talk - I thought octal was dead until I had to deal with Unix file permissions. Turns out those chmod commands aren't just random numbers you copy from Stack Overflow (guilty as charged 🙋‍♂️).
int perms = 0644;
printf("File permissions: %o\n", perms); // 644
printf("With prefix: %#o\n", perms);     // 0644
So when you see chmod 755, it's actually:
- 7 (111 in binary) = owner can read + write + execute
- 5 (101 in binary) = group can read + execute
- 5 (101 in binary) = others can read + execute
Makes way more sense than memorizing magic numbers, right? (Though I still Google "chmod permissions" sometimes 🤷♂️)
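If you want to convince yourself instead of taking my word for it, here's a little sketch that pulls each 3-bit triplet out of a mode like 0755 with shifts and masks (plain bit math, nothing POSIX-specific):

#include <stdio.h>

int main(void) {
    int mode = 0755; // octal literal
    const char *who[] = { "owner", "group", "others" };
    for (int i = 0; i < 3; i++) {
        int bits = (mode >> (3 * (2 - i))) & 07; // grab one 3-bit triplet
        printf("%-7s %d%d%d -> %c%c%c\n", who[i],
               (bits >> 2) & 1, (bits >> 1) & 1, bits & 1,
               (bits & 04) ? 'r' : '-',
               (bits & 02) ? 'w' : '-',
               (bits & 01) ? 'x' : '-');
    }
    return 0;
}
// owner   111 -> rwx
// group   101 -> r-x
// others  101 -> r-x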
%b - Binary
Standard C doesn't have this (or maybe it does - go check yourself and tell me in the comments), but Go and most modern languages do. This one's clutch when you're doing bit manipulation and your brain hurts from converting everything manually.
// Go example
value := 42
fmt.Printf("Binary: %b\n", value) // 101010
fmt.Printf("8-bit: %08b\n", value) // 00101010
// But wait... this actually works on my machine?
int value = 42;
printf("Binary: %b\n", value); // 101010 - wtf?
WARNING
Something's not adding up here. I'll figure this out and update you later... (or drop your findings in the comments if you know what's going on).
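In the meantime, if your toolchain doesn't know %b, a dumb portable fallback is only a few lines (a sketch, not a library-grade solution):

#include <stdio.h>
#include <stdint.h>

// Print the low `bits` bits of `value`, most significant bit first.
static void print_binary(uint32_t value, int bits) {
    for (int i = bits - 1; i >= 0; i--)
        putchar(((value >> i) & 1) ? '1' : '0');
}

int main(void) {
    print_binary(42, 8); // prints 00101010
    putchar('\n');
    return 0;
}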
When These Actually Matter
Network Debugging
// IP address debugging
uint32_t ip = 0xC0A80001; // 192.168.0.1
printf("IP (hex): 0x%08X\n", ip);  // 0xC0A80001
printf("IP (decimal): %u\n", ip);  // 3232235521
printf("IP (octal): %o\n", ip);    // 30052000001

// Which one screams "192.168.0.1" at you?
// The hex one, obviously.
I spent way too long staring at decimal IP representations wondering why my network code was broken. Turns out printing them in hex immediately shows you the byte structure. (Network programming is hard enough without making it harder on yourself 😮💨)
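And once the bytes are visible, pulling the octets back out is just shifts and masks (assuming the address is sitting in host byte order here):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t ip = 0xC0A80001; // 192.168.0.1
    printf("%u.%u.%u.%u\n",
           (ip >> 24) & 0xFF,  // 192
           (ip >> 16) & 0xFF,  // 168
           (ip >> 8) & 0xFF,   //   0
           ip & 0xFF);         //   1
    return 0;
}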
Bit Manipulation
// Setting flags
#define FLAG_READ    0x01 // 00000001
#define FLAG_WRITE   0x02 // 00000010
#define FLAG_EXECUTE 0x04 // 00000100

int permissions = FLAG_READ | FLAG_WRITE;
printf("Permissions: 0x%02x\n", permissions); // 0x03
printf("Permissions: %08b\n", permissions);   // 00000011 (if supported)
When you're doing bitwise operations, seeing the actual bits saves your sanity. Trust me on this one. (Or don't, and spend hours debugging like I did 🤡)
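Same deal when you're testing or clearing flags - a quick sketch of the usual dance, with the hex output keeping you honest:

#include <stdio.h>

#define FLAG_READ    0x01
#define FLAG_WRITE   0x02
#define FLAG_EXECUTE 0x04

int main(void) {
    int permissions = FLAG_READ | FLAG_WRITE;     // 0x03

    if (permissions & FLAG_WRITE)                 // test a bit
        printf("write is set\n");

    permissions &= ~FLAG_WRITE;                   // clear a bit
    printf("after clear: 0x%02x\n", permissions); // 0x01

    permissions |= FLAG_EXECUTE;                  // set a bit
    printf("after set: 0x%02x\n", permissions);   // 0x05
    return 0;
}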
Figuring Out Endianness (Because Your CPU Might Be Backwards)
// Is my system little-endian or big-endian?
uint32_t test = 0x12345678;
uint8_t *bytes = (uint8_t*)&test;

printf("Full value: 0x%08x\n", test);
for (int i = 0; i < 4; i++) {
    printf("Byte %d: 0x%02x\n", i, bytes[i]);
}
// Little-endian: 78 56 34 12
// Big-endian: 12 34 56 78
This saved my ass when I was doing network programming and couldn't figure out why my packets were garbage. Turns out network byte order is big-endian, but my machine was little-endian. Who knew? (bluffing 🙃)
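If you hit the same wall, the fix is the htonl/ntohl family from <arpa/inet.h> (that's POSIX; on Windows it lives in winsock). A quick sketch in the same byte-dump style:

#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h> // htonl/ntohl (POSIX)

int main(void) {
    uint32_t host = 0x12345678;
    uint32_t net = htonl(host); // host order -> network (big-endian) order

    uint8_t *bytes = (uint8_t*)&net;
    printf("host value: 0x%08x\n", host);
    printf("wire bytes: %02x %02x %02x %02x\n",
           bytes[0], bytes[1], bytes[2], bytes[3]);
    // Prints 12 34 56 78 no matter what your CPU's native byte order is.
    return 0;
}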
Other Formatting Tricks Worth Knowing
Making Your Output Look Less Like Garbage
int val = 42;
printf("%8d\n", val);  // "      42" (right-aligned)
printf("%-8d\n", val); // "42      " (left-aligned)
printf("%08d\n", val); // "00000042" (zero-padded)
printf("%08x\n", val); // "0000002a" (hex with padding)
Zero-padding is clutch when you're printing memory dumps or want things to line up nicely. Makes your output look professional instead of like a toddler's finger painting.
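Here's the kind of "memory dump" I mean - a tiny hex-dump loop sketch where the zero-padded offsets and bytes fall into neat columns:

#include <stdio.h>
#include <string.h>

int main(void) {
    const unsigned char data[] = "Hello, printf!";
    size_t len = strlen((const char *)data);

    for (size_t i = 0; i < len; i++) {
        if (i % 8 == 0)
            printf("%s%04zx: ", i ? "\n" : "", i); // zero-padded offset column
        printf("%02x ", data[i]);                  // zero-padded byte
    }
    printf("\n");
    return 0;
}
// 0000: 48 65 6c 6c 6f 2c 20 70
// 0008: 72 69 6e 74 66 21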
Floating Point
double pi = 3.14159265359;
printf("%.2f\n", pi);  // 3.14
printf("%.10f\n", pi); // 3.1415926536
printf("%e\n", pi);    // 3.141593e+00 (scientific notation)
Scientific notation is great when you're dealing with really big or really small numbers and don't want to count zeros like some kind of human calculator.
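One more that's easy to forget: %g picks between %f and %e for you based on the magnitude, and drops trailing zeros:

double big = 6.02e23, tiny = 0.000015, normal = 3.14;
printf("%g\n", big);    // 6.02e+23
printf("%g\n", tiny);   // 1.5e-05
printf("%g\n", normal); // 3.14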
Different Languages, Same Concept
Go's Extended Family
// Go gives you some extra goodies
fmt.Printf("Binary: %b\n", 42)          // 101010
fmt.Printf("Unicode: %U\n", 'A')        // U+0041
fmt.Printf("Quoted string: %q\n", "hi") // "hi"
fmt.Printf("Type: %T\n", 42)            // int
That %T one is super handy when you're debugging type issues and can't figure out what the hell Go thinks your variable is. (Go's type system can be... opinionated 😤)
Python's f-strings
value = 255
print(f"Hex: {value:x}")       # ff
print(f"Hex upper: {value:X}") # FF
print(f"Binary: {value:b}")    # 11111111
print(f"Octal: {value:o}")     # 377
print(f"Padded: {value:08x}")  # 000000ff
Python's f-strings are honestly way cleaner than printf. Fight me. (But also, printf is everywhere so learn it anyway)
The Unicode Rabbit Hole
Here's where shit gets weird. Ever wondered how your computer stores "🚀" vs "A"? Spoiler alert: it's more complicated than you think.
// ASCII character
char ascii = 'A';
printf("ASCII 'A': %d (0x%02x)\n", ascii, ascii); // 65 (0x41)

// Unicode is a whole different story
// 🚀 is U+1F680 in Unicode - needs multiple bytes in UTF-8!
UTF-8 encoding (the thing that makes the internet work):
- ASCII stuff (0-127): 1 byte
- European languages, Arabic, etc.: 2 bytes
- Chinese, Japanese, Korean: 3 bytes
- Emoji, weird symbols, everything else off the basic plane: 4 bytes
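You can actually watch this happen with %02x - a little sketch that dumps the raw UTF-8 bytes of a few strings (assuming your source file and terminal are UTF-8, which these days they almost certainly are):

#include <stdio.h>
#include <string.h>

// Dump every byte of a string in hex, plus the byte count.
static void dump_utf8(const char *s) {
    printf("\"%s\" ->", s);
    for (size_t i = 0; i < strlen(s); i++)
        printf(" %02x", (unsigned char)s[i]);
    printf("  (%zu bytes)\n", strlen(s));
}

int main(void) {
    dump_utf8("A");  // 41          (1 byte)
    dump_utf8("é");  // c3 a9       (2 bytes)
    dump_utf8("日"); // e6 97 a5    (3 bytes)
    dump_utf8("🚀"); // f0 9f 9a 80 (4 bytes)
    return 0;
}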
TIP
Random fact: UTF-8 was created by Rob Pike and Ken Thompson - the same guys who made Go. Pretty cool legacy.
Why This Actually Matters for Performance
Look, different number bases aren't just for showing off. They map to how computers actually work:
- Hexadecimal: Each digit = exactly 4 bits. Perfect for memory dumps.
- Octal: Each digit = exactly 3 bits. Compact for certain operations.
- Binary: What your CPU actually sees. No translation needed.
When you're doing low-level stuff, thinking in hex or binary can save you from making dumb mistakes. (Trust me, I've made plenty 🤦♂️)
// Bitwise operations make sense in binary/hex
uint8_t flags = 0b10110101; // Binary literal (GCC extension; standard only as of C23)
uint8_t mask = 0x0F;        // Hex literal
printf("Original: 0x%02x (%b)\n", flags, flags); // Wait...printf("Mask: 0x%02x (%b)\n", mask, mask); // This works?printf("Result: 0x%02x (%b)\n", flags & mask, flags & mask);
Plot twist: Remember when I said standard C doesn't have %b? Well, turns out I was wrong (kinda). %b was added to printf in C23, and newer toolchains (GCC with a recent glibc, for example) already support it. Older compilers and C libraries? You're on your own.
This is why testing on different systems is important, kids. What works on your machine might not work everywhere. (Learning this the hard way builds character, apparently 😅)
Start Using These Tomorrow
Next time you're debugging, try this:
- Use %x for memory addresses - way easier to read
- Use %o for file permissions - finally understand chmod
- Use %b for bit operations - see what's actually happening
- Print values in multiple bases - sometimes one just clicks
int mystery_value = get_mysterious_value();
printf("Value: %d (dec), 0x%x (hex), %o (oct)\n",
       mystery_value, mystery_value, mystery_value);
Do this once and you'll never go back to staring at random decimal numbers. (Well, mostly. Old habits die hard 🙃)
Bottom Line
Stop limiting yourself to %d. These aren't just academic bullshit - they're tools that'll actually make your debugging life easier. Plus, using hex makes you look like you know what you're doing in code reviews. (Fake it till you make it, right? 😏)
Your 3 AM debugging self will thank you later. (And probably curse you less often too)
Got any other printf tricks that saved your ass? Let me know. To the comments now. brrrrr...
If you're still here, leave a reaction bruh - takes nothing except a GitHub account.