The project now consists of two source files (utils.c and
main.c) and one header (utils.h). The build script
rebuilds both source files in parallel.
This example is a little more complex: it uses asynchronous
compilation. cbuild_cmd_run
is now called with one additional argument, .procs = &procs.
This style of call may look strange in C, because the language
does not support named arguments at its core, but cbuild_cmd_run
is not a function: it is a macro, and the underlying function takes
just two arguments, a command and a configuration structure. The macro
places every argument after the first inside a struct initializer, so
designated initializers can be used as named arguments. Returning to
the example’s code: the second argument contains a pointer to a
cbuild_proclist_t.
Then cbuild_procs_wait
is used as a synchronization point to wait for all jobs to finish.
Linking is done in the normal, synchronous way. The interactions here
are a lot more complex than one might think at first glance: these
function calls do not simply spawn a new process and append its PID to
an array of PIDs. Instead, cbuild_cmd_run
implements something that can be called a process pool. It can
take another argument, .async_threads = <int>, which
specifies the maximum number of processes that can run simultaneously.
If its value is 0, the implementation-default behavior is
used; for now, that is the number of CPU cores minus one. The function
then checks whether a new process can be spawned; if not, it blocks
until some process from the procs array exits. Only then is
the new process spawned and added to the list of processes. In this
example the behavior may seem pointless, but as the number of files
waiting for compilation grows, it minimizes scheduling penalties for
the compilation processes while keeping the OS responsive.