
RTPEngine project architecture


Overview

This is the main page describing the architecture of the RTPEngine project.


Processing of RTP/RTCP packets

An incoming RTP packet is initially received by the stream_fd, which directly links it to the correlated packet_stream.

Each packet_stream then links to an upper-level call_media — generally there are two packet_stream objects per call_media, one for RTP and one for RTCP. The call_media then links to a call_monologue, which corresponds to a participant of the call.

The handling of packets is implemented in media_socket.c and always starts with the stream_packet().

This operates on the originating stream_fd (the fd which received the packet) and on its linked packet_stream, then goes through the list of sinks, either rtp_sinks or rtcp_sinks (egress handling), and uses the contained sink_handler objects, which point to the destination packet_stream, see:

stream_fd -> packet_stream -> list of sinks -> sink_handler -> dst packet_stream

To summarize, RTP/RTCP packet forwarding then consists of:

  • Determining whether to use rtp_sinks or rtcp_sinks
  • Iterating this list, which is a list of sink_handler objects
  • Each sink handler has an output packet_stream and a streamhandler
  • Calling the stream handler's input function to decrypt, if needed (but see note [1])
  • Checking the RTP payload type and calling the codec handler, if there is one (e.g. for transcoding)
  • Calling the streamhandler's output function to encrypt, if needed
  • Sending the packet out to the output packet stream
  • Repeat

[1] In practice the stream handler's input function (step 4) is called only once, before going into the loop to send packets to their destinations.

The reason for that is a duplication of input functions across the stream handlers, which exists because:

  1. Legacy: previously each packet stream had only a single output (and this part hasn't been rewritten yet)
  2. Theoretically it would be possible to save the effort of re-encrypting in SRTP-to-SRTP forwarding if some of the outputs are plain RTP while other outputs are SRTP pass-through (but that part isn't implemented).
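
As an illustration of the forwarding loop described above, here is a minimal sketch. The types and helper functions (ex_sink_handler, ex_codec_transform, ex_send, ...) are heavily simplified stand-ins, not the real structures and functions from media_socket.c:

/* Sketch of the per-packet egress loop; illustrative stand-in types only. */
#include <stddef.h>

struct packet_stream;                     /* destination stream, opaque here */

struct ex_streamhandler {                 /* input = decrypt, output = encrypt */
    void (*in)(char *buf, size_t *len);
    void (*out)(char *buf, size_t *len);
};

struct ex_sink_handler {
    struct packet_stream *sink;           /* output packet_stream */
    const struct ex_streamhandler *sh;
};

/* hypothetical helpers standing in for codec handling and socket output */
void ex_codec_transform(struct packet_stream *dst, char *buf, size_t *len);
void ex_send(struct packet_stream *dst, const char *buf, size_t len);

/* forward one received packet to every sink in the chosen list */
void ex_forward_packet(struct ex_sink_handler *sinks, size_t n_sinks,
                       char *buf, size_t len)
{
    /* the input (decrypt) function runs only once, before the loop (note [1]) */
    if (n_sinks && sinks[0].sh->in)
        sinks[0].sh->in(buf, &len);

    for (size_t i = 0; i < n_sinks; i++) {
        ex_codec_transform(sinks[i].sink, buf, &len);   /* e.g. transcoding */
        if (sinks[i].sh->out)
            sinks[i].sh->out(buf, &len);                /* re-encrypt if needed */
        ex_send(sinks[i].sink, buf, len);
    }
}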

As for setting up the kernel's forwarding chain: It's basically the same process of going through the list of sinks and building up the structures for the kernel module, with the sink handlers taking up the additional role of filling in the structures needed for decryption and encryption.

Other important notes:

Incoming packets (ingress handling):
sfd->socket.local: the local IP/port on which the packet arrived
sfd->stream->endpoint: adjusted/learned IP/port from where the packet was sent
sfd->stream->advertised_endpoint: the unadjusted IP/port from where the packet was sent. These are the values present in the SDP

Outgoing packets (egress handling):
sfd->stream->rtp_sink->endpoint: the destination IP/port
sfd->stream->selected_sfd->socket.local: the local source IP/port for the outgoing packet

Handling behind NAT:
If rtpengine runs behind NAT and the local addresses are configured with different advertised endpoints,
the SDP will not contain the address from ...->socket.local, but rather the one from sfd->local_intf->spec->address.advertised (of type sockaddr_t).
The port will be the same.

Binding to sockets:
Sockets aren't indiscriminately bound to INADDR_ANY (or rather in6addr_any),
but instead are always bound to their respective local interface address and with the correct address family.

Side effects of this are that in multi-homed environments, multiple sockets must be opened
(one per interface address and family), which must be taken into account when considering RLIMIT_NOFILE values.
As a benefit, this change allows rtpengine to utilize the full UDP port space per interface address, instead of just one port space per machine.
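
As a sketch of what this binding behaviour looks like at the socket level, the following uses plain BSD sockets rather than rtpengine's own socket wrappers; open_media_port_v4() is a made-up helper name:

/* Bind to one specific local interface address instead of INADDR_ANY. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int open_media_port_v4(const char *local_ip, uint16_t port)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);     /* family matches the address */
    if (fd < 0)
        return -1;

    struct sockaddr_in sin;
    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_port = htons(port);
    inet_pton(AF_INET, local_ip, &sin.sin_addr); /* specific address, not the wildcard */

    if (bind(fd, (struct sockaddr *) &sin, sizeof(sin)) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}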

Marking the packet stream

The packet_stream itself (or the upper-level call_media) can be marked as:

  • SRTP endpoint
  • ICE endpoint
  • send/receive-only

This is done through the transport_protocol element and various bit flags.
Currently existing transport_protocols:

  • RTP AVP
  • RTP SAVP
  • RTP AVPF
  • RTP SAVPF
  • UDP TLS RTP SAVP
  • UDP TLS RTP SAVPF
  • UDPTL
  • RTP SAVP OSRTP
  • RTP SAVPF OSRTP
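
Conceptually, the marking can be pictured like this (purely illustrative enum and flag names; the real definitions in the codebase differ):

/* Illustrative only: a transport protocol value plus bit flags per stream. */
enum ex_transport_protocol {
    EX_PROTO_RTP_AVP,
    EX_PROTO_RTP_SAVP,
    EX_PROTO_RTP_AVPF,
    EX_PROTO_RTP_SAVPF,
    /* ... */
};

#define EX_FLAG_ICE        (1u << 0)   /* ICE endpoint */
#define EX_FLAG_SEND_ONLY  (1u << 1)
#define EX_FLAG_RECV_ONLY  (1u << 2)

struct ex_stream_marking {
    enum ex_transport_protocol protocol;   /* SRTP implied by the SAVP variants */
    unsigned int flags;                    /* bit flags as above */
};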

For more information about RTP packet processing, see the RTP packets processing subsection of the “Mutexes, locking, reference counting” section.


Call monologue and handling of call subscriptions

Each call_monologue (call participant) contains a list of subscribers and a list of subscriptions,
which reference other call_monologue objects. These lists are mutual.
A regular A/B call has two call_monologue objects, each subscribed to the other.
So one call_monologue represents half of a dialog.

The subscribers of a given monologue are the other monologues that will receive the media sent by that source monologue.

The list of subscriptions is a list of call_subscription objects which contain flags and attributes.

NOTE: Flags and attributes can be used, for example, to mark a subscription as an egress subscription.
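
A simplified sketch of this mutual relationship follows; the struct and field names are illustrative stand-ins (the real structures live in the codebase and hold more state), and the GQueue lists are assumed to have been initialized with g_queue_init():

/* Illustrative sketch of the mutual subscriber/subscription lists. */
#include <glib.h>

struct ex_monologue;

struct ex_subscription {
    struct ex_monologue *monologue;   /* the other party */
    unsigned int flags;               /* e.g. an "egress" marking */
};

struct ex_monologue {
    GQueue subscriptions;             /* whose media this monologue receives */
    GQueue subscribers;               /* who receives this monologue's media */
};

/* A subscribes to B: B lands in A's subscriptions, A in B's subscribers. */
void ex_subscribe(struct ex_monologue *a, struct ex_monologue *b)
{
    struct ex_subscription *sub = g_new0(struct ex_subscription, 1);
    sub->monologue = b;
    g_queue_push_tail(&a->subscriptions, sub);

    struct ex_subscription *rev = g_new0(struct ex_subscription, 1);
    rev->monologue = a;
    g_queue_push_tail(&b->subscribers, rev);
}

For a regular A/B call this happens twice, once in each direction.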

Picture 1:
The diagram above shows how monologues, and hence their subscriptions, are correlated.

The handling of call subscriptions is implemented in call.c.
One of the most important functions in this regard is __add_subscription().

Flags and attributes are transferred from the subscription (call_subscription) to the correlated sink handlers (the list of sink_handler objects) by __init_streams(), via __add_sink_handler().

Signaling events

During signalling events (e.g. offer/answer handshake), the list of subscriptions for each call_monologue is used to create the list of rtp_sink and rtcp_sink sinks given in each packet_stream. Each entry in these lists is a sink_handler object, which again contains flags and attributes.

NOTE: A ‘sink’ is literally a sink, as in a drain: an output or destination for a media/packet stream.

During the execution, the function __update_init_subscribers() goes through the list of subscribers,
through each media section and each packet stream, and sets up the list of sinks for each packet stream, via __init_streams().

This populates the list of rtp_sinks and rtcp_sinks for each packet stream.

Flags and attributes from the call_subscription objects are copied into the corresponding sink_handler object(s).
During actual packet handling and forwarding, only the sink_handler objects (and the packet_stream objects they relate to) are used, not the call_subscription objects.

The processing of a signaling event (offer/answer) by RTPEngine looks as follows,
in terms of the functions involved (for one call_monologue):

  • signaling event begins (offer / answer)
  • monologue_offer_answer() -> __get_endpoint_map() -> stream_fd_new()
  • in stream_fd_new() a new poller item (poller_item) is initialized,
    and its pi.readable member is set to stream_fd_readable():
pi.readable = stream_fd_readable;
  • in the main.h header there is a globally defined poller, to which the newly initialized poller item is added, like this:
struct poller *p = rtpe_poller;
poller_add_item(p, &pi);
  • the poller will later be used to destroy this reference-counted object
  • later, stream_fd_readable() in its turn triggers stream_packet() for RTP/RTCP packet processing
  • monologue_offer_answer() then continues processing and calls __update_init_subscribers() to go through the list of subscribers (through their related media and each packet stream) and, using __init_streams(), sets up the sinks for each stream.

Picture 2:

Each sink_handler points to the correlated egress packet_stream and also to a streamhandler.
The streamhandler, in its turn, is responsible for handling encryption, primarily based on the transport protocol used.

There's a matrix (a simple lookup table) of possible stream handlers in media_socket.c, called __sh_matrix.
Stream handlers have an input component, which would do decryption, if needed, as well as an output component for encryption.
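
The matrix can be thought of as a two-dimensional lookup keyed by source and destination transport protocol. A hedged sketch follows; the names are illustrative and the real table is __sh_matrix in media_socket.c:

/* Illustrative protocol-indexed lookup of stream handlers. */
#include <stddef.h>

enum ex_proto { EX_RTP_AVP, EX_RTP_SAVP, EX_NUM_PROTOS };

struct ex_streamhandler {
    void (*in)(char *buf, size_t *len);    /* decrypt, if needed */
    void (*out)(char *buf, size_t *len);   /* encrypt, if needed */
};

/* one handler per (source protocol, destination protocol) pair */
static const struct ex_streamhandler *ex_sh_matrix[EX_NUM_PROTOS][EX_NUM_PROTOS];

const struct ex_streamhandler *ex_lookup_handler(enum ex_proto src, enum ex_proto dst)
{
    return ex_sh_matrix[src][dst];
}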

For more information regarding signaling events, see the Signaling events subsection of the “Mutexes, locking, reference counting” section.


Mutexes, locking, reference counting

Struct call

The main parent structure of all call-related objects (packet streams, media sections, sockets and ports, codec handlers, etc) is the struct call. Almost all other structures and objects are nested underneath a call.

Picture 3:

The call structure contains a master read-write lock (rwlock_t master_lock) that protects the entire call and all contained objects.

With the exception of a few read-only fields, all fields of call and any nested sub-object must only be accessed with the master_lock held as a read lock, and must only be modified with the master_lock held as a write lock.

The rule of thumb therefore is: during signalling events acquire a write-lock, and during packet handling acquire a read-lock.
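
A minimal sketch of this rule, using a plain pthread rwlock in place of the project's own rwlock_t wrapper and an illustrative struct name:

/* Write lock around signalling, read lock around packet handling. */
#include <pthread.h>

struct ex_call {
    pthread_rwlock_t master_lock;   /* assumed initialized with pthread_rwlock_init() */
    /* ... all other call state ... */
};

void ex_signalling_event(struct ex_call *c)
{
    pthread_rwlock_wrlock(&c->master_lock);   /* signalling modifies the call */
    /* ... rebuild media sections, sinks, stream handlers ... */
    pthread_rwlock_unlock(&c->master_lock);
}

void ex_handle_packet(struct ex_call *c)
{
    pthread_rwlock_rdlock(&c->master_lock);   /* the packet path only reads */
    /* ... forward the packet ... */
    pthread_rwlock_unlock(&c->master_lock);
}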

Object/struct hierarchy and ownership

The logical object hierarchy is call -> call_monologue -> call_media -> packet_stream -> stream_fd.
Each parent object can contain multiple child objects (e.g. multiple call_media per call_monologue), and
each child object contains a back pointer to its parent object. Some objects contain additional back pointers for convenience, e.g. from call_media directly to call. These are not reference counted, as the child objects are entirely owned by the call,
with the exception of stream_fd objects.

The parent call object contains one list (as GQueue) for each kind of child object.
These lists exist for convenience, but most importantly as primary containers. Every child object owned by the call is added to its respective list exactly once, and these lists are what is used to free and release the child objects during call teardown.
Additionally most child objects are given a unique numeric ID (unsigned int unique_id) and it’s the position in the call’s GQueue that determines the value of this ID (which is unique per call and starts at zero).
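
In other words (an illustrative sketch, not the exact code):

/* The position in the call's GQueue at insertion time becomes the unique_id. */
#include <glib.h>

struct ex_child {
    unsigned int unique_id;
};

void ex_add_child(GQueue *container, struct ex_child *c)
{
    c->unique_id = container->length;   /* 0 for the first object, 1 for the next, ... */
    g_queue_push_tail(container, c);
}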

Reference-counted objects

Call objects are reference-counted through the struct obj contained in the call.

The struct obj member must always be the first member in a struct. Each obj is created with a cleanup handler (see obj_alloc()) and this handler is executed whenever the reference count drops to zero. References are acquired and released through obj_get() and obj_put() (plus some other wrapper functions).
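
A miniature of this pattern, for illustration only (the real implementation lives in the project's obj code and differs in detail):

/* A reference count plus cleanup handler, embedded as the FIRST member. */
#include <glib.h>

struct ex_obj {
    gint refcount;
    void (*cleanup)(void *self);
};

struct ex_refcounted {
    struct ex_obj obj;    /* must be the first member */
    /* ... the actual object's state (e.g. the call) ... */
};

void *ex_obj_get(void *o)
{
    g_atomic_int_inc(&((struct ex_obj *) o)->refcount);
    return o;
}

void ex_obj_put(void *o)
{
    struct ex_obj *obj = o;
    if (g_atomic_int_dec_and_test(&obj->refcount))
        obj->cleanup(o);    /* last reference gone: run the cleanup handler */
}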

The code base uses the “entry point” approach for references, meaning that each entry point into the code (coming from an external trigger, such as a received command or packet) must hold a reference to some reference-counted object.

Signaling events

The main entry point into call objects for signalling events is the call-ID:
concretely, it is the global hash table rtpe_callhash (protected by rtpe_callhash_lock),
which uses call-IDs as keys and call objects as values, while holding a reference to each contained call. The function call_get() and its sibling functions look up a call via its call-ID and return a new reference to the call object (i.e. with the reference count increased by one).

Therefore, after call_get(), the code must call obj_put() on the call once it is done operating on the object.
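
The resulting usage pattern looks roughly like this (a sketch only: the signature of call_get() is simplified, and error handling, locking and the actual processing are omitted):

/* Look up the call by call-ID, operate on it, then drop the reference. */
void ex_handle_command(const str *callid)
{
    struct call *c = call_get(callid);   /* returns a new reference, or NULL */
    if (!c)
        return;

    /* ... process the signalling command on the call ... */

    obj_put(c);                          /* release the reference taken by call_get() */
}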

RTP packets processing

Another entry point into the code is an RTP packet received on a media port. Media ports are contained in a struct stream_fd, and because this is also an entry-point object, it is reference counted as well (it contains a struct obj).

NOTE: The object responsible for maintaining and holding the entry-point references is the poller.

Each stream_fd object also holds a reference to the call it belongs to
(which in turn holds references to all stream_fd objects it contains, making these circular references).
Therefore stream_fd objects are only released when they’re removed from the poller and also removed from the call (and conversely, the call is only released when it has been dissociated from all stream_fd objects, which happens during call teardown).

Non-reference-counted objects

Other structs and objects nested underneath a call (e.g. call_media) are not reference counted as they're not entry points into the code, and accordingly also don’t hold references to the call even though they contain pointers to it.
These are completely owned by the call and are therefore released when the call is destroyed.

Packet_stream mutexes

Each packet_stream contains two additional mutexes (in_lock and out_lock).
These are only valid if the corresponding call’s master_lock is held at least as a read-lock.

The in_lock protects fields relevant to packet reception on that stream,
while the out_lock protects fields relevant to packet egress. This allows packet handling on multiple ports and streams belonging to the same call to happen at the same time.


Kernel forwarding

The kernel forwarding of RTP/RTCP packets is handled in xt_RTPENGINE.c / xt_RTPENGINE.h.

The linkage between user space and the kernel module is in kernelize_one() (media_socket.c),
which populates the struct that is passed to the kernel module.

To be continued..


Call monologue and Tag concept

The concept of tags is taken directly from the SIP protocol.
Each call_monologue has a tag member, which can be empty or filled.

The tag value is taken from:

  • for the caller: the From-tag (so it is always filled)
  • for the callee: the To-tag (which remains empty until the first reply carrying a tag is received)

Things get slightly more complicated when there is a branched call.
In this situation, the same offer is sent to multiple receivers, possibly with different options.
At this point, multiple monologues are created in RTPEngine, all of them without a known tag value
(essentially without a To-tag).

They are all distinguished by the via-branch value.
When the answer comes through, the via-branch is used to match the monologue,
and the tag value (the str tag member) is then assigned to that particular call_monologue.

In a simplified view of things, we have the following:

  • A (caller) = monologue A = From-tag (session initiation)
  • B (callee) = monologue B = To-tag (183 / 200 OK)

NOTE: Nameless branches are branches (call_monologue objects) that were created from an offer but that haven't seen an answer yet.
Once an answer is seen, the tag becomes known.

NOTE: From-tag and To-tag strictly correspond to the directionality of the message, not to the actual SIP headers.
In other words, the From-tag corresponds to the monologue sending this particular message, even if the tag is actually taken from the To header’s tag of the SIP message, as it would be in a 200 OK for example.


Flags and options parsing

Flags

There are a few helper functions that iterate the list of flags; they use callback functions.
The top-level one is call_ng_main_flags(), which is used as a callback from call_ng_dict_iter().

call_ng_main_flags() then uses the call_ng_flags_list() helper to iterate through the contained lists, and uses different callbacks depending on the entry.

For example, for the flags:[] entry, it uses call_ng_flags_list(), which means call_ng_flags_flags() is called once for each element contained in the list.

Let’s assume that the given flag is SDES-no-NULL_HMAC_SHA1_32.
Consider the picture below:
Picture 4:

NOTE: Some of these callback functions have two uses. For example, ng_sdes_option() is used as a callback for the "SDES":[] list (via call_ng_flags_list()), and also for flags found in the "flags":[] list that start with the "SDES-" prefix, via call_ng_flags_prefix().
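
The prefix dispatch can be illustrated with a small sketch; the function and type names here are stand-ins, not the real call_ng_flags_* helpers:

/* Route a flag such as "SDES-no-NULL_HMAC_SHA1_32" to a handler with the
 * "SDES-" prefix stripped off. */
#include <stdbool.h>
#include <string.h>

typedef void (*ex_flag_cb)(const char *value, void *ctx);

bool ex_dispatch_prefix(const char *flag, const char *prefix,
                        ex_flag_cb cb, void *ctx)
{
    size_t plen = strlen(prefix);
    if (strncmp(flag, prefix, plen) != 0)
        return false;               /* not our prefix: let other handlers try */
    cb(flag + plen, ctx);           /* hand over "no-NULL_HMAC_SHA1_32" etc. */
    return true;
}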


Tests

Adding new tests is a required procedure that covers fixes, changes and features being added to RTPEngine.
They make sure that:

  • first, newly added changes reflect their intention and hence give the expected results
  • second, this expected behavior (within this scope) won’t be broken in the future by newer changes; and if it is, we will notice that and make sure to fix it.

The main folder, as usual, is t/.
It contains a number of files written in Perl, Python and C, dedicated to different kinds of tests.

NOTE: They are run with make check and during packaging. However, not everything in that directory is actually run as part of make check (some files are for manual testing).

These tests actually spawn a real RTPengine process in a fake network environment (see tests-preload.c and its invocation in the makefile, and also the command-line options given at the start of each of these scripts) and then facilitate real SDP offer/answer exchanges and test real RTP forwarding against it.

This is the closest possible way to simulate real SIP calls. The code supporting these tests is written using a few Perl libraries stored in the perl/ folder of the project, which are able to do the signalling, SRTP, ICE, etc.

There are also some unit tests (e.g. aead-aes-crypt or test-transcode), but the most comprehensive and most important test scripts are the ones called auto-daemon-tests.

Let’s take auto-daemon-tests.pl as an example.
It contains a huge number of basic tests covering mostly general things that RTPEngine does.

Let’s have a look at one of the tests, in this case ‘SDP version force increase’:

new_call;

# there is no 'monologue->last_out_sdp', but the version still gets increased
offer('SDP version force increase', { replace => ['force-increment-sdp-ver'] }, <<SDP);
v=0
o=- 1545997027 1 IN IP4 198.51.100.1
s=tester
t=0 0
m=audio 2000 RTP/AVP 0
c=IN IP4 198.51.100.1
----------------------------
v=0
o=- 1545997027 2 IN IP4 198.51.100.1
s=tester
t=0 0
m=audio PORT RTP/AVP 0
c=IN IP4 203.0.113.1
a=rtpmap:0 PCMU/8000
a=sendrecv
a=rtcp:PORT
SDP

# there is 'monologue->last_out_sdp' and it's equal to the newly given SDP,
# but the version still gets increased
offer('SDP version force increase', { replace => ['force-increment-sdp-ver'] }, <<SDP);
v=0
o=- 1545997027 2 IN IP4 198.51.100.1
s=tester
t=0 0
m=audio 2000 RTP/AVP 0
c=IN IP4 198.51.100.1
----------------------------
v=0
o=- 1545997027 3 IN IP4 198.51.100.1
s=tester
t=0 0
m=audio PORT RTP/AVP 0
c=IN IP4 203.0.113.1
a=rtpmap:0 PCMU/8000
a=sendrecv
a=rtcp:PORT
SDP

# there is 'monologue->last_out_sdp' and it's not equal to the newly given SDP,
# and the version gets increased, as if that would be increased with 'sdp-version'.
offer('SDP version force increase', { replace => ['force-increment-sdp-ver'] }, <<SDP);
v=0
o=- 1545997027 3 IN IP4 198.51.100.1
s=tester
t=0 0
m=audio 2002 RTP/AVP 0
c=IN IP4 198.51.100.1
----------------------------
v=0
o=- 1545997027 4 IN IP4 198.51.100.1
s=tester
t=0 0
m=audio PORT RTP/AVP 0
c=IN IP4 203.0.113.1
a=rtpmap:0 PCMU/8000
a=sendrecv
a=rtcp:PORT
SDP

It checks whether, under all possible conditions, the flag force-increment-sdp-ver (given when calling rtpengine_offer()) increases the session version of the SDP.

The syntax here is:

  • new_call; - starts a new test procedure, effectively a new call
  • 'SDP version force increase' - is the test’s name
  • replace => ['force-increment-sdp-ver'] - is the given flag (it emulates adding this flag when calling rtpengine_offer())
  • the first instance of the SDP (before the ---------------------------- separator) is what we hand to RTPEngine
  • the second instance after that is what we expect RTPEngine to generate as a result
  • SDP - marks the end of the given SDPs for this sub-test (it terminates the heredoc)

NOTE: Every new test (new_call) can include many sub-tests, so related checks can be wrapped into one big scope, such as the tests related to the SDP session version.

Generally speaking, if a new feature is added to RTPEngine that can potentially affect its behavior (even under some really specific circumstances), it’s important to cover this change with tests, for example by emulating a call with the newly added flag and checking that the expected results are produced.

NOTE: make daemon-tests-main inside of t/ can be used to run the tests manually.

The unit tests can be executed individually in the normal way, but the auto-daemon-tests need special instrumentation. Either use make daemon-tests-X from within t/, or, if there is a need to execute the test script manually and separately from RTPengine:

  • Make sure tests-preload.so exists (make -C t tests-preload.so)
  • In one shell: LD_PRELOAD=t/tests-preload.so daemon/rtpengine --config-file=none -t -1 -i 203.0.113.1 ... (CLI options taken from respective test script)
  • In another shell: LD_PRELOAD=t/tests-preload.so RTPE_TEST_NO_LAUNCH=1 perl -Iperl t/auto-daemon-tests.pl

This even works with RTPengine running under a debugger or valgrind.

Another included set of tests runs all of the make check tests under libasan. The top-level make asan-check target exists for that purpose and requires an initially clean source directory to execute properly.


Debug memory leaks

A subject that requires special mention here is catching memory leaks.

NOTE: There is a nice and elaborate video on this topic: https://www.youtube.com/watch?v=vVbWOKpIjjo
It shows, step by step, what needs to be done to debug memory leaks properly;
seeing the whole procedure visually can make it clearer.

Almost every time a new feature is introduced, it is covered with automated tests in the repository. The project itself is also covered internally with ASAN tests in order to spot memory leaks in time, so if a new feature introduces bad memory management, it is likely to be noticed.

But in order to make sure that no memory leaks were indeed introduced (by a new feature, bug fix, etc.), it’s possible to run the tests manually and see whether the binary allocated more memory during the run than it eventually freed.

Valgrind

For this, valgrind can be used.
It must be installed before going further.

First, the binary must be compiled:

make

The binary will be stored in the daemon/ folder. Alternatively the packaged binary can be used, so compiling the binary is not strictly necessary.

After the compilation is finished, tests-preload.so must also be compiled; its sources are in the t/ folder:

cd t/
make tests-preload.so

Then the command to launch RTPEngine under valgrind is as follows:

LD_PRELOAD=../t/tests-preload.so G_SLICE=always-malloc valgrind --leak-check=full ./rtpengine --config-file=none -t -1 -i 203.0.113.1 -i 2001:db8:4321::1 -n 2223 -c 12345 -f -L 7 -E -u 2222 --silence-detect=1

The startup options are simply copied from auto-daemon-tests.pl.
In this case the binary is launched from the daemon/ folder.

It is important to always set the G_SLICE=always-malloc environment variable: the project makes heavy use of GLib (and its GSlice allocator), which valgrind doesn’t know how to deal with. With this environment variable, GLib is told to use the system allocator, so valgrind is able to track the memory.

Other than that, the valgrind option --leak-check=full is also quite important to have, since it shows where exactly a memory leak is.

At this point RTPEngine is up and running. It’s time to launch the tests themselves in a separate terminal (the tests prepared to cover a new feature, or just the common tests if there is no new feature and it was a bug fix).

For that to work, the option telling the tests that RTPEngine is already running must be given:

RTPE_TEST_NO_LAUNCH=1 LD_PRELOAD=./tests-preload.so perl -I../perl auto-daemon-tests.pl

This is launched from the t/ folder. RTPE_TEST_NO_LAUNCH=1 tells the auto tests that RTPEngine is already running and there is no need to launch another instance.

NOTE: Alternatively, it’s possible to exercise RTPEngine in any other way that sends commands to it, to let it do some work in the scope of interest.

After the tests are finished, it’s time to collect the report from valgrind.
Press Ctrl+C in the terminal where the binary was launched; if there are no issues, the report should look something like this:

==978252== HEAP SUMMARY:
==978252==     in use at exit: 58,918 bytes in 335 blocks
==978252==   total heap usage: 23,833 allocs, 23,498 frees, 3,749,442 bytes allocated
==978252== 
==978252== LEAK SUMMARY:
==978252==    definitely lost: 0 bytes in 0 blocks
==978252==    indirectly lost: 0 bytes in 0 blocks
==978252==      possibly lost: 0 bytes in 0 blocks
==978252==    still reachable: 56,902 bytes in 314 blocks
==978252==         suppressed: 0 bytes in 0 blocks
==978252== Reachable blocks (those to which a pointer was found) are not shown.
==978252== To see them, rerun with: --leak-check=full --show-leak-kinds=all
==978252== 
==978252== Use --track-origins=yes to see where uninitialised values come from
==978252== For lists of detected and suppressed errors, rerun with: -s
==978252== ERROR SUMMARY: 9 errors from 1 contexts (suppressed: 0 from 0)

NOTE: The downside of using valgrind is that processing takes perceptibly more time; in other words, everything runs slower than without it. But since this setup is not meant for production usage, that’s quite alright (for example, for memory leak tests).

NOTE: Another important thing to remember when running under valgrind is to make sure that the limit on open file descriptors is large enough. Unfortunately, valgrind doesn’t take care of that on its own, so it has to be set explicitly with ulimit -n. Usually a value of 30000 is enough, so: ulimit -n 30000

Running one particular test under valgrind:
If there is a need to run only one of the auto-daemon-tests under valgrind, it’s possible to do so simply by editing auto-daemon-tests.pl (or whichever test file is targeted).

Running a single test this way gives several advantages:

  • an RTPEngine log file is produced at the end (usually in the /tmp folder)
  • less time is spent waiting for the test to finish
  • for debugging memory issues, one can see the exact amount of bytes left unfreed for this particular test (no other tests contribute unfreed memory here), which makes it simpler to find the place triggering the memory leak.

For that to work, one just needs to copy-paste that particular test to the top of auto-daemon-tests.pl. Since this file is the main auto tests file, the test will then run first; the goal is to run only this particular test and stop the test process as quickly as possible, which is why it is placed at the top.

Then, after the definition of the test, this line just needs to be placed:

done_testing;NGCP::Rtpengine::AutoTest::terminate('f00');exit;

This tells the auto tests to stop running and to generate a log file at the end. Furthermore, when running this way, it’s even possible to get a coredump. The folder for storing it is selected according to the defaults of the environment where RTPEngine was run; for example, on Ubuntu 22.04 coredumps are stored by default in /var/lib/apport/coredump.

Address Sanitizer

If the performance penalty introduced by valgrind is not acceptable, the Address Sanitizer (libasan) can be used as an alternative.

Working with it requires a special compilation, so it’s not possible to use a packaged binary (particular flags need to be set during compilation).

Apart from that, it can be a bit tricky to run, depending on the distribution used. Older distributions will likely not work, because a relatively recent GCC or Clang version is required.

There are various things that can be done with the help of libasan, so the compilation will differ depending on the needs.

It’s worth mentioning that the build environment must be clean; this is a must.
So make sure to clean it beforehand:

git clean -fxd

One particular thing worth mentioning is the make target dedicated to the asan checks (see the Makefile in the project’s root folder):

asan-check:
        DO_ASAN_FLAGS=1 $(MAKE) check

This target is meant to run the built-in tests. But it can also simply be used as an example of how to create a build with libasan included; for that, one just needs to set DO_ASAN_FLAGS=1 while compiling:

DO_ASAN_FLAGS=1 make

which will produce a binary with libasan included.
After the compilation is finished, this binary can be used as usual, without any need for valgrind.

NOTE: Remember, the valgrind approach and the libasan approach are mutually exclusive.

There are a few run-time flags that need to be exported to the environment before proceeding:

export ASAN_OPTIONS=verify_asan_link_order=0
export UBSAN_OPTIONS=print_stacktrace=1
export G_SLICE=always-malloc

Now it’s time to run the binary again, but this time without valgrind, like so:

LD_PRELOAD=../t/tests-preload.so ./rtpengine --config-file=none -t -1 -i 203.0.113.1 -i 2001:db8:4321::1 -n 2223 -c 12345 -f -L 7 -E -u 2222 --silence-detect=1 

Of course, debugging with the Address Sanitizer is not as comprehensive as a run under valgrind (it doesn’t detect as many things as valgrind does), but it’s definitely faster. After running a certain amount of tests and terminating the binary with Ctrl+C, a report is produced telling whether or not some of the allocated bytes remained unfreed.

Now, getting back to the asan-check build target, let’s run this:

git clean -fxd
make asan-check

It runs a compilation with libasan included and then runs all the built-in unit tests using the resulting binary. If a memory leak can be captured by one of those unit tests, it will be reported as a failed (NOK) test.

As a reminder, libasan will not tell the exact place in the code that leaves the memory unfreed. If a particular test reproduces the issue, it’s possible to simply re-run that very test under valgrind to get detailed information on it; see the previous section for how this can be done.


Glossary

  • tag - a monologue-related tag (its value is taken from either the From-tag or the To-tag)
  • monologue - a call_monologue, i.e. one half of a dialog