nextepc from an Osmocom point of view
=====================================
:author: Harald Welte <laforge@gnumonks.org>
:copyright: 2019 by Harald Welte (License: CC-BY-SA)
:backend: slidy
:max-width: 45em
== nextepc intro
Please see the presentation by nextepc developer Sukchan Lee at OsmoDevCon 2019.
This talk takes a "behind the scenes" look at the nextepc codebase through the eyes of an Osmocom
developer.
The goal is to understand the high-level software architecture, and whether there is any chance of sharing
code/infrastructure between the two projects.
== linked lists
* `lib/core/include/core_list.h`
* doubly-linked lists like our `linuxlist.h`
* not thread safe
* `list_insert_sorted()` has no libosmo* equivalent (see the sketch below)
** only user is timer.c, which itself appears to be unused?
=> looks very much compatible with our `linuxlist.h`
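
As a rough illustration of what a libosmo* counterpart could look like, here is a minimal sorted-insert sketch on top of `linuxlist.h`. The item struct and the ordering key are hypothetical, not existing libosmocore API.

[source,c]
----
#include <osmocom/core/linuxlist.h>

/* hypothetical example item; only 'key' is used for ordering */
struct sorted_item {
	struct llist_head entry;
	int key;
};

/* insert 'item' so that the list stays sorted by ascending key */
static void example_list_insert_sorted(struct sorted_item *item,
				       struct llist_head *head)
{
	struct sorted_item *cur;

	llist_for_each_entry(cur, head, entry) {
		if (cur->key > item->key) {
			/* llist_add_tail() inserts before 'cur' */
			llist_add_tail(&item->entry, &cur->entry);
			return;
		}
	}
	/* no larger element found: append at the end of the list */
	llist_add_tail(&item->entry, head);
}
----
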
== logging
* static/global number of _log targets_
** stdout, console, syslog, network, file
** each target has independent log level
* message types (defines formatting)
** RAW, TRACE, LOG, ASSERT
* non-RAW logging happens via snprintf into an 8k-sized stack buffer
=> looks very much compatible with libosmocore logging (see sketch below)
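
For comparison, a minimal sketch of the libosmocore side: per-category log levels plus independently configured log targets. The category name/number here is made up for illustration.

[source,c]
----
#include <osmocom/core/logging.h>
#include <osmocom/core/application.h>
#include <osmocom/core/utils.h>

enum { DMAIN };	/* illustrative logging category */

static const struct log_info_cat categories[] = {
	[DMAIN] = {
		.name = "DMAIN",
		.description = "main program",
		.enabled = 1,
		.loglevel = LOGL_INFO,
	},
};

static const struct log_info log_info = {
	.cat = categories,
	.num_cat = ARRAY_SIZE(categories),
};

int main(void)
{
	/* sets up a default stderr log target */
	osmo_init_logging2(NULL, &log_info);
	LOGP(DMAIN, LOGL_NOTICE, "logging is up\n");
	return 0;
}
----
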
== FSM
* nextepc FSM abstraction in `lib/core/src/fsm.c`
* `fsm_init()`, `fsm_dispatch()` and `fsm_final()` are the only API functions
* states are expressed by switching to a different function pointer for the event handler function
* rather simplistic when compared to `osmo_fsm` (see the skeleton below)
** no onenter/onleave
** no constraints on permitted events / state transitions
** no integrated logging
** no introspection (like VTY or CTRL)
** no concept of FSM classes / instances
** no FSM hierarchy
** no extensive logging by FSM infrastructure
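
To illustrate the difference, a rough `osmo_fsm` skeleton with onenter callbacks, permitted-event/state masks and integrated logging. All state/event names here are invented for illustration.

[source,c]
----
#include <osmocom/core/fsm.h>
#include <osmocom/core/logging.h>
#include <osmocom/core/utils.h>

#define S(x)	(1 << (x))	/* convenience macro, as commonly used by osmo_fsm users */

enum example_states { ST_IDLE, ST_ACTIVE };
enum example_events { EV_START, EV_STOP };

static void st_idle(struct osmo_fsm_inst *fi, uint32_t event, void *data)
{
	if (event == EV_START)
		osmo_fsm_inst_state_chg(fi, ST_ACTIVE, 0, 0);
}

static void st_active_onenter(struct osmo_fsm_inst *fi, uint32_t prev_state)
{
	LOGPFSML(fi, LOGL_INFO, "became active\n");	/* integrated logging */
}

static void st_active(struct osmo_fsm_inst *fi, uint32_t event, void *data)
{
	if (event == EV_STOP)
		osmo_fsm_inst_state_chg(fi, ST_IDLE, 0, 0);
}

static const struct osmo_fsm_state example_fsm_states[] = {
	[ST_IDLE] = {
		.name = "IDLE",
		.in_event_mask = S(EV_START),	/* permitted events */
		.out_state_mask = S(ST_ACTIVE),	/* permitted transitions */
		.action = st_idle,
	},
	[ST_ACTIVE] = {
		.name = "ACTIVE",
		.in_event_mask = S(EV_STOP),
		.out_state_mask = S(ST_IDLE),
		.onenter = st_active_onenter,
		.action = st_active,
	},
};

static struct osmo_fsm example_fsm = {
	.name = "example_fsm",
	.states = example_fsm_states,
	.num_states = ARRAY_SIZE(example_fsm_states),
	.log_subsys = DLGLOBAL,
};
/* register once with osmo_fsm_register(&example_fsm), then allocate
 * instances via osmo_fsm_inst_alloc() and drive them via osmo_fsm_inst_dispatch() */
----
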
== FSM usage
* The only place where many FSM instances are used in parallel is inside the MME `context.c` (EMM and ESM).
* There don't appear to be multiple threads in the EMM and ESM FSMs, AFAICT.
* overall surprisingly low number of FSMs
=> migration to `osmo_fsm` seems feasible
== Events
* `lib/core/src/event.c`
** asynchronous event delivery based on message queue
** `event_{create,delete,send,recv,timedrecv}()`
* timed events use a dynamically-allocated timer to send an event via the queue at timer expiration
** `event_timer_create()`, `timer_create()`, `periodic_timer_create()`
* SGW and PGW combine event queue with FSM, where main thread receives events from queue to dispatch them to FSM
=> nice idea. libosmo* signals and FSM input events are entirely synchronous. This ensures that all related
data structures exist while the event is being processed. Having queued events would require very careful
code design and possibly refcounting for pretty much all objects :/
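
A tiny sketch of the synchronous model referred to above: with libosmocore signals, the handler runs inside `osmo_signal_dispatch()`, so the dispatched data is guaranteed to still exist. The subsystem/signal numbers are arbitrary illustration values.

[source,c]
----
#include <osmocom/core/signal.h>
#include <stdio.h>

#define SS_EXAMPLE	0x1000	/* arbitrary application-defined subsystem */
#define S_EXAMPLE_FOO	1	/* arbitrary signal number */

static int sig_cb(unsigned int subsys, unsigned int signal,
		  void *handler_data, void *signal_data)
{
	/* runs synchronously, in the caller's context */
	printf("signal %u/%u: %s\n", subsys, signal, (char *)signal_data);
	return 0;
}

int main(void)
{
	osmo_signal_register_handler(SS_EXAMPLE, sig_cb, NULL);
	/* returns only after all registered handlers have run */
	osmo_signal_dispatch(SS_EXAMPLE, S_EXAMPLE_FOO, "hello");
	return 0;
}
----
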
== packet buffers
* nextepc packet buffers in `lib/core/src/unix/pkbuf.c`
* _management_ structure is kept separate from _packet payload_
** similar to Linux `sk_buff`, where `skb_clone()` just duplicates the `struct sk_buff` but not the actual packet data
* single `payload` pointer to distinguish headers from payload (compare the `msgb` sketch below)
* pools of different buffer sizes (127/256/512/1024/2048/8192)
** index into pool is used as TEID :/
* allocations are thread-safe
* buffers have reference count (to avoid deep copy?)
** only one user currently: S1AP paging message copying
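
For comparison, a short sketch of libosmocore's `msgb`, the rough counterpart to `pkbuf`: one allocation with headroom, plus `l2h`/`l3h` pointers separating headers from payload. Sizes and layer choice here are arbitrary.

[source,c]
----
#include <osmocom/core/msgb.h>
#include <string.h>

static struct msgb *build_example_msg(void)
{
	/* 1024 byte buffer, the first 128 bytes reserved as headroom */
	struct msgb *msg = msgb_alloc_headroom(1024, 128, "example");
	if (!msg)
		return NULL;

	/* append 8 bytes of payload and remember where it starts */
	msg->l3h = msgb_put(msg, 8);
	memset(msg->l3h, 0, 8);

	/* prepend a (hypothetical) 4-byte protocol header into the headroom */
	msg->l2h = msgb_push(msg, 4);
	memset(msg->l2h, 0xff, 4);

	return msg;	/* release later via msgb_free(msg) */
}
----
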
== memory allocator
* `lib/core/src/unix/malloc.c`
* `core_{malloc,free,calloc,realloc}()`
* internally uses `pkbuf` as backing storage (not the other way around); see the talloc sketch below for contrast
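
For contrast, a minimal sketch of talloc, the hierarchical allocator used throughout Osmocom: children are freed together with their parent, and the whole allocation tree can be dumped for memory-leak debugging (as mentioned in the conclusions). The `subscriber` struct is purely illustrative.

[source,c]
----
#include <talloc.h>
#include <stdio.h>

struct subscriber {
	char *imsi;
};

int main(void)
{
	/* named root context for the whole application */
	void *ctx = talloc_named_const(NULL, 0, "app");
	struct subscriber *subscr = talloc_zero(ctx, struct subscriber);

	/* the string hangs off the subscriber, not off a global pool */
	subscr->imsi = talloc_strdup(subscr, "001010123456789");

	talloc_report_full(ctx, stderr);	/* memory-leak debugging aid */
	talloc_free(ctx);			/* also frees subscr and imsi */
	return 0;
}
----
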
== message queue
* `lib/core/include/core_msgq.h`
* `msgq_{init,final,create,delete,send,recv,timedrecv}()`
* contains mutexes, condition variable
* blocking and non-blocking receive semantics
=> may be a candidate for `osmo_it_msgq`, which is currently WIP. Signaling happens via an osmo-select compatible eventfd, not a condition variable
== string utilities
* `core_strdup`, `core_strndup`
* dynamically allocate their result using `core_malloc`
== timers
* `lib/core/include/core_timer.h`
* maintains separate lists of active and idle timers
** no rbtree, linear list iteration
* supports both one-shot and periodic timers (libosmocore only one-shot; see the sketch below)
* six (!) arguments to the timer expiration call-back function
* doesn't seem to have any direct users, only indirect use via `lib/core/src/event.c`
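
A small sketch of libosmocore's one-shot `osmo_timer`, with the usual way of emulating a periodic timer by re-arming it from the callback. Note the timer struct is embedded/static rather than dynamically allocated; interval and callback are illustrative.

[source,c]
----
#include <osmocom/core/timer.h>
#include <osmocom/core/select.h>
#include <stdio.h>

static struct osmo_timer_list periodic_timer;	/* embedded, not heap-allocated */

static void periodic_cb(void *data)
{
	printf("tick\n");
	/* osmo_timer is one-shot by design: re-arm for periodic behaviour */
	osmo_timer_schedule(&periodic_timer, 1, 0);	/* 1s, 0us */
}

int main(void)
{
	osmo_timer_setup(&periodic_timer, periodic_cb, NULL);
	osmo_timer_schedule(&periodic_timer, 1, 0);
	for (;;)
		osmo_select_main(0);	/* runs expired timers and fd callbacks */
	return 0;
}
----
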
== TLV
* `lib/core/include/core_tlv.h`
* hierarchical TLV parser using dynamically-allocated objects for each TLV
* can express repeated tags
* can express nested tags
* supports only a subset of the TLV types (TLV, TL16V, T16L16)
* used heavily throughout the [generated] GTP code (contrast libosmocore's flat `tlv_parse()` sketched below)
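
For comparison, libosmocore's flat (non-hierarchical) TLV parser from `<osmocom/gsm/tlv.h>`. The tag values and the definition table are made up for illustration.

[source,c]
----
#include <osmocom/gsm/tlv.h>
#include <stdio.h>

#define TAG_FOO	0x01	/* illustrative tag values */
#define TAG_BAR	0x02

static const struct tlv_definition example_tlvdef = {
	.def = {
		[TAG_FOO] = { .type = TLV_TYPE_TLV },	/* 8-bit tag, 8-bit length */
		[TAG_BAR] = { .type = TLV_TYPE_TV },	/* tag + one value octet */
	},
};

static void parse_example(const uint8_t *buf, int len)
{
	struct tlv_parsed tp;

	if (tlv_parse(&tp, &example_tlvdef, buf, len, 0, 0) < 0)
		return;	/* parse error */

	if (TLVP_PRESENT(&tp, TAG_FOO))
		printf("FOO present, %u bytes\n", (unsigned int)TLVP_LEN(&tp, TAG_FOO));
}
----
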
== freeDiameter
* C-language DIAMETER protocol library
* development seems to have mostly stopped during the past 5 years
** 0 commits during the past 12 months
** 11 commits during the past 24 months
* projects typically use a "fork" of freeDiameter copied into their repo
* internally uses plenty of threads
* applications register callback functions that are invoked whenever a related DIAMETER message is received
** callbacks are executed in the context of whichever freeDiameter thread received the message
== MME
* has the following threads:
** `sm_thread` / `sm_main()`
*** inbound event queue, dispatched into `mme_sm` FSM
** `net_thread` / `net_main()`
*** endless `sock_select_loop()`
** whatever threads freeDiameter creates
== SGW
* has the following threads:
** `sgw_thread` / `sgw_main()`
*** inbound event queue, dispatched into `sgw_sm` FSM
*** handles all of the SGW functionality (S1U to eNB, S11 to MME, S8 to PGW)
== PGW
* has the following threads:
** `pgw_thread` / `pgw_main()`
*** inbound event queue, dispatched into `pgw_sm` FSM
** whatever threads freeDiameter creates
== HSS
* FIXME
* web UI using node.js
** uses tons of dependencies (npm nightmare)
** doesn't rely on older/distribution-packaged versions
** should IMHO be an optional part, not mandatory
== GTPv2C code generation
* uses the 3GPP TS 29.274 Word document
* converts the .doc to .docx (Office 2007+)
* a python script parses the tables in the .docx
* the same python script then generates C source code for the encoder/decoder
* https://github.com/acetcom/nextepc/blob/master/lib/gtp/support/gtp_tlv.py[lib/gtp/support/gtp_tlv.py]
=> I love it :)
* Same approach also used for generating NAS encoder/decoder
** https://github.com/acetcom/nextepc/blob/master/lib/nas/support/nas_message.py[/lib/nas/support/nas_message.py]
== S1-AP / ASN.1 PER / asn1c
== Tests
* C-language testsuite sending/receiving messages on various interfaces
* resembles the kind of tests we usually do in TTCN-3, but without TTCN-3
== Conclusions
* nextepc uses threads, but is not heavily multithreaded
** this actually would make libosmocore integration more feasible than originally expected
* nextepc seems to be a much heavier heap user
** e.g. every event and every timer is dynamically allocated, vs. the osmocom approach of 'struct embedding', heavy stack use and synchronous event delivery
* there are some interesting ideas in nextepc, such as queued events, and the timers that generate them
* the most difficult problems are probably around refcounting of packet buffers
* GTPv2 code generation could be adopted for OsmoGGSN GTPv2 support
== Conclusions (cont'd)
* I would love to bring some of our powerful features to nextepc
** talloc with related ability for memory leak debugging
** osmo_fsm with all of its power
** VTY for state introspection and runtime config changes
== If I had a dream...
... I would
* osmo-ify nextepc MME (logging, FSMs, VTY)
* extend OsmoGGSN with GTPv2 support via the nextepc code generation approach
* implement a simple DIAMETER->GSUP translator to use OsmoHLR with nextepc
** use the kernel-side GTP-U user plane to focus on control plane only in GGSN/P-GW
* keep S-GW as-is for now, but think about kernel user plane there, too.
* convert the test suite to TTCN-3
== EOF
End of File