OpenTag Kernel

OpenTag has a real-time kernel that dispatches tasks when events demand them. OpenTag can use various kernels as long as they fit the interface defined in /otlib/system.h, but in most cases, and for optimal performance, OpenTag projects use one of the native kernels.

Currently Supported Kernels

At the time of writing, OpenTag supports Cortex-M devices and MSP430X2 devices (MSP430X2 is the modern core used in F5, F6, and CC430 devices). Cores like PIC and AVR would be grouped with MSP430, should OpenTag get ported to these. As an editorial note: MSP430X2 is superior to prior MSP430 cores, PIC, or AVR8. I'm not going to port to any of those cores myself because I think you should be using MSP430X2 instead.

Name      Dependencies  Platform Support  Notes
GULP      Bare Metal    MSP430X2          Global interrupt, cooperative multitasking
Big GULP  Bare Metal    MSP430X2          GULP + methods for multithreading
HICCULP   Bare Metal    Cortex-M          Big GULP + optimized for Cortex-M or similar
  • GULP: “Global [interrupt], Ultra Low Power”
  • Big GULP: GULP with thread context switching
  • HICCULP: “Hardware Integrated Context Controller, Ultra Low Power”

What the Kernel Does

The OpenTag kernel is a task scheduler, a task switcher, and a set of related system calls. OpenTag follows an exokernel design model, so the kernel is quite minimal, and the system calls that do exist are really more like library functions (also, tasks have full access to hardware and interrupts). The kernel is also responsible for putting the system into low power modes whenever possible.

Kernel Scheduler

The kernel requires a main loop that clocks all of the task timers uniformly and manages task priorities. There is a special Session module for DLL communication processes, but otherwise each task has a single associated timer.

The OpenTag kernel schedules asynchronously with these task timers: it sets each task's timer to the next time that task needs to run. Between the time a task stops and the time it next starts, the scheduler uses the interval to run other tasks or to put the system into low power modes (i.e. sleep). The scheduler does not run on a synchronous period, sometimes known as a “System Tick”; it runs only when one of the task timers expires or when some code manually calls the scheduler (via sys_preempt()).
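
To make the event-driven behavior concrete, here is a minimal sketch of a tickless scheduler pass that runs expired tasks and then sleeps until the soonest remaining timer. The task record, task array, and sleep hook are assumptions for illustration; this is not the actual OpenTag scheduler.

<code c>
#include <stdint.h>

/* Hypothetical task record: "nextevent" counts ticks until the task must run. */
typedef struct {
    uint8_t  event;       /* 0 = disabled, nonzero = active               */
    uint16_t nextevent;   /* ticks remaining until the next activation    */
    void   (*run)(void);  /* task entry point                             */
} task_t;

#define NUM_TASKS 4
extern task_t tasks[NUM_TASKS];
extern void   sleep_until_timer(uint16_t ticks);   /* assumed platform hook */

/* One pass of an asynchronous (tickless) scheduler: run every task whose
 * timer has expired, then sleep until the soonest remaining timer instead
 * of waking on a fixed system tick.                                       */
void scheduler_pass(void) {
    uint16_t soonest = 0xFFFF;
    int      i;

    for (i = 0; i < NUM_TASKS; i++) {
        if (tasks[i].event == 0)
            continue;                       /* disabled task                */
        if (tasks[i].nextevent == 0)
            tasks[i].run();                 /* timer expired: run it now    */
        if (tasks[i].nextevent < soonest)
            soonest = tasks[i].nextevent;   /* track the next wakeup        */
    }
    if ((soonest != 0) && (soonest != 0xFFFF))
        sleep_until_timer(soonest);         /* low power until next event   */
}
</code>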

Tasks

The Kernel runs tasks. Tasks may be cooperative, in which case they exist in a single context and must run to completion, or they can be threads. Threads support classic pre-emptive multitasking and thread programming models, but only Big GULP and HICCULP kernels can support threads.

All tasks in OpenTag are given several parameters:

  • Event number: a state for the task
  • Cursor: a secondary state for the task, often used for stream/queue management
  • Reserve: the maximum time (or estimated time) the task will run before stopping/pausing.
  • Latency: the maximum allowable scheduling latency. 0 blocks lower-priority tasks.
  • Next: the next time the event should run.

The event number and cursor are important to the task itself, but not generally to the scheduler or other parts of the kernel. The only rule is that the event number should be 0 if the task is disabled and non-zero if it is active (or paused). The Reserve and Latency attributes, however, are very important to the scheduler: it uses their comparative values to make sure that high-priority tasks are not blocked by lower-priority tasks. With threads, which are naturally pre-emptive, these attributes simply enable more efficient operation.
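
As a rough illustration, these per-task parameters could be grouped into a record like the one below. The struct name, field names, and widths are assumptions for this sketch, not the actual otlib task definition.

<code c>
#include <stdint.h>

/* Illustrative per-task record grouping the parameters described above. */
typedef struct {
    uint8_t  event;     /* task state: 0 = disabled, nonzero = active/paused */
    uint8_t  cursor;    /* secondary state, e.g. stream/queue position       */
    uint8_t  reserve;   /* max (or estimated) runtime, in ticks              */
    uint8_t  latency;   /* max allowable scheduling latency; 0 = blocking    */
    uint16_t nextevent; /* ticks until the task should next run              */
} task_marker_t;
</code>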

Cooperative Tasking

The kernel can support cooperative multitasking. Cooperative tasks are usually simple functions that must run periodically, such as the DASH7 sleep scan configurator. Additionally, there is a class of cooperative tasks that act as managers to device drivers, called Exotasks.
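
A cooperative task is essentially a run-to-completion function that advances its own state and tells the kernel when it should next run. The sketch below (reusing the illustrative task_marker_t record from above) is an assumption about what such a function might look like; it is not the actual DASH7 sleep scan code.

<code c>
/* Hypothetical cooperative task: runs to completion, advances its own
 * state machine, and requests its next activation time.               */
void sleep_scan_task(task_marker_t* task) {
    switch (task->event) {
    case 1:                         /* configure the next channel scan   */
        /* ... set up the radio scan parameters here ...                 */
        task->event     = 2;        /* advance the task state machine    */
        task->nextevent = 8;        /* run again shortly to finish up    */
        break;

    case 2:                         /* scan window handled: clean up     */
        task->event     = 1;        /* back to the configuration state   */
        task->nextevent = 1024;     /* sleep-scan period of ~1 second    */
        break;

    default:                        /* event == 0 means task is disabled */
        break;
    }
}
</code>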

Kernel Tasks in OpenTag

An OpenTag system has a few built-in tasks that are often called “kernel tasks,” but technically speaking they are just high-priority cooperative tasks.

Kernel Task       Priority  Task                  Activation
Hold Scan Timer   3         Prepare Channel Scan  Called when Hold Timer expires
Sleep Scan Timer  4         Prepare Channel Scan  Called when Sleep Timer expires
Beacon Timer      5         Prepare Beacon        Called when Beacon Timer expires
External Task     –         Run User App Code     Called when Radio is OFF and kernel is idle

Exotasking

Exotasks include a connection to an interrupt, which provides a direct source of pre-emption and therefore a way to run code asynchronously, outside the jurisdiction of the kernel scheduler. Writing an Exotask is difficult: it requires implementing a state machine for the kernel task element, as well as a set of callbacks that the driver (ISR code) must use to integrate itself with the kernel task that manages it. The DLL task is a sophisticated example of an Exotask. The MPipe task is a simpler example.
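
The driver-to-exotask handoff might look roughly like the sketch below: the ISR signals the managing task through a callback and then pre-empts the kernel. Everything except sys_preempt() (named above) is an assumed name for illustration, not the actual DLL or radio driver code.

<code c>
#include <stdint.h>

/* Illustrative exotask/driver split: the ISR runs asynchronously and uses
 * a callback to hand control back to the kernel task that manages it.    */
typedef void (*exotask_callback)(uint8_t code);

static exotask_callback rx_done_cb;        /* installed by the exotask     */
extern void sys_preempt(void);             /* ask the scheduler to run     */

/* Driver side (ISR context): signal the exotask, then pre-empt the kernel. */
void radio_rx_done_isr(void) {
    if (rx_done_cb != 0) {
        rx_done_cb(0);                      /* 0 = frame received OK        */
    }
    sys_preempt();                          /* scheduler runs the task next */
}

/* Exotask side: the callback advances the managing task's state machine. */
static void dll_rx_done(uint8_t code) {
    (void)code;
    /* e.g. set the task's event number so it processes the frame */
}

/* Exotask init: install the callback into the driver. */
void dll_exotask_init(void) {
    rx_done_cb = dll_rx_done;
}
</code>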

Exotask Priority Task Activation
DLL Task 1 DASH7 DLL & Radio Driver Manager Task timer (Session Timer) or Radio IRQ
MPipe Task 2 MPipe DLL & NDEF Driver Manager Task timer or MPipe IRQ

Pre-emptive Threading

Big GULP and HICCULP kernels can support fully pre-emptive multithreading. All kernels can support Exotasks, which are pre-emptive but not threaded. Threads provide a programming model that is much easier than the Exotask model, at the cost of a higher RAM requirement. In essence, a thread is programmed just like a normal main loop, except you get one for each process. The scheduler produces the illusion that they are running in parallel.
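
In other words, a thread body might look like the sketch below. The blocking wait call is an assumption for illustration; in Big GULP and HICCULP the kernel context-switches to other tasks and threads while this one is blocked.

<code c>
#include <stdint.h>

extern void sys_wait(uint16_t ticks);   /* hypothetical blocking wait call */

/* Illustrative thread: written as an ordinary main loop.  While it is
 * blocked in sys_wait(), the kernel runs other tasks and threads.       */
void logger_thread(void) {
    while (1) {
        /* ... sample a sensor and record the result ... */
        sys_wait(1024);                 /* block ~1 second; others run     */
    }
}
</code>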

Other Features & Attributes

Session Management

Session Module Main Article
The DASH7 Session Layer defines a sort of mini kernel that utilizes an ordered stack/queue hybrid. It is like a queue because it is ordered: sessions (scheduled DASH7 dialogs) are activated in the order they are scheduled. However, it is like a stack because ad-hoc sessions – sessions put onto the stack with orders to activate immediately – take priority over sessions that were scheduled to activate during the time the new, ad-hoc session is now using. Sessions that time out without being serviced are always flushed. Additionally, the actual implementation of the Session Layer is more stack-like than queue-like (it is a stack with insertion sort). All types of tasks will often spawn sessions onto the stack, as this is how they transform a computational decision, or process, into a DASH7 dialog.
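
A stack with insertion sort can be sketched as follows; the session fields and stack depth are assumptions for illustration, not the actual OpenTag Session module.

<code c>
#include <stdint.h>

/* Illustrative session record and stack, sorted by time-to-activation. */
typedef struct {
    uint16_t counter;     /* ticks until the session activates (0 = now) */
    uint8_t  channel;     /* DASH7 channel for the dialog                */
} session_t;

#define SESSION_DEPTH 4
static session_t session_stack[SESSION_DEPTH];
static int       session_count;

/* Insert a session so the stack stays sorted by activation time.
 * An ad-hoc session (counter == 0) lands at the top (index 0), so it is
 * serviced ahead of sessions scheduled for later.                       */
int session_new(uint16_t counter, uint8_t channel) {
    int i;

    if (session_count >= SESSION_DEPTH)
        return -1;                              /* stack is full           */

    i = session_count;
    while ((i > 0) && (session_stack[i-1].counter > counter)) {
        session_stack[i] = session_stack[i-1];  /* shift later sessions up */
        i--;
    }
    session_stack[i].counter = counter;
    session_stack[i].channel = channel;
    session_count++;
    return i;                                   /* position in the stack   */
}
</code>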

Task Timing

Tasks in OpenTag need a common timebase (the tick), and as little jitter/slop as possible. Less slop means the re-synchronization can be done less often in a synchronous system. In networks without synchronization, this doesn't matter quite as much, and timing can afford to be a little bit sloppy.

Timer Sharing

There are many ways to implement task timing, but the simplest and, generally, best way is to use a single timer resource to clock multiple time values in memory (one for each task). In fact, I can't think of any kernel that would do it another way. The kernel should also do as much as it can to clock the time spent in the kernel itself.

The Tick

In OpenTag and DASH7, the operative unit of time is the “tick”: 1 tick = 1/1024 second (approximately 0.977 ms). This timebase can be achieved easily via a 32768 Hz watch oscillator. Alternatively, dividing 48 MHz by 46875 also generates 1024 Hz; in USB MCUs like the STM32F1, 48 MHz is a convenient (or required) value for the system clock. Common serial clock crystals (e.g. 3.6864 MHz) can also cleanly generate 1024 Hz. Of course, the 32768 Hz watch oscillator is the nicest way to get the 1-tick timer.
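
The divider arithmetic from the paragraph above is summarized below; the macro names are illustrative, not taken from OpenTag.

<code c>
/* Deriving the 1024 Hz tick from common clock sources. */
#define TICK_HZ          1024u

#define LSE_HZ           32768u        /* watch crystal                   */
#define LSE_DIV          (LSE_HZ / TICK_HZ)           /* = 32             */

#define SYSCLK_48M_HZ    48000000u     /* e.g. USB MCUs like the STM32F1  */
#define SYSCLK_48M_DIV   (SYSCLK_48M_HZ / TICK_HZ)    /* = 46875          */

#define SERIAL_XTAL_HZ   3686400u      /* common serial-clock crystal     */
#define SERIAL_XTAL_DIV  (SERIAL_XTAL_HZ / TICK_HZ)   /* = 3600           */
</code>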

Sub Sampling

One nice option for making OpenTag timing even more precise is to oversample the kernel timer, that is, to clock it at a rate higher than 1024 Hz, such as 32768 Hz. This way, the amount of time spent inside the kernel itself can be clocked with greater precision. All present OpenTag kernel implementations utilize free-running hardware timers, so worst-case real-time kernel slop depends only on the crystal tolerance.
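
With a 32768 Hz oversampled timer, converting between raw counts and ticks is just a 5-bit shift (32768 / 1024 = 32). The macro names below are illustrative:

<code c>
/* Converting an oversampled (32768 Hz) timer count to kernel ticks. */
#define OVERSAMPLE_SHIFT    5                          /* 2^5 = 32      */
#define COUNTS_TO_TICKS(c)  ((c) >> OVERSAMPLE_SHIFT)
#define TICKS_TO_COUNTS(t)  ((t) << OVERSAMPLE_SHIFT)
</code>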

Panic/Error Management

The kernel should contain some facility for managing system errors and generating “kernel panics.” In certain implementations, a kernel panic might be extremely simple, just shutting off and blinking a death-LED; in other implementations it might be more sophisticated. In any case, it needs to exist.

Interfacing with the Kernel

The OpenTag Kernel touches everything. It interfaces with the Radio subsystem, the Platform subsystem, and the OpenTag library. Therefore, it is hardware-dependent, but by design it is hardware-dependent in a hardware-independent way. If this is confusing, read on.

Resources the Kernel Needs

The kernel has some basic hardware and software requirements. Both MSP430 and Cortex-M architectures meet these hardware requirements, and OpenTag implements the software requirements.

  1. A timer resource that can generate a 1024 Hz frequency, or some higher multiple of 1024 Hz.
  2. An interrupt controller that can call the kernel scheduler when this timer expires.
  3. A method for running the kernel scheduler in an un-interruptible context.
  4. A method for delaying interrupts during the process of entry-into-sleep.
  5. A radio driver that is able to, at the very least, send a pre-emption to the kernel when frames finish receiving and when frames finish transmitting.
  6. A DASH7-like session layer.
  7. A DASH7-like filesystem (Veelite or similar).

Resource Interfaces


Platform

Universal Platform Header: otlib/OT_platform.h
The implementation of the platform functions must match the prototypes in otlib/OT_platform.h. In particular, platform_ot_run(), platform_ot_preempt(), platform_ot_pause(), platform_flush_gptim(), platform_prand_u8(), platform_prand_u16(), and platform_memcpy() are used in virtually all possible implementations of OpenTag. The rest of the functions in the Platform should still be implemented, as the kernel can crash if they are needed and not available.
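
For instance, a platform's main loop usually just hands control to the kernel through this interface. The sketch below is an assumption about how that wiring might look (the init call is hypothetical); it is not the actual platform code.

<code c>
#include "OT_platform.h"    /* prototypes for the platform_…() functions */

/* Sketch of a platform main loop driving the kernel.  platform_ot_run()
 * is from the OT_platform interface; the init call is a hypothetical
 * placeholder for board, clock, and kernel initialization.              */
int main(void) {
    platform_full_init();            /* hypothetical: bring up HW + kernel  */

    while (1) {
        platform_ot_run();           /* run the scheduler; sleeps when idle */
    }
}
</code>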

Radio

Universal Platform Header: otlib/radio.h
The DLL Exotask has a software interface with the Radio module, but the Radio module is asynchronous and interrupt-driven, such that callbacks to the DLL Exotask often include kernel pre-emptions. The Radio interface comprises all functions described as radio_…() or rm2_…(). The radio_…() functions and data elements are generic and atomic, and system calls may reference them. The rm2_…() functions and data elements are to be used only by the DLL Exotask; they are often state-based.

System

Universal System Header: otlib/system.h
The “System” Interface defines the kernel interface available to upper layers. Once upon a time, during the early development of OpenTag, JP decided that the core features of OpenTag would best be implemented as an expert system. In the end, it now resembles more of a traditional kernel, and decisions about how to access session dialogs are much less complicated than initially thought. Nonetheless, some nomenclature remains. OpenTag does not implement a large number of system calls, but all of them – as well as all other kernel intrinsics – are functions with the prefix sys_.

Some device drivers may in certain places call some sys_…() functions. It is best practice for drivers to use a callback to an Exotask, which in turn issues a system call, but JP endorses any direct implementation that is markedly simpler/faster/leaner than a corresponding indirect implementation. Be thoughtful when producing Apps or Library Extensions that use system calls – different hardware platforms may behave differently.

Upper Layer Communications

DASH7 Data Link Layer: otlib/m2_dll.h
DASH7 Network Layer: otlib/m2_network.h
DASH7 Transport Layer: otlib/m2_transport.h
Kernel tasks that need to process (RX) or generate (TX) DASH7 frames need to call the network layer. The Data Link Layer (DLL) handles these interfaces, and the DLL is invoked as a task by the kernel scheduler. User apps that need to post DASH7 messages directly should use OTAPI, which can create sessions that the DLL will take up.

Filesystem

Veelite FS Header: otlib/veelite.h
Many kernel tasks draw data streams from files stored in the Veelite filesystem. In periodic monitoring applications, it is often sufficient for user apps simply to post data to one or more files, as OpenTag DLL automates periodic beaconing and listening.
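
As a sketch of that pattern, a user app might write a sensor sample into a file and let the DLL's automated beaconing report it. The function names, argument lists, and file ID below are assumptions; check otlib/veelite.h for the actual Veelite interface.

<code c>
#include "veelite.h"    /* assumed to provide the vl file types and calls */

/* Post a sensor sample to a Veelite file (hypothetical file ID 0x18) so
 * that automated beaconing/listening can serve it.  The open/store/close
 * calls and their signatures are assumptions for illustration.           */
void post_temperature(unsigned char* sample, unsigned int length) {
    vlFILE* fp;

    fp = ISF_open_su(0x18);                   /* assumed: open an ISF file */
    if (fp != NULL) {
        vl_store(fp, length, sample);         /* assumed: overwrite data   */
        vl_close(fp);
    }
}
</code>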

The System C-API

C API main article
The system module / kernel often implements some functions declared in otlib/OTAPI.h or otlib/OTAPI_c.h. Mostly this is out of convenience, so that these functions can use kernel task subroutines to save code space. In any case, the OTAPI system calls are not fully part of the kernel; they are more like user-callable task aliases, as they affect the session layer directly and the kernel indirectly (via the session layer).
