About My Master Thesis

I've wanted to write about my master thesis for some time, mostly because it was written in Turkish, which makes it inaccessible to the few people who could be interested in it at all. I couldn't decide where to start, so I thought a Q&A format might be easier to write. Here we go.

Oh, before I start here is my thesis in Turkish. It has pictures. And here is a paper (in English) I’ve written on the communication stack if you don’t have much time.

What was it about?

I implemented a multi-purpose wireless communication module for use in a medical tracking device, using the 802.15.6 protocol.

Why did you do this?

I wasn't alone. This device was developed as part of a bigger project (funded by Tubitak) in the KTU Electrical Eng. Dep.: a remote patient tracking service, built from scratch. We implemented many aspects of such a system, from sensors to the server infrastructure and signal processing. It covered a lot of ground. Maybe too much.

My responsibility was to create a wireless module as a platform to implement patient monitoring sensors. And I got a thesis out of it.

What is 802.15.6?

This is a relatively new but obscure (and, standardization-wise, probably dead) wireless communication standard by IEEE, targeted at wearable sensors and medical implants. It's called the "Wireless Body Area Network" standard. It is most comparable to Bluetooth and ZigBee (802.15.4); you could say it was developed to solve problems with these two technologies. Its main aim is to be far more reliable than existing technologies, making it possible to build cheap and trustworthy wireless medical equipment and patient tracking devices.

Why 802.15.6, what’s wrong with Bluetooth or ZigBee?

These protocols are still viewed with suspicion when guaranteed arrival of data is a must, especially in streaming cases. This led us to look into alternatives. 802.15.6 was the perfect candidate to base a wireless communication protocol implementation on for body-worn sensor devices; it's right there in the name. But the truth is, we never had the chance (resource- and time-wise) to experiment with or demonstrate the strengths of 802.15.6.

At this point I can only say: flexibility. But how you leverage that flexibility is what matters.

What does the standard actually define?

It defines the wireless MAC and PHY layers. I only worked on implementing the MAC, though. Implementing a PHY for a battery-powered application is only feasible at the IC level, so we decided to use an 802.15.4 transceiver for the PHY layer. That's why I say "based on/using 802.15.6".

Which 802.15.4 transceiver did you use?

We used the Atmel AT86RF233 transceiver, for two reasons. First, it can be used as a simple transceiver only, with no MAC features. Second, its operating frequency range includes the 2360-2400MHz band, which is allocated for medical use in the USA, and maybe in other countries in the future. We also did some antenna work for this band, but I wasn't involved in it.

What else was in the hardware?

Not much, if we don't count the obvious sensor circuitry. As the main processor we used an ST STM32F407 microcontroller. RF circuitry was nonexistent, because we used a module for the AT86RF233 which included even the antenna on board. There was a voltage regulator and a charging IC for the lithium battery. And some LEDs, of course. There weren't any sensor ICs on the communication board, which connected to the actual sensor circuitry via a pin header. That was because we implemented different measurement equipment which shared the same communication module.

Why STM32F407?

I already had some experience with this MCU. Tutorials and hello world stuff. Running at 168MHz with 192KB RAM and 1MB flash, featuring peripherals such as Ethernet, a DCMI camera interface, etc., it is actually overkill for a project of this type. But I wanted to make sure I had adequate firepower at hand; I didn't know what I was getting into. My plan was to make it work first, then optimize. Also, this being a multi-device protocol, I decided to use an RTOS. A first for me. I knew this would bring extra overhead.

Why ChibiOS?

ChibiOS is a real-time operating system. It also comes with a nicely designed HAL API. It supports various architectures, mostly ARM-based ones. There were two reasons that drew me to ChibiOS:

1. Tickless operation mode
2. Very well designed and comprehensive HAL for STM32 MCUs

"Tickless" operation mode means that our code runs swiftly, without unnecessary interruptions from the OS. In regular (ticky?) operation mode, the OS needs to periodically stop everything to do its "thing". In tickless operation mode, the OS only takes matters into its hands when it needs to. That means it's more efficient, quicker and even more predictable. It is somewhat similar to "cooperative" multitasking.

Best part: the ChibiOS API and HAL are so superbly designed, you don't feel like you are using an operating system at all. It's simple and intuitive, yet powerful. It's one of those libraries that you can use for quick prototyping tasks with ease. For example, see this SPI receive routine that uses DMA and automatically puts your thread to sleep:

spiReceive(&SPID1, size, data);

Maybe I'm making too much out of it, but the whole library is like this. The documentation could be a little better though, and I wish there were more tutorials. But since it's open source and so well written, you can always open up the code and figure it out yourself!

Tell me more about 802.15.6?

802.15.6 is based on a relatively simple star network architecture. It supports 1-hop relays, but that's it. All devices ("nodes") in a network are connected to the "hub". The hub is like a Wi-Fi router, a ZigBee coordinator, or your Bluetooth phone.

The 802.15.6 standard is divided into two main parts: PHY and MAC. The PHY, the physical layer standard, defines the frequencies and modulation features that are used. The MAC, medium access control, in short defines "who speaks when": the rules and methods that allow many devices to share the same frequency and establish a network with no (or as few as possible) collisions.

I will keep the PHY part relatively short. There are 3 different PHYs defined.

1. Narrow band
2. Ultra Wide Band
3. Human Body Communications

Narrow band uses traditional GMSK and QPSK based modulations in various bands; most importantly and widely, in the 2.4GHz ISM and 2.36-2.4GHz MBAN (M stands for medical) frequencies. I won't get into UWB and HBC. But to be honest, I suspect those might be the PHYs that make 802.15.6 "a product".

We wanted to implement the narrow band PHY in our system, but unfortunately there are no commercially available 802.15.6 PHYs, even today, five years after its standardization. It looked promising when this project was drafted more than three years ago, but today, not so much. So we decided to go with the closest thing: an 802.15.4 based transceiver.

MAC on the other hand deserves its own section.

What is the MAC layer of 802.15.6 like?

Well, just like its PHY, 802.15.6 defines various methods for its MAC. These could be summarized as 3 different access modes.

1. superframes with beacons
2. superframes with no beacons
3. no superframes

In the first mode, time is divided into superframes. At the beginning of each superframe, a "beacon" frame is sent by the hub. This frame carries important information about the established network and the hub, such as its ID, access mode, superframe structure, and timing information. It plays an important role both for nodes that want to connect to the network and for already connected nodes. The rest of the superframe is divided into different phases, each of which allows a different style of communication. I will explain these phases later.

In the second mode, there are still superframes but no beacon frames. Honestly, the purpose of this mode isn't clear to me. Maybe I just don't remember it now.

In the third mode, there are no superframes at all. Communication is mainly carried out by poll frames sent by the hub. For example, to establish a new connection, a new node must catch a poll frame from the hub.

What’s the difference between these modes?

In short, mode 1 is more suitable for TDMA (Time Division Multiple Access) style communication, and mode 3 is more suitable for polling and contended allocation style communication.

But mode 1 is very interesting because it isn't just a TDMA-based method; it also features contended allocation. It's quite flexible.

Tell me more about the flexibility of 802.15.6 access mode 1?

For this I need to talk about the "phases" of the superframe. A superframe can be divided into these phases (in order):

1. EAP1 : Exclusive Access Phase
2. RAP1 : Random Access Phase
3. MAP1 : Managed Access Phase
4. EAP2 : Exclusive Access Phase
5. RAP2 : Random Access Phase
6. MAP2 : Managed Access Phase
7. CAP : Contention Access Phase

All of these phases are optional, and they can be of various sizes. By the way, at an even smaller scale than the phases, superframes are divided into allocation slots. An allocation slot is a relatively small time duration (comparable to the duration of a small data frame) that serves as the unit of allocations. I can't give a number off the top of my head, but think of something like ~500µs as the smallest value. This duration is also adjustable and chosen by the hub. The lengths of the superframe and its phases are given as numbers of allocation slots. For example, we could design a superframe like this:

Allocation slot size: 500us
Length of EAP1: 5 slots
Length of RAP1: 10 slots
Length of MAP1: 30 slots
Length of RAP2: 5 slots
Length of MAP2: 30 slots
Length of superframe (total): 80 slots = 40ms

As you can see, we decided not to have an EAP or a CAP in our superframe.

Tell me more about phases?

The "Exclusive Access Phase" is reserved for emergency traffic: it is supposed to be used by devices that need to transfer vital medical information as quickly as possible. It uses contended allocation.

The "Random Access Phase" can be used by connected or unconnected nodes. In this phase, contended allocation methods are employed: basically, nodes race each other to talk to the hub, and they occasionally (randomly) take voluntary (but calculated) breaks to allow other nodes to talk. If they detect a collision, they wait longer, to reduce and control the number of collisions, because (most of the time) collisions mean wasted time, and time means precious bandwidth. Random access is somewhat easier to implement. Or maybe harder; I can't decide at the moment. It has a self-balancing nature, but on the other hand it has the potential to be less power efficient compared to the other modes.

The "Managed Access Phase" is strictly managed by (you guessed it) the hub. It is divided up and allocated to the nodes, as allocation slots, upon their requests. When an allocation is requested (and accepted), its direction is also determined: uplink, downlink, or bi-link. In the case of an "uplink" allocation, a node has the right to transfer its frames to the hub as it sees fit; no other node may speak at that moment. It's free of collisions, you might say. A downlink allocation is similar, but the flow of information is reversed: from the hub to a specific node. In a bi-link allocation, frames can be sent either way; due to this ambiguity, in this type of allocation the hub has to issue polling frames for a node to be able to send its frames. As you can see, this type of access is what gives 802.15.6 its TDMA features.

The "Contention Access Phase" is very similar to the RAP. Unfortunately, I never could figure out the difference. There is one thing though: the beginning of the CAP is marked by the transfer of a B2 frame. This is like a second beacon frame, and it carries information about hub cooperation. This led me to think that the CAP is also expected to be used by other hubs (or nodes of other BANs).

What access mode did you use?

We decided to use access mode 1, mainly because of its TDMA-like features. Our aim was to implement a reliable and efficient communication channel to be used by streaming nodes which were constantly transmitting data. To make it more efficient, we had to get rid of conflicts. The system was designed to be used in a *free* channel anyway, so we only had to make sure our nodes didn't conflict with each other.

What kind of superframe structure did you use then?

To keep things simple at the beginning, we designed a superframe structure that consists of a RAP and a MAP. Just RAP1 and MAP1; no EAP or other fancy stuff. Once you implement these two, implementing the others is a matter of superframe structure optimization. I believe this is an area that should be thoroughly researched, and maybe there are ways to handle it in an adaptive manner. Unfortunately, due to time constraints, we never got to experiment with the superframe structure.

What else does 802.15.6 have?

The MAC layer isn't just about timing. 15.6 also defines the security of the communication. This was another thing we couldn't work on. I intentionally stayed away from it because I didn't have any experience with secure communication and encryption in general. But I knew that it's quite easy to implement a flawed security layer that doesn't actually provide any security, so I decided not to touch it until I had enough time to do it properly.

Another unique feature of 15.6 is "hub cooperation". This allows a hub to share its frequency with another hub. There are various ways of doing that, but the clearest and most obvious method is superframe sharing: one hub allows another hub to use its superframes for its own purposes, and the superframes are utilized alternately by each hub.

Another interesting feature is the acknowledgments. Each frame has a bit field in its header that marks whether the frame requires an acknowledgment. Some frames never do, such as the beacon frame; management frames, on the other hand, always require acknowledgment. There are different types of acknowledgments:

1. I-Ack : Immediate Acknowledgment
2. B-Ack : Block Acknowledgment
3. G-Ack : Group Acknowledgment

I-Ack should be obvious: when a frame requires an I-Ack, its acknowledgment frame is sent immediately after reception. A B-Ack is sent after a block of frames sent by the same node. Each frame is marked with a B-Ack request, and the receiver sends the acknowledgment when the last frame is received. If there are any missing frames, these are noted in the B-Ack frame itself. This allows more efficient usage of the channel bandwidth, and even of the battery. When you have a relatively clean channel and most if not all frames are arriving peacefully, it might make more sense to use the group acknowledgment. This can be used for frames from nodes to the hub. When frames are marked with a G-Ack request, the hub doesn't send individual acknowledgment frames; it doesn't even send them after a block of frames. Instead, it relays the acknowledgments to the nodes in the B2 frame, all at once. That's why it's called a "group" acknowledgment.

Before you ask: we only used I-Ack in our work. B-Ack definitely should have been utilized, though.

Another feature is priorities. Seven (I think) priority levels are defined, in order, by the standard. I won't list them all, but they include medical data, video data, and voice data. Each data frame has a priority marked in its MAC header. The priority level is used during contended allocation: when a node has to "back off" after a frame collision, how long it should back off is affected by the transmitted frame's priority level. This ensures that higher priority frames have a better chance of getting through. MAP allocations also have priorities. The behavior for this case isn't very well defined by the standard, but I would assume that a well-behaved hub should cancel allocations (or resize them, yes that's possible) in case a higher priority allocation is requested and there aren't any free allocation slots left.

How did you implement the firmware?

Our code had a quite modular structure. I wanted to depict it as layers, but it just doesn't sound right that way. In summary, we had PHY code to talk to the transceiver; MAC code that implemented the 802.15.6 MAC; something we called MTL, which simplified packet creation and parsing and, in general, interfacing sensors to the host device (it's a whole thing of its own that I will describe later); and separate modules for each sensor we implemented, such as a module for reading data from the ADS1294, which is a glorified ADC for ECG applications, or a motion sensor, battery level sensor, etc. Then we had a relatively simple "main" for each device we implemented: the hub and the sensor nodes.

Tell me about PHY first?

This was relatively simple, because it was just an interface to the actual PHY chip (AT86RF233). The chip is controlled via SPI. The PHY layer had functions such as phy_transmit and phy_receive that would hand a given frame to the transceiver and receive frames from it. The AT86RF233 is a relatively simple TRX to use; the trickiest part of this code was controlling the timing. Interrupts and a dedicated timer were utilized, obviously.

To be continued

SchLibView KiCad Symbol Library Viewer – My First D Project

SchLibView is my latest GUI project. It's a simple viewer application for KiCad schematic symbol library files. It can open and list symbols from multiple files at once. It's relatively easy to use; you can just drag & drop files. It doesn't have much in terms of features at the moment, and I don't have any plans for it. This was mostly a practice project for the new language I'm learning.

It's programmed in the D language. For the GUI I used the GtkD library, which is a D binding for the Gtk+ (C) library. I really wanted to use Qt, since that is my go-to GUI solution. Unfortunately, as of this moment there aren't viable Qt bindings for the D language. There are some solutions, but they are either obsolete (Qt4) or have an unpleasant/unstable API. GtkD, on the other hand, is very well put together from an API perspective. Although the documentation is a bit lacking, as it is with most bindings of the Gtk+ library, it was still easy to discover and use. Also, in my opinion, it fits well into D's way of programming; I mean, it doesn't feel out of place. When I tried to use the gtkmm library (C++ bindings for Gtk+) some years ago, I got frustrated by the heavy syntax required to use it. I want my code to look nice and be readable. That's the main reason I got into the D programming language.

My experience with the D language itself was also very pleasant. It feels very powerful, yet it is very easy to code in. The standard library is awesome. You think "I wonder if this is gonna work?" and it works. I was genuinely surprised when I saw how easy it is to use the meta-programming (*) features of the language, compared to the mess that is C++ template programming. Although C++ has significantly improved its game in recent years, I don't think it will ever catch D's simplicity. I'm very much looking forward to using this language in future projects. It's just what it was planned to be: a better C++.

(*) To be honest, I haven't used any meta-programming features in this project, but I have in another project that I've been working on.

STM32 PWM Output with Dead Time Using STM32Cube HAL Platform

In this post I describe the steps to create an STM32 project for generating complementary PWM outputs with dead time in between. I use the STM32CubeMx tool to configure and create a project that is almost ready to compile. I use the STM32F4Discovery development kit, which has an STM32F407 MCU on it. This MCU contains many timers with different capabilities; you can check out my blog post about STM32 timers for a table of features. We will use TIM1, which has the complementary output feature that we are looking for.

What is "dead time"?

First of all, this is what we are trying to achieve:

In this diagram we have two complementary signals: when one is ON, the other is OFF. They should never be open at the same time. But what is dead time, and why is it required? Well, electronics aren't perfect. However fast they are, MOSFETs take some time to turn on and off. If two complementary MOSFETs are triggered at the same time, opening one and closing the other, there is a strong chance that during the transition both will be open at the same time. This can be catastrophic and cause immediate failure of the components. To ensure safe operation, we put some duration between the closing of one component and the opening of the other. In a PWM context, this duration is usually called dead time.

Creating a STM32F4 project using STM32CubeMx

Start STM32CubeMx

STM32CubeMx is a project configuration and generation tool released by ST to ease the life of developers. It creates projects using ST's HAL libraries. It can generate projects for various IDEs, and even pure Makefiles.

Selecting MCU

Click “New Project” in the launch page.

Find and select the STM32F407VG MCU from the list. Or you can switch to the "Board Selector" tab and select the "STM32F4Discovery" kit. In the latter case, Cube does some extra configuration, such as naming pins and configuring them for the respective components on the board. But this is not necessary for our use case at the moment.

In case you selected Discovery kit you should see this appear:

Configuring Pins

To enable the external 8 MHz crystal: in the component tree, under the RCC module, set the "High Speed Clock" option to "Crystal/Ceramic Osc.".

From the TIM1 component, change the "Channel1" option to "PWM Generation CH1 CH1N". When you do that, pins PE8 and PE9 should change state on the pin display. For you, some other pins may be selected, since there are alternative pins for the TIM1 outputs (as is the case for many other functions of this chip). Just note which pins are selected. Or you can force it to select these pins by manual selection. A tip: when you Alt+LeftClick on a pin, its alternatives are highlighted.

Configuring Clocks

Switch to the "Clock Configuration" tab. For this example, we will set our timer clock to 8MHz. STM32CubeMx has a smart solver: when you type a number into any of the editable boxes, it tries to find appropriate configuration values to achieve that clock value. If it fails, it will tell you that the value isn't valid. First, from the "PLL Source Mux" section, select the "HSE" option. This will connect our external crystal to the PLL input. Don't mind the errors yet.

The TIM1 input clock is supplied from the APB2 bus. But as you may have noticed, the timers' clock input is twice the bus clock, and they have a dedicated box to set the frequency. Click on the "APB2 Timer Clocks" box and enter "8". Click any other box so that Cube updates the configuration. It should analyze and find a suitable configuration almost immediately. You can try to change the system clock etc. yourself. For reference, you should have something like this at the end:

Configuring Timer module

Switch to the "Configuration" tab. From the "Control" box, click the "TIM1" button. The window below should appear.

If you want to understand what all these options mean, it's a good idea to read the reference manual. For the moment I will stick to the bare minimum.

The first option you should set is the "Prescaler". This slows down the timer input clock so that longer durations are possible. It also makes it easier to achieve a particular duration value, since you can enter any number between 0-65535. This value "+ 1" will be used to divide the input clock. Our timer input clock is 8MHz; to divide it by 8 and obtain a 1MHz timer clock, we should set the prescaler to 7.

To set PWM signal period, change the “Counter Period” option. Set it to 1000, to have a 1ms waveform.

To enable dead time insertion, enter "200" into the "Dead Time" option. Note that this doesn't result in a 200µs dead time duration; the dead time setting and the actual duration of the dead time will be explained later.

To set the PWM signal duty cycle, set the "Pulse" option for Channel 1. To have a 50% duty cycle, enter 500. In the end you should have a configuration like this:

Generating the Project

From the Project menu, select "Generate Project". Enter the project name and location, and select the toolchain/IDE you want to use. Note that STM32CubeMx itself doesn't provide any IDE or compiler; you have to set up the IDE yourself. When you click "OK", Cube may ask about some downloads. Select OK, and wait until they complete. At the end, the project should be created in the selected location.

Code additions

Before compiling the project, some additions should be made to actually enable the PWM outputs. Open the "main.c" file and add these lines before the main "while" loop.


HAL_TIM_PWM_Start(&htim1, TIM_CHANNEL_1);
HAL_TIMEx_PWMN_Start(&htim1, TIM_CHANNEL_1); // turn on complementary channel


Be careful to add the code in between the "user code" comments. When you change the configuration and re-generate the project, Cube will make sure these lines are retained. Any other line that is not inside "user code" comments will be deleted.

That should be all. Compile and flash.

Duration of the Dead Time

The dead time setting is an 8-bit value; as a result, it can be set between 0 and 255. This setting configures the dead time duration with respect to the timer input clock. Note that this clock is before the "prescaler". In our example, the timer running frequency is 1 MHz but the input frequency is 8 MHz. Dead time is determined from this clock. That means the dead time duration is set in (1 / 8MHz = 125ns) steps. But that's not all! I believe that, to make it possible to set even greater durations, ST made things a little complicated. Here are the rules:

With X as the dead time setting, T the actual dead time duration, and t the clock period (125ns in our case):

If X <= 127:

    T = X * t ⇛ in between 0 .. 127t

If X >= 128 and X <= 191:

    T = (64 + (X[5..0])) * 2 * t ⇛ in between 128t .. 254t

If X >= 192 and X <= 223:

    T = (32 + (X[4..0])) * 8 * t ⇛ in between 256t .. 504t

If X >= 224:

    T = (32 + (X[4..0])) * 16 * t ⇛ in between 512t .. 1008t

Let's see how we can calculate the required setting for a desired duration with an example. Let's say we want a 75µs dead time duration, and our timer clock is 8MHz, which means 125ns steps (t). Dividing 75µs by 125ns, we need 600t. This means X should be in the region X >= 224 (the 4th option). Subtract the offset portion, which is 32 * 16 * t = 512t: the result is 600t - 512t = 88t. Divide this by the multiplier, 16, giving 88t / 16 = 5,5t. As you can see, we arrived at a fractional number, which means we cannot obtain an exact 75µs duration; we have to settle for a little less or a little more. Let's pick 6. Add this to 224 to get the final setting value: X = 224 + 6 = 230. This should give us a 76µs dead time duration.

I hope this explanation was clear. If you didn’t understand, please read the description of TIMx_BDTR register in the manual (DTG bits).

Debugging Bluetooth LE Device Using bluetoothctl Tool Under Linux

First of all, don't bother with the hcitool and hciconfig tools; they are deprecated. Use bluetoothctl to connect to and test your Bluetooth LE devices. It's an interactive command line utility that provides a convenient interface for testing and probing your devices. Here is a short introduction on how to connect to an LE device that implements the "Health Thermometer Service" over the GATT profile.

First, start the bluetoothctl tool. It starts an interactive session, so the rest of the commands will be entered into its prompt, which appears like this:


You can enter “help” command to see a list of usable commands.

Enter “scan on” command to start the device discovery. This should print something like this if it finds your device:

[CHG] Controller E8:2A:EA:29:3E:6F Discovering: yes
 [NEW] Device 00:61:61:15:8D:60 Thermometer Example

To connect to the device use the “connect” command like this:

connect 00:61:61:15:8D:60

By the way, you can use TAB completion to quickly enter IDs, addresses etc.

This should print a success message and list the device characteristics right after that. In my case I got the response below, but it will differ depending on your device. By the way, I'm using an example project that came with my Silabs Bluetooth development kit.

Attempting to connect to 00:61:61:15:8D:60
 [CHG] Device 00:61:61:15:8D:60 Connected: yes
 Connection successful
 [NEW] Primary Service
 Generic Attribute Profile
 [NEW] Characteristic
 Service Changed
 [NEW] Descriptor
 Client Characteristic Configuration
 [NEW] Primary Service
 Device Information
 [NEW] Characteristic
 Manufacturer Name String
 [NEW] Primary Service
 Health Thermometer
 [NEW] Characteristic
 Temperature Measurement
 [NEW] Descriptor
 Client Characteristic Configuration
 [NEW] Primary Service
 Vendor specific
 [NEW] Characteristic
 Vendor specific
 [CHG] Device 00:61:61:15:8D:60 UUIDs: 00001800-0000-1000-8000-00805f9b34fb
 [CHG] Device 00:61:61:15:8D:60 UUIDs: 00001801-0000-1000-8000-00805f9b34fb
 [CHG] Device 00:61:61:15:8D:60 UUIDs: 00001809-0000-1000-8000-00805f9b34fb
 [CHG] Device 00:61:61:15:8D:60 UUIDs: 0000180a-0000-1000-8000-00805f9b34fb
 [CHG] Device 00:61:61:15:8D:60 UUIDs: 1d14d6ee-fd63-4fa1-bfa4-8f47b42119f0
 [CHG] Device 00:61:61:15:8D:60 ServicesResolved: yes
 [CHG] Device 00:61:61:15:8D:60 Appearance: 0x0300

To get the list of characteristics you can use the “list-attributes” command, which should print the same list as above:

list-attributes 00:61:61:15:8D:60

To read an attribute you first select it, with the -you guessed it- “select-attribute” command:

select-attribute /org/bluez/hci0/dev_00_61_61_15_8D_60/service000a/char000b

After that you can issue the “read” command, without any parameters.

To read a characteristic continuously (if the characteristic supports it), use the "notify" command:

notify on

This should periodically print the characteristic value sent from the device. To stop run:

notify off

And disconnect with the “disconnect” command. So easy.


Hide Distracting “Hot Network Questions” on StackExchange (StackOverflow etc) sites

I've created a Greasemonkey script to hide the rather distracting "hot network questions" section on StackExchange sites. You have no idea how much of my time has been lost to that innocent looking list. It is possible that it damaged my productivity more than reddit did!

The script also adds a "hide/show" button to bring the section back, in case you want to be distracted.

Get it here.

Using ARM Cortex-M SysTick Timer for Simple Delays

ARM Cortex-M based microcontrollers have a system tick timer included as part of the core. It's generally used as the tick timer of an RTOS, hence the name "SysTick". It's also relatively simple, and lacks advanced features:

  • Only counts down
  • 24 bit
  • Auto-Reload register
  • Dedicated interrupt
  • Calibration value for interrupt period
  • Can use system clock or an external clock (*)
  • No input capture, no output compare, nothing fancy

(*) The implementation of the SysTick timer varies somewhat between manufacturers, so make sure to check the documentation. In the STM32F4, this external clock is the processor clock divided by 8. You can find more information in the STM32F4 Programming Manual, PM0214.

Here are the registers used to configure systick timer.

  • CTRL : configure systick timer, select clock source, enable interrupt, check countdown hit
  • LOAD : auto reload register, after hitting 0, count (VAL register) is reset to this value
  • VAL : current value register
  • CALIB : calibration register, only relevant for interrupts

Below is a function that delays by a given number of ticks.

void delay_ticks(unsigned ticks)
{
    SysTick->LOAD = ticks;
    SysTick->VAL = 0;
    SysTick->CTRL = SysTick_CTRL_ENABLE_Msk;

    // COUNTFLAG is a bit that is set to 1 when counter reaches 0.
    // It's automatically cleared when read.
    while ((SysTick->CTRL & SysTick_CTRL_COUNTFLAG_Msk) == 0);
    SysTick->CTRL = 0;
}

By default, the timer is configured to use the system clock divided by 8. With a system clock of 84MHz, the timer input is 10,5MHz, which corresponds to a 0,095µs ≈ 0,1µs tick period. For example, calling this function with ticks=105 results in a 10µs delay.

Let’s write functions that accept microseconds and milliseconds as parameters.

static inline void delay_us(unsigned us)
{
    delay_ticks((us * (STM32_SYSCLK / 8)) / 1000000);
}

static inline void delay_ms(unsigned ms)
{
    delay_ticks((ms * (STM32_SYSCLK / 8)) / 1000);
}

Since these are very simple one-liners, it makes sense to put them in a header file as inline functions so that the compiler can optimize them, including the multiplications and divisions. Note that STM32_SYSCLK is a define for the main system clock frequency, in my case 84000000. Its name can change depending on the platform/libraries you are using.

Embedding Python in C#

Disclaimer: I don’t know C#. Take any line of code you see below with a grain of salt.

I have developed a USB based device. Now I have to provide a driver for it to the client (actually another developer team). The device is actually based on USB UART, but we have a slightly complex protocol running over it, so I had to develop a library for interfacing. I created one in Python. It wasn’t terribly hard, thanks to an awesome Python library. But the client team works in C#. I couldn’t find a similar library for C#, and we are short on time. I know the idea of embedding a binary parser/serializer written in Python into a C# library sounds awful, but this is a prototype anyway so…

This is my story of embedding Python into a C# application, using Mono. I got confirmation from a friend that this also works on Visual Studio based projects.

First things first: we are not embedding the regular Python implementation, called CPython, into C#. Instead we will use another Python implementation called IronPython. This Python implementation is identical to the original one from a language perspective, but it’s designed to be integrated into the .NET platform. It can make use of .NET libraries, but there are also disadvantages. For example, you can’t use a Python library if it makes use of C libraries. Also, only a 2.7 version exists. If your Python code makes use of Python 3 only modules, or modules that have C library dependencies, you’d better start looking for alternatives to them.

Everything you need is included in the IronPython release package. Download it from here: http://ironpython.net/download/ and extract/install to your preferred location.

Create a new C# project. For this example I’m creating a console based project. Now add IronPython DLLs to your project. You can find these in the IronPython installation folder.

  • IronPython.dll
  • Microsoft.Scripting.dll

You will also need to add the Microsoft.CSharp DLL to your project. In MonoDevelop this can be done from the “References” window, “All” tab. I’m guessing it’s similar in Visual Studio (maybe not even required).

And here is a simple test code running python:

using System;
using IronPython.Hosting;
using IronPython.Runtime;
using IronPython;
using Microsoft.Scripting.Hosting;

namespace blog
{
    class MainClass
    {
        public static void Main (string[] args)
        {
            ScriptEngine engine = Python.CreateEngine ();

            var result = engine.Execute ("2+2");

            Console.WriteLine (result);
        }
    }
}
This should be fairly simple to understand. Using Python.CreateEngine we create an instance of the Python interpreter to run our code. And using the Execute() method we run a piece of Python code and assign its result to a C# variable.

Now let’s do something a little more useful. We will run a script, and access variables from inside this script. For this we will need to use a ScriptScope.

using System;
using IronPython.Hosting;
using IronPython.Runtime;
using IronPython;
using Microsoft.Scripting.Hosting;

namespace blog
{
    class MainClass
    {
        const string program = @"
a = 3
b = 4
a = b*2
";
        public static void Main (string[] args)
        {
            ScriptEngine engine = Python.CreateEngine ();

            // create a ScriptSource to encapsulate our program and a scope to run it
            ScriptSource source = engine.CreateScriptSourceFromString (program);
            ScriptScope scope = engine.CreateScope ();

            // Execute the script in 'scope'
            source.Execute (scope);

            // access the variables from the 'scope'
            var varA = scope.GetVariable("a");
            var varB = scope.GetVariable("b");

            Console.WriteLine ("a: {0}, b: {1}", varA, varB);
        }
    }
}
Now let’s try calling a Python function from C#. It’s surprisingly easy.

using System;
using IronPython.Hosting;
using IronPython.Runtime;
using IronPython;
using Microsoft.Scripting.Hosting;

namespace blog
{
    class MainClass
    {
        const string program = @"
def sum(a, b):
    return a+b
";
        public static void Main (string[] args)
        {
            ScriptEngine engine = Python.CreateEngine ();

            ScriptSource source = engine.CreateScriptSourceFromString (program);
            ScriptScope scope = engine.CreateScope ();

            source.Execute (scope);

            // get the function from the python side (remember functions are
            // first class objects in python, you can refer to them like they
            // are variables)
            var sumFunc = scope.GetVariable("sum");

            var result = sumFunc (2, 2);

            Console.WriteLine ("result: {0}", result);
        }
    }
}
Now let’s import some Python modules in our script. You will see that no module other than sys can be imported. That’s because the IronPython instance that we embedded inside our application doesn’t know where its standard libraries are. “But they are right next to the DLLs, inside the directory named Lib/”, you will say. Still, it doesn’t know about them. You have two options: either move those libraries into your application’s running directory (not advised), or let IronPython know where they are.

Important note: before running the example below, add IronPython.Modules.dll to your project as well.

using System;
using IronPython.Hosting;
using IronPython.Runtime;
using IronPython;
using Microsoft.Scripting.Hosting;

namespace blog
{
    class MainClass
    {
        const string program = @"
import os
print('running in %s' % os.getcwd())
";
        public static void Main (string[] args)
        {
            ScriptEngine engine = Python.CreateEngine ();
            var paths = engine.GetSearchPaths ();
            paths.Add("/home/heyyo/Apps/IronPython-"); // change this path according to your IronPython installation
            engine.SetSearchPaths (paths);

            ScriptSource source = engine.CreateScriptSourceFromString (program);
            ScriptScope scope = engine.CreateScope ();

            source.Execute (scope);
        }
    }
}

If your script needs other modules, you should add their paths too, using paths.Add().

That’s all for now. I plan to update this post as I discover more stuff in my adventure into the C# land!

Don’t use SessionAsync

If you are trying to write a Cinnamon extension/applet and getting this error when trying to parse some data that you have fetched via Soup.SessionAsync:

Failed to convert UTF-8 string to JS string: Invalid byte sequence in conversion input

Using Soup.Session instead of Soup.SessionAsync may solve your problem.

I encountered this problem while working with the StackExchange API, which always returns data in compressed form regardless of the request headers. It turns out Soup.SessionAsync doesn’t automatically decode data according to its encoding headers. You have to initiate decoding explicitly.

Or you could just use Soup.Session, which handles decoding as you would expect.

I wanted to write this down because, for historic reasons, all of the (already small number of) JavaScript Soup library usage examples are written with the SessionAsync class. For simple purposes, though, the Session class can be used; it is capable of handling both synchronous and asynchronous requests.