Raspberry Pi AD/DA Board library for Windows 10 IoT Core

A fully functional C# library (Windows 10 IoT Core) for the WaveShare “Raspberry Pi High-Precision AD/DA Expansion Board”

I got myself a Raspberry Pi High-Precision AD/DA Expansion Board to be used in a Windows IoT Core C# application.

The Raspberry Pi High-Precision AD/DA Expansion Board

The board has one 2-channel digital-to-analog converter chip (DAC8552) and an 8-channel analog-to-digital converter chip (ADS1256).

The two converter chips

All documentation provided by WaveShare referred to a Raspberry Pi running Linux, and the source code examples were written in C, so I had to write my own library.

The first challenge was to understand how the Linux code actually communicated with the board, and it turned out to require quite a bit of detective work.

The Linux examples needed a BCM2835 library to work, so I started with its source code to get an understanding of it. It took some time to wrap my head around it, but in the end it turned out not to be especially complicated.

Here are my findings…

The basics of the board

The two converter chips both communicate with the Raspberry Pi over the SPI bus which uses three pins of the Pi: data in, data out and a clock signal.

The Raspberry Pi GPIO pins, showing the SPI bus

Since the two chips share the same communication lines, they somehow need to know when the Pi wants to speak to one or the other.

This is achieved by using two GPIO pins, one for each chip, controlled by the Raspberry Pi (which serves as the master of the communication). When the Pi pulls the signal LOW on one of these pins, the corresponding chip is selected. After the communication the signal is set back to HIGH. (This is a common technique called chip select.)

The Raspberry Pi GPIO pins, showing the “chip select” pins

Now, simply put, by sending different commands over the SPI bus to the two chips, the Raspberry Pi is able to both set the voltage (between 0 and 5 V) on two different output terminals (using the DAC8552 chip) and read the voltage (between -5 and 5 V) on eight different input terminals (using the ADS1256 chip).

The analog input (purple) and output (blue) terminals

In the picture above the input and output terminals are marked. The top green plastic bar consists of 13 screw terminals to which you can connect both the analog input and output signals.

The yellow block of pins to the left is designed to fit WaveShare’s various analog sensors.

The eight input pins are named AD0 to AD7 and the two output pins are named DAC0 and DAC1.

The basics of my code library

Even though the two chips are on the same board, I chose to put the code that handles them in different classes. This is mainly because they deal with totally different things.

They do, however, share a common wrapper class I named AdDaBoard (implementing the public IAdDaBoard interface). This class owns the two chip-specific classes Ads1256 and Dac8552, named after the two chips. These two classes implement the public interfaces IAnalogInput and IAnalogOutput, respectively.

To get an instance of the AdDaBoard you’ll have to call GetAdDaBoard() on the static class AdDaBoardFactory instead of just creating a new instance. The reason is that the chip communication requires the .NET class SpiDevice, which must be instantiated asynchronously – and a .NET constructor (in this case for the AdDaBoard) cannot be asynchronous.
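A minimal usage sketch (the Input/Output property names are my assumptions for illustration; the factory, interface and method names are from the library description above):

```csharp
// Sketch: obtain the one-and-only board instance asynchronously.
// The Input/Output property names are assumptions.
IAdDaBoard adDaBoard = await AdDaBoardFactory.GetAdDaBoard();

IAnalogOutput analogOutput = adDaBoard.Output; // the DAC8552 side
IAnalogInput analogInput = adDaBoard.Input;    // the ADS1256 side
```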

Sharing the SPI bus

I wanted a clear and foolproof handling of the SPI bus. Codewise, I wanted to:

  • Make sure only one of the two chips could use the SPI bus at any given time
  • Automatically control the output level of the two chip select pins

In the end I constructed a SpiComm class, managed by a SpiCommController. There is one SpiComm instance for each chip.

The SpiComm class implements the public ISpiComm interface, which has two versions of an Operate method. The Operate methods take a lambda expression that temporarily gives access to the actual .NET SpiDevice class. One of the methods returns a value (of any type) and the other one doesn’t.

Calling Operate will first enter a lock statement, locking on an object shared by both SpiComm instances. This ensures that the two SpiComm instances cannot operate the SPI bus simultaneously. Then the chip select pin is given a low output signal. Now the SpiDevice is handed over to the calling code, and when it returns, the chip select pin is set back to high and the lock is released.
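A sketch of what calling Operate might look like (the SpiComm property name and the byte values are illustrative assumptions; only the Operate overloads are from the description above):

```csharp
// The lambda runs with the lock taken and the chip select pin pulled low.
adDaBoard.Output.SpiComm.Operate(spiDevice =>
{
    spiDevice.Write(new byte[] { 0x10, 0x80, 0x00 }); // some chip command
});

// The other overload returns a value produced inside the lambda:
byte[] reply = adDaBoard.Input.SpiComm.Operate(spiDevice =>
{
    var buffer = new byte[3];
    spiDevice.Read(buffer);
    return buffer;
});
```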

Both IAnalogInput and IAnalogOutput expose the ISpiComm as a public property – this way the end user (you!) can gain access to the “raw” SPI bus exactly as the library code does. The reason is that my library is not complete; there are a number of features of the ADS1256 and DAC8552 chips that I left out.

The analog output converter

The analog output converter was the easiest chip to get to work. To specify one of the output voltage levels all it took was to send three bytes to the chip.

The first byte is a set of control bits, determining which of the two outputs to affect – and whether the voltage value should only be stored in the chip’s internal buffer or actually go out on the pin.

The last two bytes hold the output voltage as a 16-bit number. A value of 0x0000 means the lowest possible output voltage (the same as GND; normally 0 Volt) and 0xFFFF means the highest possible (the same as VREF, normally 3.3 or 5 Volt).

The VREF voltage can be easily switched between 3.3 Volt and 5 Volt using a jumper on the board:

The VREF jumper position

Placing the jumper covering the top two pins (of the three marked above) connects 5 Volt to the VREF connection, and placing it covering the bottom two pins connects 3.3 Volt.

(The middle pin of the three is the VREF, and I assume you can connect it to other reference voltages.)

In my library code I chose to have two ways of specifying the output voltage; one taking the wanted voltage and the currently used VREF – and another taking the wanted normalized voltage (between 0.0 and 1.0). Both methods are called SetOutput but have different parameters.
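As a sketch (the exact parameter order is an assumption), setting output 0 to half the reference voltage could look like either of these:

```csharp
// Variant 1: wanted voltage plus the currently used VREF (5 V jumper setting).
adDaBoard.Output.SetOutput(0, 2.5, 5.0);

// Variant 2: normalized value between 0.0 and 1.0.
adDaBoard.Output.SetOutput(0, 0.5);

// Either way, the chip ends up receiving roughly the same 16-bit code:
// (ushort)Math.Round(normalizedValue * 0xFFFF)
```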

There are also methods (SetOutputs) for setting both output values at once. Using them, you ensure that the two outputs are changed at exactly the same time (should you need that).

Take a look in the datasheet of the DAC8552 chip to see all the details. (Hint: they call the two output pins A and B.)

The analog input converter

The input converter chip ADS1256 was a bit more complicated. You can find the datasheet here.

First of all, it required one more GPIO pin called Data Ready (or DRDY in the datasheet).

The Raspberry Pi GPIO pins, showing the Data Ready pin used by the ADS1256 chip

This is a signal the chip uses to tell the Pi whether or not it’s ready to accept new commands on the SPI bus. If the Pi reads a low level on the pin, the chip is ready.

The behavior of the chip is controlled via a set of internal registers (see page 30 in the datasheet). In my library I make use of the first four (although there are eleven in total).

The registers can be controlled via my library using the properties of the IAnalogInput.

At startup I read the registers and convert the current settings to the class properties.

Changing any of the properties does not have an immediate effect. Not until a reading of any of the analog input pins are they written to the registers (and they are only written if they have changed since the last reading).

Nothing bad will happen if any of the properties is changed during a reading (perhaps by another thread); the ongoing reading will use the property settings as they were when the reading began – the changed properties will affect the next reading.

One property is called AutoSelfCalibrate. This will make the chip re-calibrate before the next reading if any of the affected registers have changed since the last calibration. There is also a method called PerformSelfCalibration that will perform a calibration on demand. But I think the auto-calibration feature is the best.

The Gain property is an enum that can be used to magnify a smaller reading. Using a gain of 1 allows the input to be in the full range of -5 V to +5 V. A gain of 2 allows only half of that range – but with twice the resolution. A gain of 4 allows an input in the range of ±1.25 V, and the highest gain (64) can only read an input between ±78.125 mV (see page 16 in the datasheet; it’s called PGA there, short for Programmable Gain Amplifier).

The DataRate is an interesting property. It specifies how fast the chip should sample the input levels. It also determines how much filtering is applied, if I understand the datasheet correctly: a very fast data rate means less filtering, and vice versa. More filtering means a more exact value. The highest rate is 30,000 samples per second, but in reality you cannot squeeze that many readings out of the board – not on a Raspberry Pi running Windows 10 IoT Core, which is not a real-time operating system in this sense. This means, for instance, that the analog input is not very suitable for sampling sound. (I guess you could, but you would get a very lo-fi result.)

The effects of the data rate appear all over the datasheet – search for “30,000” in it!

There is an “open/short sensor detection” feature on the chip, controlled via the DetectCurrentSources property. You can read more about it on page 15 in the datasheet.

The final property is UseInputBuffer, which controls whether or not to use the embedded input buffer. Read more about it on page 15 in the datasheet. It is “low-drift chopper-stabilized”, which sounds very cool, although I have no idea what it means… 😛

Reading an input value can be done in two ways: either simply reading one of the eight pins (GetInput), or getting a differential reading between two pins (GetInputDifference).

Either way you need to specify the vRef parameter, but that is just to scale the returned value to the right level. It doesn’t necessarily have to be the actual VREF voltage; it just sets the range of the returned value. If you for instance say vRef is 1.0 you will get a reading between -1.0 and 1.0.
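A sketch of the two read methods (the parameter order is an assumption; the method names and the vRef scaling are from the description above):

```csharp
// Single-ended reading of pin AD0, scaled against a 5 V reference:
double voltage = adDaBoard.Input.GetInput(5.0, 0);           // between -5.0 and +5.0

// Differential reading between pins AD2 and AD3, normalized:
double diff = adDaBoard.Input.GetInputDifference(1.0, 2, 3); // between -1.0 and +1.0
```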

Thread safety

The code should be completely thread safe. You may call any method or change any properties from parallel threads without having to do your own locking.

Running the demos

The board comes with a couple of components that simplify playing with both the inputs and outputs.

The embedded testing gadgets on the board

The blue marking shows two LEDs that can be connected to the output signals. To rewire the output signals to them you must place the jumpers marked with the green 1 and 2. (Jumper 1 connects output pin DAC1 to LED B and jumper 2 connects output pin DAC0 to LED A.)

The big red marking is a potentiometer that may be connected to the input pin AD0 using the marked jumper 4. Turning the knob anticlockwise lowers the voltage on the first analog input pin and turning it clockwise raises it.

The smaller red marking shows a photo resistor that may be connected to the input pin AD1 using the marked jumper 3. Exposing it to light will affect the voltage to the second input pin.

To run my demo application RPi.AdDaBoard.Demo all the four jumpers should be connected (to make use of both the LEDs and both the input resistors).

In the constructor of the MainPage you will find a simple way of choosing which demo to run. There is one simple output demo, one simple input demo and one that combines both input and output.

The output demo makes the LED lights alternately go from completely off to full brightness and back repeatedly.

The input demo reads the voltage levels of the potentiometer knob and photo resistor and writes the values to the Visual Studio debug output console.

The input/output demo is a blend of the two; it takes a reading of the potentiometer knob and puts the value to the LEDs.

The demo code should make it easy enough to understand how to use the library, but don’t hesitate to ask questions!

The Source Code

Can be found on GitHub under emmellsoft / RPi.AdDaBoard.

Connector Mapping


A copy of this blog post can be found among my projects on hackster.io.

Find Your WinIoT Devices!

Developing applications for Windows 10 IoT Core on the Raspberry Pi, you soon get familiar with the “Windows IoT Core Watcher” that is installed on your development machine together with the ISO for the Raspberry Pi image:

Windows IoT Core Watcher

I was thinking that it would be nice to have this functionality in my own code, so I used Wireshark to try to find out the magic behind the scenes.

It turned out that the Raspberry Pi (or rather Windows 10 IoT Core) broadcasts a 150-byte UDP packet roughly every five seconds, carrying the information presented by the watcher application.

This is the content of the byte array my device was sending (where the middle part of the MAC address bytes are blanked out with XX):

The 150 bytes my device was broadcasting

It wasn’t hard to realize that the bytes were a UTF-16 text string, which means that the packet effectively contained 75 Unicode characters.

Since only ASCII characters are present (actually only English letters, regular digits and a couple of punctuation symbols), every second byte is unused. (They will only be utilized if you manage to give your device a name with non-English characters.)

Decoding the bytes as UTF-16 characters you will get this:

The bytes decoded as 75 UTF-16 characters

Note that the empty cells with lighter backgrounds above contain binary zero and are therefore completely empty (i.e. not even space characters — a total blank, as you can tell from the byte array).

Anyway, I wrapped this into a library in C#, for simple integration in other projects.

As an example, here is the Main method of a regular Windows console application that listens for devices found on the network:
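(The original listing was an image; this is a reconstruction. The namespace and the exact shape of the event args are assumptions based on the description in the next section, so treat it as a sketch rather than the verbatim sample.)

```csharp
using System;
using Emmellsoft.IoT.WinIotCoreListener; // namespace assumed

internal class Program
{
    private static void Main()
    {
        // Create the listener; it keeps firing events until disposed.
        using (IWinIotCoreListener listener = WinIotCoreListenerFactory.Create())
        {
            listener.OnDeviceInfoUpdated += (sender, e) =>
            {
                // e.UpdateStatus is Found, Updated or Lost.
                Console.WriteLine(
                    $"{e.UpdateStatus}: {e.DeviceInfo.MachineName} " +
                    $"({e.DeviceInfo.IpAddress}, {e.DeviceInfo.MacAddressString})");
            };

            Console.WriteLine("Listening for Windows IoT Core devices. Press Enter to quit.");
            Console.ReadLine();
        }
    }
}
```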

How it works

Get an IWinIotCoreListener by calling the Create method of the static WinIotCoreListenerFactory. As long as you don’t dispose the listener given to you, it will continue to fire the OnDeviceInfoUpdated event. This event is fired each time a new device is found, an existing device changes a property or when a device stops broadcasting its data package. The UpdateStatus property of the event args tells you the kind of change (an enum saying Found, Updated or Lost). The DeviceInfo property of the event args holds all the properties received in the broadcast package: MachineName, IpAddress and the MAC address — both in string format (MacAddressString) and as a byte array (MacAddressBytes).

You can also — at any time — get the current list of devices from the DeviceInfos property of the listener interface.

Calling the Dispose method on the listener will make it stop receiving broadcasts and free all its resources.

Get the library

You can get the library by downloading this NuGet package, or, if you prefer, you can get the full source code from GitHub.

Enjoy! 🙂

Async singleton initialization

Even these days you once in a while have use for a singleton object. But how do you achieve a thread-safe singleton that requires asynchronous initialization?

First, let’s start off with a good ol’ singleton class that can be constructed synchronously.

Synchronous Singleton

Assume your singleton class needs some initialization data that it can construct itself (synchronously). To make it thread-safe, we rely on locking on a static read-only object.
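(The original listing was an image; this is a reconstruction of the class described, with SomeData as a stand-in for the actual initialization data.)

```csharp
public class SomeData
{
    // Stand-in for whatever initialization data the singleton needs.
}

public class MySingleton
{
    private static readonly object SyncObj = new object();
    private static MySingleton _singleton;

    private readonly SomeData _someData;

    private MySingleton(SomeData someData)
    {
        _someData = someData;
    }

    public static MySingleton Singleton
    {
        get
        {
            // Fast path: skip the lock once the instance exists.
            if (_singleton == null)
            {
                lock (SyncObj)
                {
                    if (_singleton == null)
                    {
                        _singleton = CreateSingleton();
                    }
                }
            }

            return _singleton;
        }
    }

    private static MySingleton CreateSingleton()
    {
        SomeData someData = CreateSomeData();
        return new MySingleton(someData);
    }

    private static SomeData CreateSomeData()
    {
        return new SomeData(); // synchronous initialization work goes here
    }
}
```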

In the version above the Singleton property is designed to work as quickly as possible, only entering the lock if there is a chance that the class hasn’t yet been initialized. Within the lock, a method that creates the MySingleton object is called. This way it’s ensured that only one instance of MySingleton is ever created.

Asynchronous Singleton

But what to do if the CreateSomeData method worked asynchronously, returning a Task<SomeData> instead, using the await keyword? Let’s modify the code in this way:
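(Reconstructed from the description; only the affected method is shown, with Task.Delay as a stand-in for the real asynchronous work.)

```csharp
private static async Task<SomeData> CreateSomeData()
{
    await Task.Delay(100); // stand-in for real asynchronous work
    return new SomeData();
}
```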

Making this change, we should play nice and propagate the async’ness to the CreateSingleton method:
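(Again a reconstruction of the single affected method.)

```csharp
private static async Task<MySingleton> CreateSingleton()
{
    SomeData someData = await CreateSomeData();
    return new MySingleton(someData);
}
```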

That would naturally imply that the Singleton property should also be asynchronous, returning a Task<MySingleton> instead.

So, is that possible?

First of all, it turns out that it’s not allowed to use the async keyword on a property. But this is no biggie; we can make do with a method instead.

But more importantly, we are not allowed to use a lock around an await keyword. (This is actually a good thing, since the lock would really work against us here – so we should thank the C# team for not allowing this!)

So how do we do this then? As a starter, a naïve, non-thread-safe solution would look like this:
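(A reconstruction; every access starts a new CreateSingleton call, so no instance is ever shared.)

```csharp
// Naïve and NOT a singleton: a fresh task is started on every access.
public static Task<MySingleton> Singleton
{
    get { return CreateSingleton(); }
}
```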

Requesting the Singleton property would of course lead to a call to the CreateSingleton method every time – it would hardly be a singleton…

But the solution to make it both a singleton and thread-safe turns out to be ridiculously simple – we let the inner mechanisms of the Task class work for us!

So, how does a task work?

Let’s say you have an instance of a Task<T> and you await it once. Now the task is executed, and a value of T is produced and returned to you. Now what if you await the same task instance again? In this case the task just returns the previously produced value immediately in a completely synchronous manner.

And what if you await the same task instance simultaneously from multiple threads (where you would normally get a race condition)? Well, the first one (since there will be one that gets there first) will execute the task code while the others wait for the result. Then, when the result has been produced, all the awaits will finish (virtually) simultaneously and return the value.
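A tiny illustration of this behavior (the names are mine):

```csharp
using System;
using System.Threading.Tasks;

internal static class TaskReuseDemo
{
    private static int _executionCount;

    private static async Task<int> ProduceValueAsync()
    {
        _executionCount++;    // counts how many times the body runs
        await Task.Delay(10);
        return 42;
    }

    public static async Task RunAsync()
    {
        Task<int> task = ProduceValueAsync(); // the task starts here, once

        int first = await task;  // runs the body
        int second = await task; // completed task: the cached result is returned

        Console.WriteLine($"first={first}, second={second}, executions={_executionCount}");
        // executions is 1 -- the task body ran only once.
    }
}
```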

So, a Task is thread-safe, and it looks as if we could use this power to our advantage here!

In the end, all we need is to replace the old Singleton property with this one:
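(Reconstructed; this is the C# 6 read-only auto-property flavor mentioned below.)

```csharp
// Initialized exactly once when the type is initialized; every
// "await MySingleton.Singleton" then shares this single task.
public static Task<MySingleton> Singleton { get; } = CreateSingleton();
```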

Or, rewritten in a more classic C# way (without making use of the fancy new C# 6 “read-only properties” and “property initializers”):
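(Reconstructed pre-C# 6 equivalent, with a read-only static field behind a classic property.)

```csharp
private static readonly Task<MySingleton> SingletonTask = CreateSingleton();

public static Task<MySingleton> Singleton
{
    get { return SingletonTask; }
}
```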

So the dead simple solution is to assign the task returned by the async method to a read-only static Task field (or a read-only property in C# 6). This gives you both a singleton and thread-safety for free!

Lazy Synchronous Singleton

Being empowered with the super-simple async version, you might wonder if there really is no similar way to achieve this in the synchronous version?

Of course there is!

Revisiting the synchronous first version of the MySingleton in this blog post, you may replace the SyncObj and _singleton fields and the Singleton property with these two lines:
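(Reconstructed; note that the Lazy<T> constructor is handed the factory method itself, not an instance.)

```csharp
private static readonly Lazy<MySingleton> LazySingleton =
    new Lazy<MySingleton>(CreateSingleton);

public static MySingleton Singleton => LazySingleton.Value;
```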

Or, if you prefer the pre-C# 6 code:
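(Reconstructed pre-C# 6 version of the same two members.)

```csharp
private static readonly Lazy<MySingleton> LazySingleton =
    new Lazy<MySingleton>(CreateSingleton);

public static MySingleton Singleton
{
    get { return LazySingleton.Value; }
}
```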

Wow, magic! No need for locks or anything! So how does this work?

Well, the Lazy<T> can be seen as the synchronous counterpart of the Task<T> in some respects. At least it has the same singleton and thread-safety properties, and keeps its result for latecomers.

As you can see, the constructor of the Lazy<MySingleton> is given the means of producing a MySingleton (i.e. the CreateSingleton method) – not an instance of the type directly. Then, when someone requests the Singleton property, the Value of the Lazy class is accessed. In the same manner as with the Task, the first one reaching for the Value will make the actual call to the CreateSingleton method. Any other thread asking at the same time will simply hang in the Value getter, and continue once the CreateSingleton method is done and the Value is produced. Further on, any consecutive calls to the Value getter will return immediately with the same instance of MySingleton. So, again, the Lazy gives us both a singleton and thread-safety.

And there was much rejoicing!

Exporting a certificate without its private key and password-protect the output? Beware, there is a serious trap!

So you have an instance of an X509Certificate2 (or X509Certificate) that you want to export as a byte array – and you want to exclude the private key – and encrypt the output using a password.

You have found the Export method of the certificate class which takes one of the X509ContentType enum values and an optional password. The MSDN help informs that you must choose between X509ContentType.Cert, X509ContentType.SerializedCert and X509ContentType.Pkcs12 for the export to work. You also find out (by experimenting or googling) that exporting using X509ContentType.Cert produces a serialized certificate without the private key – just what you want! Hooray!

Now you specify a password and think you’re OK.

Fail. You are not. The password is actually ignored.

If you try this:
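(A reconstruction; cert is assumed to be an existing X509Certificate2 instance.)

```csharp
byte[] a = cert.Export(X509ContentType.Cert);
byte[] b = cert.Export(X509ContentType.Cert, "SomePassword");
byte[] c = cert.Export(X509ContentType.Cert, "AnotherPassword");
```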

The resulting byte arrays a, b and c will have the exact same content, even though you specified different passwords! (And it behaves the same if you use the SecureString class.)

Personally I would expect the Export method to throw an exception when specifying X509ContentType.Cert together with a password (other than null). That would give me, as a developer, a clear sign that I am trying to use an unsupported parameter combination, which gives me a chance to try to figure out a work-around. As it is now, I am led to believe that the output content is in fact encrypted.

It is also possible to recreate the certificate again from the byte array giving any password:
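(Reconstructed; the passwords shown are arbitrary.)

```csharp
// Both reconstructions succeed, no matter what password is supplied.
var certX = new X509Certificate2(a, "SomePassword");
var certY = new X509Certificate2(a, "a password never used when exporting");
```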

Both certX and certY above will be correctly reconstructed.

Here is a simple solution you can use to export a certificate without its private key and encrypt the exported bytes:
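(The method name and signature below are my own. The key idea is to round-trip the certificate through a plain Cert export, which drops the private key, and then make a password-protected PKCS #12 export of that public-only copy.)

```csharp
using System.Security.Cryptography.X509Certificates;

public static byte[] ExportCertificate(X509Certificate2 certificate, string password, bool includePrivateKey)
{
    if (includePrivateKey)
    {
        // PKCS #12 honors the password.
        return certificate.Export(X509ContentType.Pkcs12, password);
    }

    // Strip the private key by round-tripping through a plain certificate
    // export, then password-protect the public-only copy as PKCS #12.
    var publicOnly = new X509Certificate2(certificate.Export(X509ContentType.Cert));
    return publicOnly.Export(X509ContentType.Pkcs12, password);
}
```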

Now calling this method, specifying two different passwords and asking not to include the private key…
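(Reconstructed; ExportCertificate is my assumed name for the solution method, and cert is an existing X509Certificate2.)

```csharp
byte[] d = ExportCertificate(cert, "SomePassword", false);
byte[] e = ExportCertificate(cert, "AnotherPassword", false);
```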

…generates two byte arrays d and e that are different. Furthermore, if you try to recreate the certificate you must specify the correct password.
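(Reconstructed; only the certZ name is from the original text.)

```csharp
var certZ = new X509Certificate2(d, "SomePassword");    // succeeds
var certW = new X509Certificate2(d, "WrongPassword");   // throws CryptographicException
```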

The certZ will be correctly reconstructed, but the second try (with the wrong password) will throw a CryptographicException with the message “The specified network password is not correct.”

Get the Raspberry Pi’s “Sense HAT” working on Windows IoT

This project was about making the Raspberry Pi “Sense HAT” work on the Windows IoT platform.

I had some initial help from Graham Chow, who had done a great job getting the LEDs and joystick up and running.

Then I contacted Richard Barnett, the father of the RTIMULib used by the official Sense HAT installation. This library was written in C++ and managed the sensor readings. Richard “got onboard” and did the raw porting of his library (so far the sensors used by the Sense HAT) into C#.

I did some refactoring of the library and have included the display and keyboard drivers in a complete C# project solution, which you will find here (including the full source code). There is also a NuGet package for Visual Studio 2015 called Emmellsoft.IoT.RPi.SenseHat.

The solution contains a demo project that shows some of what you can do with the Sense HAT.

Here’s a video showing some of the demos:

(Note: It’s really difficult to photograph or film the Sense HAT due to the relatively slow update of the LED display. I had to use a really old camera with a lousy frame rate to get an acceptable picture; otherwise the display would be blinking and flickering. Our eyes are luckily “bad enough” to find the LED update frequency acceptable.)

Update: There is now support for sprites. Thanks, Johan Vinet, for your tiny Mario! Video below:


Best practices of the Dictionary class

It’s important to know how the standard classes behave so that your code is optimized and doesn’t perform unnecessary checks or make unnecessary calls.

Here are a number of issues I’ve seen time after time:

Fetching a value from the dictionary only if it exists
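The problematic pattern, sketched with a Dictionary<string, int> named dictionary:

```csharp
// Bad: ContainsKey followed by the indexer means two lookups.
if (dictionary.ContainsKey(key))
{
    int value = dictionary[key];
    Console.WriteLine(value);
}
```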

In the first version (bad), two dictionary lookups are performed. It’s like walking into a public library and asking the librarian whether a certain book is in – and when the librarian comes back and says “yes, it’s in”, you ask again: “Good, could you please get it for me?”. In real life this is obviously silly behavior, but in the computer world it’s not that apparent.

In the second version (good), only one lookup is performed:
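A sketch, again assuming a Dictionary<string, int> named dictionary:

```csharp
// Good: TryGetValue performs a single lookup.
int value;
if (dictionary.TryGetValue(key, out value))
{
    Console.WriteLine(value);
}
```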

Accessing all the keys and values

I’ve seen many real-life cases where access to both the key and the value is needed in a foreach loop, and the iteration is performed over the Keys property while the value is accessed by a lookup:
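Sketched with a Dictionary<string, int>:

```csharp
// Bad: every iteration pays for an extra lookup through the indexer.
foreach (string key in dictionary.Keys)
{
    int value = dictionary[key];
    Console.WriteLine($"{key} = {value}");
}
```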
If you want both the key and the value simultaneously it’s better to iterate over the dictionary as KeyValuePairs:
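The same loop, without the extra lookups:

```csharp
// Good: the key and value arrive together.
foreach (KeyValuePair<string, int> pair in dictionary)
{
    Console.WriteLine($"{pair.Key} = {pair.Value}");
}
```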
(Obviously, if you want only the keys you should use the Keys property, and if you want only the values, use the Values property.)

Overwriting a value

There is really no need to remove a key before assigning a new value to it:
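A sketch (newValue being the value to store):

```csharp
// Unnecessary: Remove followed by Add.
dictionary.Remove(key);
dictionary.Add(key, newValue);

// Sufficient: the indexer adds the key or replaces its value in one call.
dictionary[key] = newValue;
```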

Removing a value

Once again, the following implementation leads to two lookups:
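The pattern in question:

```csharp
// Bad: two lookups.
if (dictionary.ContainsKey(key))
{
    dictionary.Remove(key);
}
```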
The Remove method is actually very kind. It doesn’t crash if you try to remove something that isn’t there, but returns a bool telling the result of the operation:
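In other words:

```csharp
// Good: one lookup, and the return value tells whether anything was removed.
bool wasRemoved = dictionary.Remove(key);
```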

When may my call fail?

The call… …fails when

  • dictionary[key] = value: Never (*) – the value will be added if it’s not already in the dictionary, and replaced if it is.
  • value = dictionary[key]: When the entry “key” does not exist.
  • dictionary.Add(key, value): When the entry “key” already exists.
  • dictionary.ContainsKey(key): Never (*)
  • dictionary.Remove(key): Never (*) (And the call returns false if the entry “key” does not exist.)

(*) Well, you get an ArgumentNullException if the key is null…

Fixing the assembly hopping between WP7 and WP8

If you want to develop a Windows Phone app that can run on both WP7 and WP8, you can simply submit one built for Windows Phone 7. But if you want to take advantage of some of the new features of Windows Phone 8, you will need to submit two versions (one for 7 and one for 8). This means that you will have to maintain two copies of the project file – but you obviously don’t want two copies of each source code file.

This can be solved by putting the code in a Portable Class Library and then reference it by the two projects. There are however some limitations using a Portable Class Library, and you might find yourself stuck between a rock and a hard [coded] place trying to use it.

Each time I faced the dual project problem, I ended up using linked source code files instead, i.e. making one of the projects the “main” project and then simply linking all the source code files into the other project (choosing “Add As Link” rather than “Add” when adding an “Existing Item…”). An annoying problem with this solution is that each time you rename a source file you need to remove and re-add the linked file. Aaarrgghh! :[ But for me this was actually less painful than using a Portable Class Library.

Anyway, this way you only have to maintain one copy of the code files that are identical on both platforms. For the ones that differ a bit, you either use compiler switches (e.g. “#if WP7 […] #endif“) – or completely different classes (i.e. non-linked code, if there are “too many” differences to make use of #ifs).

But when it comes to XAML you are in more trouble. For instance, if you are sharing the same implementation of a page in both WP7 and WP8, you will notice that the home of the Pivot (and Panorama) has changed between the two platforms. Ouch.

In WP7 your page declaration looks like this:
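(The original snippet was an image; to the best of my knowledge the WP7 declaration refers to the Microsoft.Phone.Controls assembly:)

```xml
xmlns:controls="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone.Controls"
```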

but your WP8 projects want this line:
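(Again reconstructed; in WP8 the Pivot lives in the Microsoft.Phone assembly instead:)

```xml
xmlns:controls="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone"
```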

As you can see above, the assembly name has changed.

This makes you yearn for the following (illegal) construction:

But unfortunately compiler switches are not allowed in XAML… So the above will NOT compile. 🙁

There is however a pretty simple solution to this. Create the following two classes:
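(A reconstruction of the idea: two trivial subclasses living in your own assembly.)

```csharp
using Microsoft.Phone.Controls;

// Trivial wrappers; your XAML refers to these instead of Pivot/Panorama,
// so the assembly reference in the XAML always points at your own code.
public class MyPivot : Pivot
{
}

public class MyPanorama : Panorama
{
}
```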

This file can be linked across the two platforms intact, without any need for #ifs (since it wasn’t the namespace that changed, “just” the assembly).

Now you simply use the MyPivot wrapper classes (instead of the Pivot directly) in your XAML, and you’re good to go:


Running a “Hello World” written in C# on a Raspberry Pi for Linux n00bs like me

A while ago I found myself in the possession of a Raspberry Pi.

O the joy of a new toy! But what to do with it?

Well, being a C# developer of course I would like to be able to install and run my own applications on it — and I wanted to do that remotely from my regular laptop.

The biggest problem was that I had no Linux skills whatsoever. I’ve been living a comfortable (well, hm… you know…) Windows life ever since the days of the Amiga and Commodore 64. But how hard could it be?

The following are the steps I had to go through to make it work. It’s a hodgepodge of wisdom found all over the internet. So; disclaimer: It worked for me, but I can’t promise it will work for you!


Here we go…


Step 1. Formatting the SD card

To do this properly you’ll need to use the SD Formatter 3.1 application.

Install it and launch it with your SD card inserted. Simply press the Format button. It’s enough to just do a quick format.

Important note!

If you repeat this step later on, your SD card might appear to have lost a few gigabytes, only showing something like 56 MB! This is because Windows cannot see the Linux partitions, and the 56 MB is just some left-overs that Linux didn’t claim.

You have at least a couple of options to reclaim the full space in Windows:

  1. Find yourself a camera or any other non-windows device and try to format it in there. I first tried my age-old compact camera, but it just said that the card was bad. Then I tried my somewhat newer video camera, and it formatted it nicely and I got back my gigabytes. Phew…
  2. Use the dreaded (?) Flashnul (blog post) tool. I haven’t tried this one since my first option worked for me. Be careful…


Step 2. Installing a Linux distribution on the freshly formatted SD card

I used the Raspbian “wheezy” image. (I was somehow drawn to its tag-line “If you’re just starting out, this is the image we recommend you use”.)

To write the image to the card you’ll need the Image Writer for Windows application. There is no installation required; just download and launch Win32DiskImager.exe. Click the little blue folder button and point out the image file, then press the Write button to start writing. This will take a few moments, depending on the size and speed of your SD card.


Step 3. First launch

After preparing the SD-card it’s time for the first launch.

Insert the SD card into the Pi and plug in an Ethernet cable (with internet access), a USB keyboard and a monitor/TV (HDMI). Finally plug in the micro-USB power adapter to boot it up. A whole lot of text will burst out on the screen. You do not need to read it all. 🙂

After a while a classic ASCII-artish gray popup appears, entitled Raspi-config.

Here you can for instance set your keyboard layout. When you’re done, use the tab key to move to the Finish button, then press enter to access the console.

This popup is shown only on the very first launch. The following times you start the Pi you will need to enter a user name and password to access the console. The default user name is “pi” and the password is “raspberry”.


Step 4. Optional: Setting the keyboard locale

If you didn’t change the keyboard layout in that first popup (which only shows up on the first launch), you can do it using the following command:

sudo nano /etc/default/keyboard

Here ‘sudo’ means ‘run this command as super user’, ‘nano’ is a text editor and ‘/etc/default/keyboard’ is the file to edit.

Now the text editor appears, and you may change the ‘XKBLAYOUT’ to whatever you like. I chose ‘se’ for Sweden.

Save with Ctrl + O (!) and exit with Ctrl + X.

Reboot. This can be done by pulling the power cord or by issuing the command

sudo reboot


Step 5. Set a fixed IP-address

Since you want to connect remotely to your Raspberry Pi, you might want to give it a fixed IP address. This step is of course optional.

If you want to know the current network settings on your Pi, execute the following command:

ifconfig
(No, that is not a typo. It should be an ‘f’, not a ‘p’…) You will see the current IP-address under the section ‘eth0’, on the line starting with ‘inet’. (The MAC address is found after the word ‘HWaddr’.)

Edit the network settings by executing:

sudo nano /etc/network/interfaces

When the editor appears, replace the line

iface eth0 inet dhcp

with these lines (the # mark means that the line is commented out):

#iface eth0 inet dhcp
iface eth0 inet static

Obviously you will need to fill in your preferred numbers here…
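For reference, a typical static configuration in /etc/network/interfaces looks something like this. The addresses below are placeholders of my own choosing; adapt them to your network:

```
#iface eth0 inet dhcp
iface eth0 inet static
address 192.168.1.42
netmask 255.255.255.0
gateway 192.168.1.1
```

The address must be outside your router’s DHCP range (or reserved for the Pi) to avoid collisions.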

After saving the file you’ll need to reboot. The best thing is of course to check your new settings by executing the ifconfig-command again, after rebooting.


Step 6. Ensure the SSH service is started at boot time

This is necessary for connecting remotely. It was however already active on the image I used, but I’ll keep this step here anyway.

The command to execute is

sudo mv /boot/boot_enable_ssh.rc /boot/boot.rc

which means to rename/move the file/directory specified. I got the following error message:

mv: cannot stat `/boot/boot_enable_ssh.rc': No such file or directory

which I interpreted as meaning that the SSH service was already active for me.


Step 7. Connect remotely from your favorite Windows PC

Download PuTTY.

There’s no installation here, just an exe file. Launch it, fill in the IP address of your Pi (and the port, which defaults to 22) and connect with the Open button.

You will get a warning about a potential security breach the first time you connect, because your PuTTY instance (obviously) does not recognize your Pi’s host key yet.

I guess it is quite harmless to press the Yes button here.

Now the Pi console window greets you by asking for your username (pi) and password (raspberry). You’re in!


Step 8. FTP

In addition to executing commands on your Pi you will also need to transfer files to and from it. Use your favorite FTP client with SFTP support, for instance FileZilla. Simply connect to the correct IP address over SFTP (port 22), with user “pi” and password “raspberry”.


Step 9. Install the C# compiler and .net exe-executer

To be able to compile and execute C#-code on the Raspberry Pi, you will need to install Mono.

This turns out to be pretty simple. Just execute the following command to install the dmcs (4.0 mscorlib) C# compiler:

sudo apt-get install mono-dmcs

This will take a while; the requested files will be downloaded from the internet and installed by some magic Linux fairy. Please note that early on you will get a “Do you want to continue?” question. After answering Y + enter you might want to get a quick cup of coffee (or a beer, since this is most probably done off business hours). It will take a couple of minutes.


Step 10. Compile and run your program.

Now, I wanted the full monty and compiled the C# code on the Raspberry itself. So, to keep things as simple as possible, I put all the code into one file (HelloPi.cs):
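A minimal HelloPi.cs might look like this (my sketch, not necessarily the original listing):

```csharp
using System;

class HelloPi
{
    static void Main()
    {
        // The canonical first program: print a greeting to the console.
        Console.WriteLine("Hello Pi!");
    }
}
```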


Using the FTP client, transfer your ‘HelloPi.cs’ source file to a suitable folder. Compile the source file with the dmcs-command:

dmcs HelloPi.cs

Since it’s not a very large application, it will compile pretty much instantly. The compiler produces HelloPi.exe, which you launch with the mono command:

mono HelloPi.exe

Joy to the world, my app has launched!

Important note!

You might as well do the compilation on your PC (which is Much. More. Convenient.) and just transfer the exe-file to the Pi instead.


Some useful links


Good luck!


Announcing the Diversify app for Øredev 2012

A colleague of mine, Fredrik Mörk, noticed the announcement of the Øredev 2012 mobile app contest, and soon a team was formed that happily started hacking away (hosting the code in a Mercurial repository on Bitbucket, and also using Trello to keep some sort of track of who was doing what and what needed to be done). The team consisted of Micael Carlstedt, Niclas Carlstedt, Fredrik Mörk, Markus Wallén, Sebastian Johansson and myself.

The app is meant to function as a companion before and during the conference. Its main functionality is letting you easily navigate and explore the conference program, and build your own personalized schedule by “favoriting” sessions. It also features a Twitter feed, listening for tweets related to the conference and the app. One focus of the design has been to encourage exploration: from almost every view there is a way to move on and get information about related things. For example, when you are looking at a session, you can in one touch navigate to

  • a page highlighting the room in which the session is on a map
  • a page showing all sessions in that room during the conference
  • a page showing details about the speaker (OK, this typically requires two interactions; scroll to the bottom of the page and then tap the speaker)
  • a page showing details about each of the other sessions running at the same time in other rooms
  • a page showing all sessions for a topic that this session also has

This may be the most extreme example, but it’s not unique in its concept.

Another thing the app does is invite you to share information about the conference. You can (again, with very few interactions) share info about a session or a speaker to social networks.

There is an update submitted to the Marketplace which will add a couple of features: tell the app which days you will attend and it will filter the data throughout the app accordingly, and, as a personal touch, the app authors’ suggestions for some sessions we find extra interesting. Go ahead and get the app from the Windows Phone Marketplace.


(Thank you Fredrik, for allowing me to steal your words from your blog post!)


Some screen shots (click for full size): Splash screen, My Øredev, List of sessions, Session details, Speaker details, Twitter feed, Tweet detail and About us.

Rendering the audio captured by a Windows Phone device

This small app captures the audio from the microphone of a Windows Phone device and displays it as a continuous waveform on the screen, using the XNA framework.

A slightly modified version of the app (changing color when touching the screen) can be found in the Windows Phone Marketplace.


In order to capture audio on a Windows Phone device, you need a reference to the default microphone (Microphone.Default), decide how often you want samples using the BufferDuration property, and hook up the BufferReady event. You then control the capturing with the Start() and Stop() methods.

The microphone gives you samples at a fixed rate of 16 000 Hz, i.e. 16 000 samples per second; the SampleRate property will tell you this value. According to the sampling theorem, this means that you won’t be able to capture audio of frequencies higher than 8 000 Hz (without distortion).

You are also limited when it comes to choosing the value of the BufferDuration property; it must be between 0.1 and 1 second (100–1000 ms), in 10 ms steps. That is, you must choose a value of 100, 110, 120, …, 990 or 1000 milliseconds.

When the microphone’s BufferReady event fires, you should call the microphone.GetData(myBuffer) method in order to copy the samples from the microphone’s internal buffer to a buffer that belongs to you. The recorded audio comes in the form of a byte array, but since the samples are actually signed 16-bit integers (i.e. integers in the range −32 768 … 32 767), you will probably need to do some conversion before you can process them.
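The conversion can be done with BitConverter, since the buffer holds the samples as little-endian 16-bit values. A small sketch (the helper name is my own):

```csharp
using System;

class SampleConversion
{
    // Converts the raw byte buffer from Microphone.GetData into
    // signed 16-bit samples (two bytes per sample, little-endian).
    static short[] ToSamples(byte[] buffer)
    {
        short[] samples = new short[buffer.Length / 2];
        for (int i = 0; i < samples.Length; i++)
            samples[i] = BitConverter.ToInt16(buffer, i * 2);
        return samples;
    }
}
```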

How this application works

The application keeps a fixed number of narrow images, here called “(image) slices”, arranged in a linked list. The slices are rendered on the screen and smoothly moved from right to left. When the leftmost slice has gone off the screen, it is moved to the far right (still outside the screen) in order to create the illusion of an unlimited number of images.

Each slice holds the rendered samples from the contents of one microphone buffer. When the buffer is filled by the microphone mechanism, the rightmost slice (outside of the screen) is rendered with the new samples and starts moving in across the screen.

The speed at which the slices move across the screen is tied to the duration of the buffer, in such a way that the slices move a total of one slice width during the time the microphone captures the next buffer.
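The arithmetic behind this is simple; with illustrative numbers of my own (the actual slice width and buffer duration in the app may differ):

```csharp
// A slice must travel exactly its own width while one buffer is captured.
float sliceWidth = 32f;                    // pixels per slice (assumed)
float bufferDuration = 0.1f;               // seconds (BufferDuration = 100 ms)
float speed = sliceWidth / bufferDuration; // pixels per second
```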

Since the buffer of captured audio is rendered as graphics on a texture as soon as it is received, there is no reason to keep any old buffer data. The application therefore keeps only one buffer in memory, which is reused over and over.

A flag is set each time the microphone buffer is ready. Since the BufferReady event is fired on the main thread, there is no need for any locking mechanism.

In the Update()-method of the XNA app, the flag is checked to see whether new data has arrived, and if so, the slice in line is drawn. In the Draw()-method, the slices are rendered on the screen and slightly moved as time goes by.

The complete Visual Studio solution file can be downloaded from here.

Here’s a description of the structure of the main “Game”-class.

Some constants:

Fields regarding the microphone and the captured data:

Choose a color that is almost transparent (the last of the four parameters; it’s the red, green, blue and alpha-component of the color). The reason is that many samples are drawn on top of each other, and keeping each individual sample almost see-through makes an interesting visual effect.

The drawing classes. The white pixel texture is doing all the drawing.

The size of each image slice.

There’s no need to keep a reference to the linked list itself; just the first and last links. These links keep references to their neighbors. The currentImageSlice is the one to draw on next.

The speed of the slices moving across the screen.

In order to know how far the current samples should be moved, the application must keep track of when they appeared.

The signal that tells the Update()-method that there is new data to handle.

The density of samples per pixel.

Here’s the constructor. In it the graphics mode is set and the microphone is wired up and asked to start listening.
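The wiring described here can be sketched roughly as follows. The class and field names are my own assumptions, not the original source; the Microphone API calls are the standard XNA ones:

```csharp
using System;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Audio;

public class WaveformGame : Game
{
    GraphicsDeviceManager graphics;
    Microphone microphone;
    byte[] buffer;
    bool newDataAvailable;

    public WaveformGame()
    {
        graphics = new GraphicsDeviceManager(this);

        // Wire up the microphone: sample every 100 ms (a legal BufferDuration)
        // and allocate a byte buffer of exactly the right size.
        microphone = Microphone.Default;
        microphone.BufferDuration = TimeSpan.FromMilliseconds(100);
        buffer = new byte[microphone.GetSampleSizeInBytes(microphone.BufferDuration)];
        microphone.BufferReady += (s, e) =>
        {
            microphone.GetData(buffer); // copy samples into our own buffer
            newDataAvailable = true;    // tell Update() there is work to do
        };
        microphone.Start();
    }
}
```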

In XNA’s LoadContent nothing is actually loaded, since the app does not depend on any predrawn images. The SpriteBatch is created, the white pixel texture is generated and the image slices are initialized (as black images).

CreateSliceImages calculates how many slices are needed to cover the entire screen (plus two, so there is room for movement). At the end of the method the regular RenderSamples method is called in order to initialize all the images. Since there is no data yet (all samples are zero), it generates black images.

XNA’s UnloadContent just cleans up what LoadContent created.

The event handler for the microphone’s BufferReady event. It copies the data from the microphone buffer and raises the flag that new data has arrived.

XNA’s Update method checks the phone’s Back button to see if it’s time to quit. After that it checks the flag to see if new data has been recorded; if so, the new samples are rendered by calling the RenderSamples method.

XNA’s Draw method takes care of drawing the rendered slices. It handles the two screen orientations, landscape and portrait, by scaling the images accordingly: in landscape mode the height of the images is squeezed, and in portrait mode the width.

When everything is set up, the method iterates through the images and renders them one by one on the screen, adjusted a bit along the X axis to make up for the time that has passed.

RenderSamples takes a RenderTarget2D as an argument, which is the texture to draw on. The routine iterates through the samples and renders them one by one.