When you cannot use Azure IoT Device Provisioning Service

Hi!

You may already know about the Azure IoT Device Provisioning Service (DPS); if not, head to https://docs.microsoft.com/en-us/azure/iot-dps/ for more information.

The idea behind DPS is short and simple: imagine you're the manufacturer of an IoT device and you want to enable your device to "just work" once it's delivered to your customer. The customer unpacks the device, plugs in Ethernet, and the device lights up, starts talking to the cloud and receives its device configuration. So far, so easy, you think: in production, let's just provision an Azure IoT Hub device connection string or an X.509 certificate, and that's all.

So why would you need DPS? Let's say your device has been configured at the factory, but it has been sitting on a shelf for years. And then it gets exported to a country you never thought you would sell devices to, back in the day when you created your service. That's when you decided to use an X.509 certificate and a single IoT hub. But now, a couple of years later, you have ten IoT hubs in different geographies, and the initial certificates you installed in the devices have expired, since you set their validity to three years. This is where DPS comes in. Your device can now go to the global DPS endpoint and ask: "Is there a new configuration for me?" DPS then looks through its database for a matching configuration. If it finds one, it encrypts it in a way that only this particular device can decrypt and sends it back. The device decrypts the configuration information using its built-in hardware security module and finds in it the configuration for an IoT hub. It then connects with the obtained credentials, receives additional configuration information such as OS and application update instructions, and simply works. Your customer is delighted, and so are you, since you now have another device talking to your backend service.

So why wouldn't you do this all the time? Well, there is one important prerequisite for using DPS: you need to add a device-individual public/private key pair to the device at production time, and you need to record the public key in a secure way at that time (or, to be precise, install a certificate used for group registration). Doing so isn't very hard (e.g., if your devices have a built-in TPM 2.0, you can just use the TPM's built-in endorsement key, the "EK"), but the HSM adds BOM cost, and reading out the information adds time to the manufacturing process.

Now imagine you have a very simple device such as the teXXmo IoT Button (http://www.iot-button.eu/), which does not have an HSM but needs an Azure IoT Hub device connection string. You could go back to the initial approach and provision every device with an individual connection string at production time. But maybe you don't want to give your manufacturer full access to your production IoT hub for provisioning devices, yet you still need an automated way to generate these connection strings.

This is where my simple Quick Device Registration Service sample comes in handy. (Or did somebody say Quick & Dirty Registration Service?) Instead of handing over the keys to the castle (i.e., the IoT Hub owner connection string), you install this service as an Azure Function and give your manufacturer the access codes for the service, so they can generate device-individual IoT Hub connection strings while producing devices. The sample client included in the solution calls the service with a given device serial number and gets back the IoT Hub connection string for this device. The service also checks that no serial number is used twice and that each serial number is valid. (In the sample, it just checks whether a serial number is divisible by 7, but you can test in whatever way you can imagine, e.g., CRCs, min/max ranges, etc.) Once the manufacturer has obtained the device connection string, they can write it into the device. Once the device is connected to the Internet, it has all the information necessary to talk to your Azure IoT Hub.
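The heart of such a service is small. Here is a minimal Python sketch of the idea; the hub hostname, the device ID scheme and the key handling are made up for illustration (the actual sample is an Azure Function and creates the device in the IoT Hub registry, which is omitted here):

```python
import base64
import os

IOTHUB_HOSTNAME = "myhub.azure-devices.net"  # hypothetical hub name


def serial_is_valid(serial: int) -> bool:
    # The sample's toy check: accept only serial numbers divisible by 7.
    # Replace with CRCs, min/max ranges, allow-lists, etc.
    return serial % 7 == 0


def build_connection_string(serial: int, device_key: str) -> str:
    # Standard IoT Hub device connection string layout.
    return (f"HostName={IOTHUB_HOSTNAME};"
            f"DeviceId=device-{serial};"
            f"SharedAccessKey={device_key}")


# The real service would also create the device in the IoT Hub registry
# and record the serial so it cannot be used twice; here we just fake a key.
key = base64.b64encode(os.urandom(32)).decode()
print(build_connection_string(14, key) if serial_is_valid(14) else "rejected")
```

The serial check and the device ID scheme are of course placeholders for whatever your production line actually uses.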

But there is another use for this. Imagine you want to use IoT Hub in a software-only product, let's say a digital signage solution that is "just" an application an end user can install on an existing PC. Or it's a driver package that comes with your device, and the device is a PC peripheral that does not talk to the cloud directly. In both cases, no factory provisioning is possible. And while there may be a device-individual identity (e.g., a device serial number), you may not have any place to store (let alone secure) a per-device secret. But you still want to use IoT Hub, since it's such a neat way to get information from your devices, send information from the cloud to individual devices, and manage your devices using the device twin.

So you add code to your application or driver stack that calls the registration service during installation. The code reads out the serial number (or even asks the user to enter it?) and then calls the service to obtain the device-individual connection string. It stores the string locally (say, in a configuration file on disk or in the Windows registry) and then uses it to connect to an IoT Hub.
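A minimal sketch of what that installer-side code could look like in Python; the service URL, the query parameters and the plain-text response are all assumptions for illustration, and the real endpoint would be whatever you deployed as your Azure Function:

```python
import configparser
import urllib.request

# Hypothetical endpoint and access code of the registration service.
SERVICE_URL = "https://example.azurewebsites.net/api/register"
SERVICE_CODE = "not-a-real-code"


def fetch_connection_string(serial: int) -> str:
    # Ask the registration service for this device's connection string.
    url = f"{SERVICE_URL}?code={SERVICE_CODE}&serial={serial}"
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode().strip()


def store_connection_string(path: str, conn_str: str) -> None:
    # Persist the string locally, e.g. in an INI-style config file.
    cfg = configparser.ConfigParser()
    cfg["iothub"] = {"connection_string": conn_str}
    with open(path, "w") as f:
        cfg.write(f)


def load_connection_string(path: str) -> str:
    # Read it back when the application starts up.
    cfg = configparser.ConfigParser()
    cfg.read(path)
    return cfg["iothub"]["connection_string"]
```

On Windows, the registry would be the more natural home than a config file; the storage functions above are just the simplest portable variant.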

But here a word of warning is required: if you implement such a solution, the credentials used to call your registration service need to be in your client application or driver stack. And if that information is there, it can be found using reverse engineering. Still, this is better than leaving the IoT Hub owner credentials in the client application or, even worse, leaving behind a signing key that could create valid group registration certificates for DPS. Nevertheless, you should implement some protection against reverse engineering and monitor your registration service carefully to identify potential attackers who may have found the credentials and try to compromise your service.

If you need a secure solution, use DPS (and a hardware security module!). If you can't, have a look at https://github.com/holgerkenn/qdrs and see if it fits your needs.

Hope this helps,

H.

Posted in IoT Security

Mioty at DPK

Hi!

A couple of weeks ago, we had the Microsoft German partner conference (DPK) in Leipzig, where our IoT partners were showing their solutions. And one of the partners presenting their technology was Mioty.

So what is this? (Roll video.)

Mioty is an LPWAN technology that addresses shortcomings we have seen with the existing technologies in this area. With Mioty, we have found something that works for scenarios where we need long range and lots of sensors, and for places where it will take a very long time for traditional telco infrastructure to build up coverage.

Like many other LPWAN technologies, Mioty operates in the license-free bands; depending on the geographic region it is deployed in, it will use 868 or 915 MHz. The good thing about these bands is that the penetration into buildings is much better than in the higher 2.4 GHz bands. We have seen examples where Mioty has been used in mines and worked in places where even conventional two-way radio would not work reliably. So that's pretty good. It's not intended to send around gigabytes of data, but it can process more than one million sensor messages per day over its full range of around 15 km in free space and 5 km in city areas.

The people behind Mioty are from the same research institute that brought you things like the digital radio standard DAB and, as a little side project thereof, "Adaptive Spectral Perceptual Entropy Coding", an audio codec technology better known under its standardization name "MPEG-1 Audio Layer III" or by the file extension used for encoded audio files: "mp3". Mioty has been developed by the Fraunhofer Institute for Integrated Circuits (Fraunhofer IIS).

So about one year ago, I met one of the people behind this, Albert Behr from Behr Technologies, at a Fraunhofer event here in Munich. One of the event organizers grabbed me by the arm and almost pulled me across the room, saying "I have someone you have to meet." We met, we talked for a while, and I thought: "This is a really interesting technology, but I'd like to see it working first." Having spent my time in academia, I was well aware of the different goals people in academia and industry have, knowing the "publish or perish" situation all too well and understanding that moving from paper to product is often not rewarded in academia. But Fraunhofer is a group of applied research institutes with a track record of moving research results into products and standards, and IIS in particular being behind this added credibility to his claim, so we decided to keep talking and see where it would lead us. He introduced me to the leads of the Fraunhofer working groups behind the technology, Günter Rohmer and Michael Schlicht, as well as Wolfgang Thieme, who is driving the business development. We received a test kit, played around with it and worked with the Mioty development team to hook it up to Azure IoT. We have helped them build a solution that allows Mioty to gather data from all the sensors, send the data up to the cloud and make it available to any Azure cloud service, either as a near real-time data stream or stored in a database.

Albert and Wolfgang have since approached more than 20 early adopters from various application areas with this technology and have received very positive feedback. And for me, it has been great to work with the Mioty team to push the limits of LPWAN and to enable a whole new set of customers and applications to link their sensors to Azure IoT.

I can’t wait to see where the Mioty team will take this.

H.

Posted in IoT Gateways

Hilscher NetPi: Raspberry Pi 3 for industrial automation (Part 1)

Hi!

Over the last weeks, I’ve been working with a nice little device that is very useful for prototyping professional IoT.

One of the challenges in this area is talking to field bus systems. There are many field busses out there, and each has its own reason for existing, either coming from a specific group of hardware vendors (e.g., Profibus and Profinet from the Siemens PLC ecosystem) or being adopted in specific application domains (CAN in vehicles). But luckily, there are companies that have implemented hardware and software to talk to many of these busses, and Hilscher is one of those partners. They have implemented their own silicon in the form of their NetX chip, which is able to speak many of these protocols.

Now enter the NetPI. Hilscher has built an industrial gateway that combines the standard Raspberry Pi 3 Broadcom SoC with a NetX chip. In addition, they took some ideas from the RPi 3 Compute Module and added an eMMC to replace the ever-failing SD cards, plus a standard 24 V industrial-grade power supply circuit.

On the software side, the NetPI runs a hardened Linux, Docker and a web-based UI. Via this UI, one can run Docker containers. See here for a list of images provided by Hilscher.

Getting the device up and running is rather straightforward:

Attach Ethernet and 24 V (my device is drawing 150 mA, so about 3.6 W). The NetPI will do DHCP on its Ethernet port and acquire an IP address. It will also register a hostname, which is simply "NT" plus its Ethernet MAC address; all of this is printed on the side of the device. Now you can access your device via http://NTxxxxxxxxxxxx or via its IP address http://x.x.x.x, which you can find in your router. It will redirect you to https://, and then you will see a certificate warning. Ignore the warning and connect anyway. (In Edge on Windows 10, you need to click on "Details", then "Go on to the webpage". You can upload a certificate to the device to make this error go away.) Then you will see the main portal.

Here, you can configure the device via the "control panel" or manage Docker. The initial login information is printed on the side as well. (No big secret here: username "admin", password "admin"; the device will force you to change the password immediately.) To make things work, you should first check in the control panel that the clock is set correctly; otherwise you will get a number of strange errors whose root cause is that the NetPI does not accept any TLS certificates because it thinks they are outside their validity period. So click on the control panel (accept the certificate warnings again), log in and head to System/Time. Now add an NTP server of your preference (I'm using ptbtime1.ptb.de, which is the official master clock in Germany) and press "save changes". The clock should now update, and under "status" the display should read "Synchronized to time server …". If you skip this step, there is a high chance that you won't be able to run any Docker images, since the TLS-based download of the images will fail!

Now click on the Services/Service List in the menu, then select Docker. Start Docker and set the Docker service to autostart, then click “Apply”.

Now head back to the main portal, and this time click on Docker. The quickest way I found is simply to edit the URL and remove everything after the host name. If you have successfully started Docker, clicking on the item takes you to the Docker management interface; if not, head back to the Services menu and check whether Docker is running (the icon next to Docker should be green). You should now see the portainer.io Docker management portal. To check that the Internet connection is working, go to the "Images" section, enter "hilschernetpi/netpi-raspbian:latest" in the name field under "Pull image" and click "Pull". The image should then show up in the image list below. Other useful images to pull are "hilschernetpi/netpi-nodered-fieldbus:latest" and "hilschernetpi/netpi-netx-programming-examples:latest".

To get started, run the Node-RED fieldbus container on your device; the instructions are here. If you want to write code that interacts with the fieldbus directly, look here. This environment can also be used to run the Azure IoT SDK on the NetPI. I will write up more instructions on this in my next post.

Hope this helps,
H.

Posted in IoT Gateways

Just returning from ISWC2017

Hi!

I'm on my way back from ISWC 2017 (the International Symposium on Wearable Computing), and it's been great! This year marks ISWC's 20th anniversary. The web page of the first conference can now only be found on the Wayback Machine: ISWC '97.

This year, I was organizing the industry session at ISWC and Ubicomp, and we had great talks from people in industry who don't get to talk to the academic world that often. Including myself…

I’m planning to have their presentations up in a few days, either here or on a separate website.

For information on the conferences, go to the Ubicomp and ISWC websites.

H.

Posted in Science

Raspberry Pi for sailboats

A friend of mine recently bought a sailboat. Now, before you think that this is going to be a bragging post with loads of pictures of people sipping champagne, sorry to disappoint you.

The boat is already about 10 years old and needs a refresh of most of its electronics equipment. And of course you would not want to trust your life to a maker project, so this is NOT about a do-it-yourself job replacing professional marine equipment with toy hardware. But there are a couple of things where a Pi can help. So I dug up an old Pi B (v1, with the ARM11 core) from my basement, attached a 12 V USB charger, and we had a Pi on a sailboat. The 12 V power supply on the boat is quite stable, so I did not add any additional stabilization, UPS or automatic shutdown for the Pi. It also does not consume a lot of power, so we just left it running. Since the old Pi also has an FBAS (composite video) output, we also connected it to the TV on the boat.

To access the Pi when the TV is off, we added an Ethernet cable that I plugged into my PC. I also added a USB Wi-Fi stick with an external antenna connector.

To hook the Pi up to the Internet, you can either use a (Wi-Fi?) LTE router or just log your Pi into one of the open Wi-Fi networks that many marinas have. This is where the external antenna connector of the USB Wi-Fi stick is very handy. The Internet connection is useful for downloading software and, later, for uploading images from the weather camera.

Now the hacker could be happy, but what about the sailor? So we started adding some maritime stuff.

Like most modern sailboats, this one has an NMEA 0183 bus (https://en.wikipedia.org/wiki/NMEA_0183) connecting most of the navigation equipment. That's actually a serial port, but one using the differential signaling of RS-422 and RS-485, so one could use the Pi's serial port and a 75176 (or its equivalent, the MAX485) to have the Pi listen to the NMEA bus. We haven't done this yet; that's something for the winter months to come.

But one can also use the Pi as a poor man's AIS receiver. AIS (https://en.wikipedia.org/wiki/Automatic_identification_system) is a transponder system that almost all commercial ships and many yachts carry. In its simplest form, it broadcasts the identification and GPS position of the ship to all surrounding vessels. For this, it uses two fixed VHF frequencies and a simple modulation scheme. And using a simple SDR receiver, the Pi is able to receive and decode these signals.

There is a tutorial on how to do this at http://www.rtl-sdr.com/rtl-sdr-tutorial-cheap-ais-ship-tracking/, but basically, one needs a simple Realtek RTL2832U DVB-T receiver USB dongle (I used a NooElec NESDR Mini 2+ with TCXO, http://www.nooelec.com/store/nesdr-mini-2-plus.html) and the AISDeco2 software from http://xdeco.org/. After plugging in the USB dongle, the current Raspbian loads the standard DVB-T kernel module. To use the dongle for SDR, unload this module again via "rmmod dvb_usb_rtl28xxu". Then run the receiver software:

sudo ./aisdeco --gain 33.8 --freq 161975000 --freq 162025000 --net 30007

(If you have changed the access rights according to the tutorial, you don't need the sudo. Depending on your antenna, you may need a different gain setting. And if you use a receiver without a TCXO, you may need to calibrate your receiver with a frequency offset; that's described in the tutorial above.)

Now, after a while, you should see log output like this:

2017-08-25 09:56:12.546  INFO     !AIVDM,1,1,,B,139cAvP0000SAUfNfbm15SfJ2<2@,0*1B 

(And here’s the bonus question: given the AIS info above, where am I writing this blog post?)
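By the way, each of those log lines carries a standard NMEA 0183 sentence, and the two hex digits after the "*" are a checksum: the XOR of every character between the leading "!" and the "*". A small Python sketch to validate received sentences:

```python
from functools import reduce


def nmea_checksum(body: str) -> int:
    # XOR of all characters between the leading '!' (or '$') and the '*'.
    return reduce(lambda c, ch: c ^ ord(ch), body, 0)


def sentence_is_valid(sentence: str) -> bool:
    # Split "!<body>*<hex checksum>" and compare checksums.
    if not sentence.startswith(("!", "$")) or "*" not in sentence:
        return False
    body, _, given = sentence[1:].partition("*")
    return nmea_checksum(body) == int(given, 16)


# The sentence from the log output above checks out:
print(sentence_is_valid("!AIVDM,1,1,,B,139cAvP0000SAUfNfbm15SfJ2<2@,0*1B"))
# True
```

Decoding the 6-bit-packed AIS payload itself is more work; aisdeco and OpenCPN already do that for you.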

If everything works well, you can use OpenCPN (https://opencpn.org/) to display the data on a PC connected to the same network as the Pi. Configure a new data source in OpenCPN of type network, protocol TCP, with the address of your Pi (the hostname works too!) and port 30007. After a while, OpenCPN will start receiving the data from your AIS receiver and display the ship positions.

Unfortunately, there is no pre-compiled version of OpenCPN for the Raspberry Pi, but you can compile one yourself; see the instructions here: http://www.agurney.com/raspberry-pi/pi-chart

Another thing you can do with your Pi is run a weather cam on your boat. That's especially handy when you want to check what the weather looks like before you drive to your boat. For this, I wrote a little script that captures the Pi camera image and uploads it to cloud-based storage. Since this depends on the cloud service you are using, I'm only giving the outline here. It's called capture.sh and goes into the pi user's home directory, i.e., /home/pi/capture.sh:

#!/bin/bash

# Capture a full-resolution still from the Pi camera
raspistill -o webcamlarge.jpg

# Scale it down with ImageMagick so the upload stays small
convert -geometry 1024x768 webcamlarge.jpg webcam.jpg

# Push the scaled image to your cloud storage
curl --upload-file webcam.jpg <url to upload file>

The last line, of course, needs to be changed to whatever upload method your cloud service supports.

To trigger this automatically every 5 minutes, one can use cron:

type "crontab -e" to edit your crontab

enter

*/5 * * * * /home/pi/capture.sh

into a new line in the crontab. In crontab lingo, that means "every 5 minutes, every hour, every day, every month and every weekday, run /home/pi/capture.sh".

I will add a post on how to hook up other sensors such as a thermo/hygro/barometer, and how to use the existing NMEA sensors such as the wind gauge and the log. But that's for another time.

Hope this helps,

H.

Posted in Computers, Electronics, Fun, Gadgets, Projects

Pi-Top and Windows 10 IoT Core: A Raspberry Pi Laptop running Windows

Hi!

I’m in the middle of preparing a hands-on lab for an event next month, Microsoft Germany’s Technical Summit in Darmstadt. Here, we will show you how to build, customize, program and connect devices based on Windows 10 IoT. And for this hands-on lab, we decided to bring a couple of Raspberry Pi 3 to play around with.

In order to get the full benefit of Windows 10 IoT Core, including its ability to run full UWP apps, you need a screen, mouse and keyboard. So I was looking for a nice package that includes all this, and I found the Pi-Top. This is essentially a notebook housing kit including power supply logic, touchpad, keyboard and screen, lacking only a Raspberry Pi and a bit of your time to turn it into a nice little notebook computer.

The remaining question was just: Would it run Windows 10 IoT Core? And yes, it does!

[Image: the Pi-Top running Windows 10 IoT Core]

The Pi-Top keyboard and touchpad are connected via USB; they work right out of the box, and so does the built-in screen. The Pi-Top hub (in the picture on the left) powers the display and the Pi and converts the HDMI output of the Pi to the signals needed by the display. It also controls the charging of the built-in battery and the screen brightness, even when it's not connected to the Pi.

When it is connected to the Pi, there's a bit of randomness in the startup process. Occasionally, the Pi-Top hub receives some signals from the Pi, probably during initialization of the SPI ports, that it misinterprets as a screen brightness or power control command. In the worst case, this just cuts the power and the Pi crashes. So right now, the "safe" way of operating is not to plug in the cable connecting the Pi-Top hub to the I/O connector of the Pi.

But if it is connected, you can use the Windows.Devices.Spi API in Windows 10 IoT Core to control the Pi-Top hub, e.g. to control the screen brightness, detect the power button press or the lid closure, or monitor the battery. I'm still working on a sample that I will put on GitHub once it's ready.

H.

Posted in Uncategorized

Troubleshooting Azure IOT Hub connections on embedded Linux

Hi!

I'm in Japan for a few days, working with local partners to get their devices connected to Azure IoT Hub. And I want to share a few lessons learned.

We always started from the Azure IoT Hub SDK on GitHub. And here's the first catch: if you just download the ZIP file from GitHub, you are missing the links to the other dependent projects, and your source tree will lack some files. To avoid running into these problems, clone the project using git and don't forget to add the --recursive option, as described here:

git clone --recursive https://github.com/Azure/azure-iot-sdks.git

In case you get strange compiler errors along the way, such as mismatched function signatures, it might be that your source tree is out of sync. One way to fix this is to run "git submodule init" and "git submodule update" in the right directories, but I often just throw away the whole tree and clone it again.

The first thing you should do is familiarize yourself with the SDK on a normal Linux machine; for this purpose, I just run a Linux VM on Azure. Go through the steps of setting up the development environment and setting up an IoT hub, just for testing. The free tier of Azure IoT Hub is sufficient at this point. Now create a device ID in your IoT hub, e.g., by using the Azure IoT Hub Device Explorer on Windows. Under the management tab, select your created device, then right-click and select "Copy connection string".

Go to the source code of one of the simple examples, e.g., the C AMQP sample client. Insert your connection string into the source code and compile the sample. Now head back to Device Explorer, click on the data tab and start monitoring data from your device. Then run the sample client executable; you should see a few messages arriving. Now, in Device Explorer, switch to the "Message to Device" tab, select your device and enable "Monitor Feedback endpoint". Type something in the message field and hit send. Your sample client should receive the data, and the feedback endpoint monitoring should indicate that the messages have been received.

Great, now let’s move over to the actual device!

Here, there are a couple of things you need to be aware of, the two most important ones are trust and time. Wait? What? Is this some relationship self-help blog? 🙂

The trust issue:

Unfortunately, some embedded devices do not come with the right set of certificate authorities installed. When the Azure IoT SDK client code tries to establish a secure connection, it validates the certificate presented by IoT Hub against the known certificate authorities. If there is no matching CA, the client code stays quiet for a very long time and then fails with various errors. To test for this condition, I often just use the openssl client program and try to establish the connection manually from the device. Most embedded Linux distributions have the openssl executable installed together with the OpenSSL library. An alternative is to run both the sample and "tcpdump -w capture.pcap" at the same time on the device, then download the pcap file and analyze it using Wireshark.

For example, if I want to see whether I can reach the MQTT endpoint of my IoT hub, I run the following command:

openssl s_client -connect <My iothub name>.azure-devices.net:8883

(and of course replace <My iothub name> with the name of your IoT hub)

If this command fails to establish a valid TLS connection with "Verify return code: 20", you have "trust issues". If you see "Verify return code: 0 (ok)", everything is fine. In Wireshark, you would see the TLS negotiation fail with "No CA".

To resolve your trust issue, make sure the right CA certificate is present on the device. Microsoft uses the Baltimore CyberTrust CA for its server certificates, so you should have the file "Baltimore_CyberTrust_Root.pem" somewhere in your file system. But even if it is there, the OpenSSL library may not load it. To find out where it expects the files to be, just run "openssl version -d". You should see something like this:

OPENSSLDIR: “/usr/lib/ssl”

This means that the OpenSSL library will look for the CA certificates first in the file /usr/lib/ssl/cert.pem and then in files in the directory /usr/lib/ssl/certs/.
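If a Python interpreter happens to be available on the device, you can query the same locations from a script, since Python links against the same OpenSSL:

```python
import ssl

# Ask the linked OpenSSL where it looks for trusted CA certificates.
paths = ssl.get_default_verify_paths()
print("CA file:     ", paths.openssl_cafile)
print("CA directory:", paths.openssl_capath)
```

This is handy when the openssl binary itself is missing from the image but a Python runtime is present.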

But it may be that the file is actually there but OpenSSL still fails to establish a secure connection. Then you might have a time issue.

The time issue:

CA certificates have a time span in which they are valid. For instance, the Baltimore CyberTrust root certificate is valid in the following time span:

Not Before: May 12 18:46:00 2000 GMT
Not After : May 12 23:59:00 2025 GMT

You can easily check for yourself by running this command:

openssl x509 -in /usr/lib/ssl/certs/Baltimore_CyberTrust_Root.pem -text

How could this be invalid? Easy: some embedded devices have no battery-backed real-time clock and initialize their clocks with preset dates on boot. And these may be ancient, e.g., the Unix epoch (January 1st, 1970), the GPS epoch (January 6th, 1980) or whatever the manufacturer set. So a good practice is to set the clock to the right date before attempting to connect.

But that might not be enough.

The Azure IoT Hub also uses a time-based token scheme to authenticate its clients; the process is described here. The token includes an expiry time expressed as seconds since the Unix epoch, in UTC. The Azure IoT SDK uses the device connection string to create such a shared access signature (SAS) token. If your clock is off, the token may already be expired when it is created. The tokens are generated with a validity of 3600 seconds, i.e., one hour; if your clock is behind by more than that, IoT Hub will reject the connection.
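To make the clock dependency concrete, here is a Python sketch of how such a SAS token is assembled, following the documented scheme; the hub name, device ID and key below are made up:

```python
import base64
import hashlib
import hmac
import time
import urllib.parse


def generate_sas_token(resource_uri: str, device_key_b64: str,
                       ttl_seconds: int = 3600) -> str:
    # The expiry is seconds since the Unix epoch, in UTC -- this is
    # exactly where a wrong device clock breaks authentication.
    expiry = int(time.time()) + ttl_seconds
    uri = urllib.parse.quote(resource_uri, safe="")
    to_sign = f"{uri}\n{expiry}".encode("utf-8")
    key = base64.b64decode(device_key_b64)
    sig = base64.b64encode(hmac.new(key, to_sign, hashlib.sha256).digest())
    sig_enc = urllib.parse.quote(sig, safe="")
    return f"SharedAccessSignature sr={uri}&sig={sig_enc}&se={expiry}"


# Hypothetical hub and device; the key is just base64 of arbitrary bytes.
token = generate_sas_token("myhub.azure-devices.net/devices/mydevice",
                           base64.b64encode(b"not a real key").decode())
print(token)
```

If you set the machine clock back a few hours and run this, the "se" field lands in the past, which is precisely the token IoT Hub will refuse.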

So the best practice is to run ntpclient or ntpd on your embedded device. Even BusyBox has a simple ntpd implementation, so this should be available on your embedded OS. Alternatives are, of course, GPS, a mobile network, a battery-backed RTC or a radio time receiver (FM RDS, long-wave time signals, etc.) as a time source. But be aware of the startup and initialization times these time sources need (GPS can take several minutes to deliver proper time information) and of the skew RTCs may accumulate over time; RTC batteries might also die after a couple of years. Also make sure that your time zone is set properly: the SDK always calculates in UTC, but if your time zone claims to be UTC while the clock is set to local time, you might be off by a couple of hours.

Which brings me back to the CA certificate validity. Today, 2025 seems far out in the future, but remember that many embedded devices designed today have a lifetime of over ten years, so that CA certificate will expire within the lifetime of these devices. Make sure you have a way to update the CA certificate.

Hope this helps,

H.

Posted in Uncategorized

Asia Tour: June 2016

Hi!

I'm on the way back from visiting partners and customers in Taiwan, South Korea and Japan. We had very interesting meetings with our partners there, who are ready to get "things" connected to the cloud. In this post, I want to elaborate on the questions that were most common and how I answered them.

  • What if my device isn't supported by the Azure IoT SDK? Can you please add support for device XXX, OS YYY and CPU ZZZ?

The Azure IoT SDK on GitHub (https://github.com/Azure/azure-iot-sdks) already supports many device and operating system combinations, but given the large number of possible combinations (including legacy devices that still need to be connected), it cannot cover everything. However, you are not required to use our SDK; it's just there to make things easier for you and to give you a head start.

So what if my device or OS isn't on the compatibility list (https://azure.microsoft.com/en-us/documentation/articles/iot-hub-tested-configurations/)? Maybe the SDK actually works anyway! If you have a Windows device that supports .NET Framework 4.5, this should be sufficient to run the C# versions. If you have a Linux-based OS, the C version should work as long as you have fairly recent GCC and OpenSSL versions. The Java SE and Node.js versions should work on most underlying OS platforms that these runtimes support. So maybe you're actually done already.

But what if there's a feature missing in my underlying platform, e.g., it does not support the TLS 1.2 and SHA-256 needed for token authentication? Technically, SHA-256 is required to generate a shared access signature from the device key you configure in IoT Hub for your device. But nothing keeps you from pre-computing a shared access signature with a long validity somewhere else and installing it on the device. Maybe you could even implement a service that the device connects to occasionally to request a new signature. (I actually have some code for this as part of my Azure IoT Hub proxy I've explained here, but that's for another post.)

As an alternative, you could use an additional TLS library such as OpenSSL or wolfSSL to implement TLS 1.2 and SHA-256; the IoT SDK has the ability to link against these libraries. This works independently of the crypto functions provided by your existing OS.

  • Can I use IoT Hub to manage my devices?

And I usually reply to this with another question: What is it you want to manage?

When you think about device management from an IT perspective, there is a common definition of device management, and there are plenty of solutions that address it. In this area, management means managing OS and application installation and updates, monitoring device usage, and applying policy-based restrictions to the devices under management.

In IoT, it might be all of the above, a subset, or none of the above.

For IoT devices, it is uncommon to re-install an operating system via device management. Instead, devices are often simply replaced when they fail or reach their end of life. Even updates are handled more carefully, and there are still devices out there that never received an OS update in their entire lifetime. I’m not recommending this practice: the era of unconnected devices is essentially over, and anything that’s connected can be attacked in some form, so implementing update mechanisms is more important than ever.

Monitoring devices in IoT is often very application-specific, and it’s often more a stream of events sent by the devices than a common monitoring task such as checking the status of the installed antivirus software.

And although device policies also exist on embedded devices, they hardly change over the lifetime of the device.

So a full-fledged IT device management solution might be too much.

But since IoT Hub provides a cloud-to-device messaging channel, that might be just enough to implement a simple, custom device management solution.
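A custom management scheme over that channel can be as small as a JSON command dispatcher on the device. The sketch below is an illustration, not IoT Hub SDK code: the message format ({"command": ..., "params": ...}) and the handler names are assumptions; on a real device the handlers would kick off an update download, report status, and so on.

```python
import json

# Hypothetical handlers; on a real device these would trigger an update
# download, collect a status report, etc.
def start_update(params):
    return "updating to {}".format(params.get("version"))

def report_status(params):
    return "status: ok"

HANDLERS = {"startUpdate": start_update, "reportStatus": report_status}

def handle_c2d_message(payload: bytes) -> str:
    """Dispatch a cloud-to-device message of the (assumed) form
    {"command": "...", "params": {...}} to a registered handler."""
    msg = json.loads(payload)
    handler = HANDLERS.get(msg.get("command"))
    if handler is None:
        return "unknown command"
    return handler(msg.get("params", {}))

print(handle_c2d_message(b'{"command": "startUpdate", "params": {"version": "1.2"}}'))
```

The device-side receive loop of the IoT SDK would simply pass each incoming message body to `handle_c2d_message` and optionally send the result back as a device-to-cloud message.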

In addition, there is a preview of device management functions in IoT Hub, but that will be another blog post soon.

 

Hope this helps,

H.

 

Posted in Uncategorized

Upgrading my Medion akoya E1232T with an SSD.

For a while now, I’ve been using this little clamshell as my private traveling machine; I’ve dragged it along as far as Japan, and in general it never let me down. Granted, the battery lifetime isn’t great, the shrunk keyboard isn’t for writing a thesis, and the 2-core Bay Trail Celeron isn’t the best-performing mobile CPU. But its tiny housing makes it fit into my A4-sized bag, it has a touch screen, and with its 4 GB of main memory it even runs Visual Studio 2015 Community at a decent speed. It also has an Ethernet port, an HDMI port, an SD card reader and a USB 3.0 host port. And all this without adapters, dongles, port replicators etc. It’s even got 2.4 and 5 GHz Wi-Fi and Miracast.

The main drawback the machine has is its 500GB HGST spinning disk. But this was about to change.

So I found a 128 GB SanDisk SSD (Z400), 2.5″ SATA, at reichelt.de for a reasonable price. Its 7 mm housing is the same size as the internal HGST drive, so I ordered one.

Now SanDisk offers a number of software packages that make your life with the SSD easier; the most important one is their SSD Dashboard. It also contains a link to a single-use version of a hard-disk-to-SSD transfer software. So I downloaded the dashboard, hooked up the SSD via a USB-to-SATA converter and fired up the transfer software to check whether this setup would work. But before making any actual changes, I ran the disk2vhd tool from Sysinternals to capture a full disk image of the internal hard disk to an external drive.

Now in order to do the actual transfer, I removed a lot of things from the old hard drive. I changed the OneDrive config to not keep anything local (down 20 GB), removed all local media files (down another 60 GB), uninstalled some older versions of software (VS2010) and cleaned up my downloads directory. A very helpful tool for this process is WinDirStat, which I just ran in its portable version. (I actually keep it around in the tools directory of my OneDrive.)

After having shrunk down the contents of the C drive, I found that I still had a D drive that the transfer software insisted on moving to the SSD. On the Medion akoya, that’s actually the recovery drive used to reset the machine to the initial state it came in, which was Windows 8. I never planned to go back to that, so I decided to remove the partition to save some precious SSD space. Note that if you do this, the recovery function of the notebook that’s triggered by holding down F11 during boot will no longer work. But I decided that I’ll be fine with the built-in recovery mechanisms of Windows 10. That’s up to everyone to decide for themselves.

So I then fired up the transfer software, and a few hours later I had an SSD with the contents of my hard drive. I disconnected the SSD in its USB-SATA housing and then shut down the machine.

The disassembly process was actually very smooth and simple: essentially removing the screws on the bottom and then using my trusty iFixit spudger to carefully pry open the plastic housing. After that, it was just two more screws to remove the hard disk frame, carefully pulling out the SATA cable and then a few more screws to take the hard disk out of the frame. I then mounted the SSD into the frame, fastened the screws, put it into its place in the housing, carefully attached the SATA cable, fastened the screws of the frame, then put on the plastic cover and tightened all the remaining screws. Needless to say, I did all this with the machine shut down and the power supply disconnected, paying attention not to damage the LiPo battery, since these can get rather nasty when punctured in the wrong spot. (Make that: in any spot!)

Booting up the machine initially got me a boot failure (probably because Windows 10 doesn’t actually shut down on “shutdown” but hibernates, and the “shrinking process” left the SSD with a stale hibernation image that Windows correctly refused to restore), but the second boot was all right, and from then on everything worked as it should.

Or almost…

Working with the machine for a few hours made me notice a strange behavior. A couple of times every hour, the machine would “freeze” for a few seconds and do almost nothing. But the mouse cursor was still working (so no blocked interrupts), and even the GUI of some apps was still responsive, while other apps just froze. Windows would sometimes even gray them out and show “not responding” dialogs.

And then I noticed that during this time, the HDD LED was fully on. Not the usual flicker when the disk was working, but a steady light. So I fired up the Task Manager and looked for processes with unusual activity. There were a few random processes that seemed to be “stuck” on I/O, but no clear pattern. So I switched over to the “Performance” tab and took a look at the disk I/O graph. Whenever the system appeared “frozen”, the disk activity graph would be stuck at 100% busy while the throughput graph showed zero throughput. After a couple of seconds (10 to 40), the activity would drop and the throughput would go up as if nothing had happened.

After watching this for a couple of days (and even seeing one or two bugchecks (AKA blue screens) during disk activity), I decided to involve SanDisk’s support.

After a couple of obvious starter questions (Have you tried using a different SATA connection? No, I only have one in my notebook. Are the BIOS, OS and drive firmware up to date? Yes, I checked in your SanDisk Dashboard!), SanDisk recommended that I format the hard disk and reinstall everything. So I actually did what they asked, on the theory that maybe, with the initial Windows 8 install, the Insider updates and then the final Windows 10 bits, something was “stuck” in the installed driver versions.

After re-installing Windows 10 (which amazingly worked without any major trouble: Windows recognized the already-activated Windows 10 license I got by upgrading the machine from Windows 8, and I did not even have to install a single driver by hand since they now all seem to be available through Windows Update!), I started checking for the presence of the bug. And yes, it was still there, on my cleanly installed machine. Here’s a screenshot of how this looks in the Task Manager: disk 100% busy, no data transfer. In this case for about 45 seconds.

[Screenshot “Neu installiert” (freshly installed): Task Manager after the clean install, still showing 100% disk activity with zero throughput]

So I started looking at the documentation of the Z400 drive on the SanDisk website. To be precise, it’s a SanDisk SD8SBAT128G1122Z 128 GB drive.

Turns out, it’s actually not a consumer drive; it’s mostly meant for embedded OEM systems like point-of-sale terminals (aka cash registers). And then I dug some more and found a standalone firmware updater for the drive called “ssdupdater-z2201000-or-z2320000-to-z2333000”. Wait! Didn’t the SanDisk Dashboard just tell me that there was no firmware update? But the same dashboard told me that the firmware revision of my drive was Z2320000. OK, maybe the SSD Dashboard does not know about these embedded drives and only knows about consumer drive firmware updates. So I downloaded and ran the standalone firmware updater, and voilà: the bug disappeared, no further bluescreens, and the machine feels about five times faster than before.

So, my lessons learned for today: don’t trust support too much, especially when going through consumer/end-user channels. You might have hardware they don’t even know about. And don’t trust their tools. You might get wrong answers.

To be fair, the SanDisk support was really quick to answer for a consumer query that came to them via a web form. The answers were professional and to the point without any useless chitchat, but if the answer isn’t available to them, they simply can’t help. So it would be great if SanDisk could either enhance their SSD Dashboard tool to give correct answers or enhance their support database so that this bug can be found. Because I’m pretty sure it is documented somewhere in the bug list of firmware Z2320000 or the release notes of firmware Z2333000.

Hope this helps,

H.

Posted in Computers

AzureIotHubProxy

Today I uploaded a project to GitHub that I wrote over the last few weeks to simplify things with the Azure IoT hub for demos, makers etc.

If you haven’t heard about Azure IoT hub: it is a very nice service you can use to hook up your IoT devices to a central service through which you can receive data, send commands and, in general, manage your devices.

https://azure.microsoft.com/en-us/documentation/services/iot-hub/ is the official starting point for the documentation, but basically, the Azure IoT hub has a device and a service API. Through the device API, you can send messages to the cloud and receive messages from the cloud. The cool thing about this is that the device side only makes outbound connections (e.g. this works through firewalls, through NAT devices such as DSL routers and even through IP connections provided by mobile phone providers). Read this again: the back channel to your device works through the mobile phone network!

And the best thing: this service includes a free tier that allows you to register 500 devices and send 8,000 messages of 0.5 KB per day. See https://azure.microsoft.com/en-us/pricing/details/iot-hub/ for details.

But in order to get to all this goodness, you need to manage the IoT hub via its service API. You can do that through the Device Explorer tool (see https://github.com/Azure/azure-iot-sdks/tree/master/tools/DeviceExplorer ), but that’s a manual process that involves creating devices on the hub and then copying the device connection strings manually into the device configuration. Or you can deal with the management API directly, which is a bit tricky to use and would require you to keep the management keys wherever you want to manage from.

Wouldn’t it be nice if the devices could actually manage themselves?

So I wrote a little API proxy service that the device can query to get a connection string. The service implements just four calls.

GET /api/Device just returns the list of devices configured, in JSON form

GET /api/Device/(id) returns the JSON for just this device

POST /api/Device/(id) creates a new device in the IoT hub and returns a JSON that includes a connection string

DELETE /api/Device/(id) deletes the device in the IoT hub.

In order to secure these, they all require an API key sent in the query string.
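From the device’s point of view, talking to the proxy is just plain HTTPS plus JSON. Here’s a small Python sketch of how a device-side client might build the four request URLs and parse the registration response; the deployment name, the query-string parameter name (apikey) and the response field names are assumptions for illustration, not part of the published project.

```python
import json
import urllib.parse

BASE_URL = "https://myproxy.azurewebsites.net"  # hypothetical deployment name
API_KEY = "1234"                                # the sample's default key

def device_url(device_id: str = None) -> str:
    """Build the proxy URL for one of the four calls, with the API key
    appended in the query string as the service expects."""
    path = "/api/Device" + ("/" + urllib.parse.quote(device_id) if device_id else "")
    return "{}{}?apikey={}".format(BASE_URL, path, API_KEY)

# GET  device_url()           -> list all devices
# GET  device_url("1234567")  -> one device
# POST device_url("1234567")  -> create device, returns connection string
# DELETE device_url("1234567")-> delete device

# Parsing a (hypothetical) registration response might look like this:
sample_response = '{"deviceId": "1234567", "connectionString": "HostName=..."}'
info = json.loads(sample_response)
print(device_url("1234567"))
print(info["connectionString"])
```

The device would issue the POST once at first boot, store the returned connection string, and from then on talk to IoT Hub directly.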

The implementation I made is really simple and not very secure. But it can be used as a starting point for more complex authentication schemes; e.g. one could implement a one-time token mechanism that allows only a single device registration per token.
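The one-time token idea is simple enough to sketch: the service issues random tokens, and each token is removed from the store the first time it is redeemed, so a replayed token is rejected. This is a minimal in-memory illustration (a real service would persist the tokens), not code from the project.

```python
import secrets

class OneTimeTokenStore:
    """Each token authorizes exactly one device registration; it is
    removed from the store the first time it is redeemed."""

    def __init__(self):
        self._tokens = set()

    def issue(self) -> str:
        # Issue a fresh, unguessable registration token.
        token = secrets.token_urlsafe(16)
        self._tokens.add(token)
        return token

    def redeem(self, token: str) -> bool:
        try:
            self._tokens.remove(token)
            return True   # first use: allow the registration
        except KeyError:
            return False  # unknown or already used: reject

store = OneTimeTokenStore()
t = store.issue()
print(store.redeem(t))   # True  - first registration succeeds
print(store.redeem(t))   # False - replaying the same token fails
```

You would hand one token to each device at production time (or to each customer) instead of sharing a single API key across all devices.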

To try out the implementation, I added a Swagger interface, so if you go to /swagger/ you can play around with the API yourself. You should disable that for production use.

The service can easily be run in an Azure Web App, and again, there is a free tier that is sufficient to run this service; see https://azure.microsoft.com/en-us/pricing/details/app-service/ for details. Azure App Service also supports SSL, which you should use in order to protect your API key. (SSL is not supported for custom domains in the free tier, so your website will end in “azurewebsites.net”.)

To get started, clone the project from GitHub https://github.com/holgerkenn/AzureIotHubProxy and then go to https://azure.microsoft.com/free/ to start a free trial on Azure in case you don’t have a subscription yet. Through this link, you will also get some free credit to use the paid Azure services for a limited time, but since everything presented here also works on the free tiers of the services, you can actually run all this even after the free trial credits expire.

Go to https://azure.microsoft.com/en-us/develop/iot/ to see how to create your first IoT hub, then get its connection string from the Azure portal and add it to the web.config file in the repository. Then create a web app on Azure as explained at https://azure.microsoft.com/en-us/documentation/services/app-service/web/ and publish the service to this app. In Visual Studio, this is as simple as right-clicking the project, selecting Publish and then “Microsoft Azure App Service”. This will guide you to select or create an Azure web app for your service. After publishing, your service should be up and running. And since the Swagger API is enabled, you will find the trial API at “https://<yourservicename>.azurewebsites.net/swagger/”.

Then you can go and compile the test client. Enter the name of your web app in program.cs. When you run it, it will connect to the service, create a device named “1234567” and send a few messages to the IoT hub. If you have Device Explorer connected, you can receive those messages and send a few back.

And now you should probably change that default API key (“1234”) and republish.

Hope this helps,

H.

Posted in Uncategorized