My MSDN Blog posts are now here as well

While waiting for my work machine to install the newest Insider build of Windows 10, I decided to polish the old blog a bit: I added a plugin that pulls my posts from the MSDN blog into this one. From now on, they will automatically show up in the Microsoft category.

And then I pushed a few things to my GitHub repository as well.


Posted in Microsoft | Leave a comment

Goodbye Facebook

So finally, after being on Facebook for, well, as long as Facebook has existed, I deactivated my account today. I still remember when you actually needed an .edu or otherwise academic e-mail address to register; that must have been in April or May 2004, when I was still at Jacobs International University Bremen.

So after about 10 years, it's been a great time, but since I haven't looked at anything there since last year and did not miss it much, I decided to get rid of the dormant account.

In addition, I realized that anybody who wants to contact me can find me via any search engine. And as far as my professional life is concerned, LinkedIn, Xing, and Twitter seem more useful.


Posted in Technology | Leave a comment

My azure scripts on github



I've decided to put my Azure scripts on GitHub; that keeps them in one place, and I can update them whenever I find bugs.

I have more scripts in the queue, but I first need to remove credentials, hostnames, etc. before I put them on GitHub.

Hope it helps,


Source: msdn

Posted in Microsoft | Leave a comment

Linux and Azure Files: you might need some help here…



tl;dr: To mount Azure Files from Linux, you need cifs support in the kernel, the right mount helper, and versions recent enough to support the SMB2 protocol.

I just got a ping from a customer who had trouble mounting an Azure Files file system from Linux. According to the Azure team blog, this should work.

So I tried it myself on an Ubuntu 14.04 LTS VM and found the following:

If I used smbclient, everything worked:

kenn@cubefileclient:~$ smbclient -d 3 // <storage key goes here> -U cubefiles -m SMB2
[lots of debug output deleted here]
Connecting to at port 445
Doing spnego session setup (blob length=0)
server didn’t supply a full spnego negprot
Got challenge flags:
Got NTLMSSP neg_flags=0x628a8015
NTLMSSP: Set final flags:
Got NTLMSSP neg_flags=0x60088215
NTLMSSP Sign/Seal – Initialising with flags:
Got NTLMSSP neg_flags=0x60088215
Domain=[X] OS=[] Server=[]
smb: > dir

  .                                       D        0  Mon Sep  8 14:49:55 2014
  ..                                      D        0  Mon Sep  8 14:49:55 2014
  testdir                             D        0  Mon Sep  8 14:47:08 2014
                83886080 blocks of size 65536. 83886080 blocks available
Total bytes listed: 0
smb: > quit

Don't be alarmed by all those scary-looking messages; I'm running smbclient with -d 3, so there is a lot of debug output.

Now I tried to mount the filesystem:

kenn@cubefileclient:~$ sudo bash
root@cubefileclient:~# mount -t cifs \ /mountpoint -o vers=2.1,username=cubefiles,password=<storage key goes here>,dir_mode=0777,file_mode=0777
mount: wrong fs type, bad option, bad superblock on,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)
       In some cases useful info is found in syslog – try
       dmesg | tail  or so

OK, this did not work.

So let’s check if the cifs filesystem is actually in the kernel.

root@cubefileclient:~# grep cifs /proc/filesystems
nodev   cifs

Yes, looks good.

So is there a mount helper for cifs?

root@cubefileclient:~# ls -la /sbin/mount.cifs
ls: cannot access /sbin/mount.cifs: No such file or directory

That's it! We're missing the mount helper!

root@cubefileclient:~# apt-get install cifs-utils

root@cubefileclient:~# mount -t cifs \ /mountpoint -o vers=2.1,username=cubefiles,password=<storage key goes here>,dir_mode=0777,file_mode=0777

root@cubefileclient:~# mount
\ on /mountpoint type cifs (rw)

root@cubefileclient:~# ls /mountpoint/

So this is great, and I thought this was the bug our customer was hitting. But I was wrong: even after installing the mount helper, nothing worked for him. Even smbclient did not work.

So I recreated his setup (based on SUSE Linux Enterprise 11) and saw the following:

cubefileclient2:~ # smbclient -d 3 // <storage key goes here> -U cubefiles -m SMB2
[lots of debug output deleted here…]
protocol negotiation failed: NT_STATUS_PIPE_BROKEN

And also the mount failed.

So I decided to look at what's going on on the wire. I opened a second ssh session to the VM and ran tcpdump in that terminal while attempting to connect to Azure Files in the first. (tcpdump -s 65535 -w tcpdump.pcap port 445, to be precise.)

Since the output of tcpdump wasn't too enlightening, I loaded the capture into Microsoft Network Monitor and looked at the packets there. (To load capture files from tcpdump, make sure they have the extension .pcap.) And then it was quite obvious:

In Ubuntu 14.04 LTS:


In Suse Enterprise 11:


The SMB2 protocol was missing. So I started looking at the version numbers of smbclient, the cifs mount helper and the kernel.


cubefileclient2:~ # smbclient -V
Version 3.6.3-0.54.2-3282-SUSE-CODE11-x86_64
cubefileclient2:~ # uname -a
Linux cubefileclient2 3.0.101-0.35-default #1 SMP Wed Jul 9 11:43:04 UTC 2014 (c36987d) x86_64 x86_64 x86_64 GNU/Linux
cubefileclient2:~ # mount.cifs -V
mount.cifs version: 5.1

root@cubefileclient:~# smbclient -V
Version 4.1.6-Ubuntu
root@cubefileclient:~# uname -a
Linux cubefileclient 3.13.0-32-generic #57-Ubuntu SMP Tue Jul 15 03:51:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
root@cubefileclient:~# mount.cifs -V
mount.cifs version: 6.0

So here's the solution: the SUSE Linux Enterprise 11 images contain a cifs implementation, both in the kernel and in smbclient, that does not yet implement the SMB2 protocol. And Azure Files requires SMB2; otherwise, the protocol negotiation fails.
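Given that diagnosis, a small pre-flight script can save the round trip through tcpdump. This is my own sketch, not from the original post; in particular, the 3.7 kernel threshold is an assumption of roughly when SMB2 support appeared in the cifs module, so treat it as a heuristic:

```shell
#!/bin/sh
# Pre-flight check before mounting Azure Files via cifs (heuristic sketch).
MIN_KERNEL=3.7   # assumption: cifs SMB2 support appeared around this version

kver=$(uname -r | cut -d- -f1)
# sort -V sorts version strings; if MIN_KERNEL sorts first, kver is new enough
lowest=$(printf '%s\n%s\n' "$MIN_KERNEL" "$kver" | sort -V | head -1)
if [ "$lowest" = "$MIN_KERNEL" ]; then
    echo "kernel $kver: SMB2 mounts (vers=2.1) should work"
else
    echo "kernel $kver: likely too old for vers=2.1"
fi

grep -q cifs /proc/filesystems && echo "cifs support: present" \
                               || echo "cifs support: missing"
command -v mount.cifs >/dev/null && mount.cifs -V \
                                 || echo "mount helper: missing (install cifs-utils)"
```

On the Ubuntu 14.04 box above this reports all three checks as fine; on the SLES 11 box the kernel check fails, which matches what the packet capture showed.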

One closing remark: please check the date when this was posted; software versions change all the time, and what is described here may no longer be accurate when you read it. I'm not posting this to point at specific bugs or to promote one distribution over another. It's simply a fact of life that no one can support everything in every single version of an OS or service. This post is intended to give you ideas of what to look for and some tools to debug low-level system behavior. Of course, one could have checked the version numbers first, or looked for protocol-negotiation mismatches in the debug output. But when I have no clue what to look for, I find it helpful to start at the lowest level and work my way up until I find something.

Hope this helps,

Source: msdn

Posted in Microsoft | Leave a comment

Attacks from Mars! Azure ILB and Linux


tl;dr: The Azure ILB and the Linux IP-spoofing protection together prevent a connection from a machine to itself via the ILB.

A few days ago, I talked to a customer who had quite some trouble using the Azure Internal Load Balancer with his Linux VMs.

From his tests, he concluded that ILB “is broken”, “is buggy” and “is unstable”. What he observed is the following:

– He created two Linux VMs in a virtual network on Azure, Machine A and Machine B, each with its own internal IP address. Then he set up an internal load balancer for HTTP, forwarding its input address to both machines, with source, destination, and probe ports all set to 80.

Then he opened an SSH connection to Machine A and observed the following behavior:

root@ILB-a:~# curl

root@ILB-a:~# curl
<body> B </body> </html>

So it seemed that only every second connection worked. Or, to be more precise: whenever the ILB forwarded the connection to the very machine he was working on, the connection failed.

So I recreated this setup and tried it myself, this time looking at the tcpdump output for the case where the connection did not work:

root@ILB-a:~# tcpdump -A  port 80
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
10:32:05.210953 IP > Flags [S], seq 265927267, win 29200, options [mss 1460,sackOK,TS val 1471701 ecr 0,nop,wscale 7], length 0

10:32:05.216395 IP > Flags [S], seq 265927267, win 29200, options [mss 1418,sackOK,TS val 1471701 ecr 0,nop,wscale 7], length 0

10:32:06.210783 IP > Flags [S], seq 265927267, win 29200, options [mss 1460,sackOK,TS val 1471951 ecr 0,nop,wscale 7], length 0

10:32:06.212291 IP > Flags [S], seq 265927267, win 29200, options [mss 1418,sackOK,TS val 1471951 ecr 0,nop,wscale 7], length 0


It looked like the ILB was forwarding the packets (it's actually just rewriting the destination IP and port; as you can see, the rest of the packet stays the same), and then the Linux kernel would simply drop them. It turns out this behavior is actually a clever idea. But why?

Because the packet the Linux kernel sees in its interface input queue could never have gotten there legitimately! It carries a local IP address as both its source and its destination. If the network stack wanted to send a packet to the local host, that packet would never have been sent onto the network; it would have been handled within the network stack already, much like a packet to localhost. So any incoming packet with a local IP address as its source must be evil, probably a spoofing attack directed at some local service that accepts local connections without further authentication. Dropping this packet is actually clever.

But how can we prove that this is actually what happens? Fortunately, there's a kernel runtime configuration switch that enables logging of such packets. And this is where the title of this post comes from: the setting is called log_martians. It can be set globally (echo 1 > /proc/sys/net/ipv4/conf/all/log_martians) or for a specific interface (e.g. echo 1 > /proc/sys/net/ipv4/conf/eth0/log_martians). The kernel then logs these events to syslog, and they can also be seen by running dmesg.
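To make the switch survive a reboot, the same setting can go into a sysctl configuration file (a sketch; the file name is arbitrary):

```
# /etc/sysctl.d/90-log-martians.conf -- log packets with impossible sources
net.ipv4.conf.all.log_martians = 1
net.ipv4.conf.default.log_martians = 1
```

Load it with sysctl -p /etc/sysctl.d/90-log-martians.conf, or simply reboot.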

In syslog, these packets show up like this:

Sep 15 11:05:06 ILB-a kernel: [ 8178.696265] IPv4: martian source from, on dev eth0
Sep 15 11:05:06 ILB-a kernel: [ 8178.696272] ll header: 00000000: 00 0d 3a 20 2a 32 54 7f ee 8f a6 3c 08 00        ..: *2T….<..

Conclusion: by default, the Linux kernel drops any packet that appears to come from a local IP but shows up in the network input queue. And that's not the ILB's fault. The ILB works just fine as long as you don't connect to it from a machine that is also a potential destination of the ILB.

Fortunately, this limitation rarely matters in real-life architectures. As long as the clients of an ILB and the servers load-balanced by it are distinct (as they are in the Galera example on this blog), the ILB just works. If you actually have to connect back to the same server, you either build a workaround with a timeout and retry in the connection to the ILB, or reconfigure the Linux kernel to accept such packets in the input queue. How to do that with the kernel configuration parameters I leave as an exercise to the reader. 😉

Hope this helps,

Source: msdn

Posted in Microsoft | Leave a comment

Running a MySQL Galera cluster on Microsoft Azure


A few weeks ago, I was looking into running a MySQL Galera Cluster for a customer with a large Linux IAAS deployment on Azure.

Why that? There's ClearDB, a Microsoft partner that offers MySQL on Azure as SaaS (software as a service), so you can just go and pick your size. Or, if you want to run it yourself, you can pick an Ubuntu Linux gallery image, type "apt-get install mysql-server", and that's it, right? Well, not so fast…

ClearDB is a great offering for most customers that need a MySQL backend, but in this case, even the largest ClearDB offer was not sufficient.

So the customer went down the second path: he created an IAAS VM (actually several VMs, each running an independent database server for different purposes) and configured his services to use these databases via their internal IP addresses. But there's one problem with this approach: occasionally, Azure needs to deploy patches to the host systems running these VMs. And occasionally, the Linux VMs themselves need patches that require a restart of the database server or a reboot of the machine. Whenever this happened, the customer's site would be down for a few minutes.

To avoid this occasional downtime, I teamed up with Oli Sennhauser, CTO at FromDual and my colleague Christian Geuer-Pollmann to set up a MySQL Galera Cluster on Azure.

Such a cluster consists of three MySQL VMs. Database connections can be handled by all three machines, so the DB (read) load is distributed as well. As long as two machines are up, the database service is available. Galera achieves this by implementing the replication of database write transactions. More information can be found on and on 
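The "two out of three" rule is simply majority quorum; a tiny shell sketch of my own (not part of the Galera setup) makes the arithmetic explicit:

```shell
#!/bin/sh
# Majority quorum: a 3-node cluster stays available while more than
# half of its nodes are up, i.e. it tolerates exactly one failure.
n=3
for failed in 0 1 2; do
    up=$((n - failed))
    if [ $((2 * up)) -gt "$n" ]; then
        echo "$failed node(s) down, $up up: service available"
    else
        echo "$failed node(s) down, $up up: no quorum, service stops"
    fi
done
```

This is also why Galera clusters use an odd number of nodes: with two nodes, losing either one already costs the majority.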

So, here’s the tl;dr version of what we did:

– Set up three Ubuntu 14.04 LTS IAAS VMs with fixed internal IP addresses
– Set up an Azure internal load balancer so that database clients have a single IP they connect to
– Installed mysql-server-wsrep-5.6.16-25.5 and galera-25.3.5 plus a few dependencies
– Configured galera on these three machines
– Added a bit of iptables magic, courtesy of FromDual, to the VMs to block access to the MySQL port while a database server is recovering. The internal load balancer then moves the clients to the other servers of the cluster in case one is down.
– And to keep this all neat and clean, we used PowerShell to automate the Azure setup part.

0. Prerequisites

The fixed internal IP and the internal load balancer use features that were added to Azure virtual networks only quite recently. Chances are that if you configured an Azure virtual network a while ago, these features may not be available. So just configure a new virtual network for this.

Currently, some of these features can only be configured via PowerShell. So you need a (Windows) machine to run PowerShell on; if you don't have one handy, just create a small (A1) Windows Server machine in the Azure portal and use RDP to connect to it. Then install Azure PowerShell, see here.

And you should plan ahead a bit for your new virtual network. It should have sufficient IP addresses to host all your database clients, the three servers of the cluster, and the additional input IP address of the load balancer. In this case, we used the default settings but placed all the database servers in the subnet.

1. Creating the machines and the internal load balancer

As said before, we scripted all of this in PowerShell. To keep the configuration apart from the actual commands, we set a bunch of variables at the top of the script that contain the actual settings. So when you see $servicename in the examples below, it is set in this header.

The load balancer is created by this PowerShell command:

Add-AzureInternalLoadBalancer -ServiceName $servicename -InternalLoadBalancerName $loadBalancerName -SubnetName $subnetname -StaticVNetIPAddress $loadBalancerIP

We found that the cloud service needs to be deployed before this command can run. To ensure this, we first created a small placeholder IAAS VM, then created the load balancer and the database VMs, and finally removed the placeholder VM again.

To configure a VM to use the internal load balancer, we add an endpoint to the VM configuration:

Add-AzureEndpoint `
            -Name mysql `
            -LocalPort 3306 `
            -PublicPort 3306 `
            -InternalLoadBalancerName $loadBalancerName `
            -Protocol tcp `
            -ProbePort 3306 `
            -ProbeProtocol "tcp" `
            -ProbeIntervalInSeconds 5 `
            -ProbeTimeoutInSeconds 11 `
            -LBSetName mysql

Since we have multiple Linux VMs in the same cloud service, we need to remove the standard SSH endpoint and create an individual SSH endpoint for each machine:

Remove-AzureEndpoint `
            -Name SSH `
            | `
Add-AzureEndpoint `
            -Name SSH `
            -LocalPort 22 `
            -PublicPort $externalSshPortNumber `
            -Protocol tcp

And we want to use a static internal IP for each machine, since we need to specify these IP addresses in the Galera configuration:

Set-AzureSubnet -SubnetNames $subnetname `
            | `
Set-AzureStaticVNetIP -IPAddress $machineIpAddress `
            | `

We wrapped all this into a configuration function called Get-CustomVM. So here's the complete script:

#
# Set up three VMs for a Galera Cluster
#

# Azure Cmdlet Reference
#

$subscriptionId     = "<your subscription ID here>"
$imageLabel         = "Ubuntu Server 14.04 LTS"       # One from Get-AzureVMImage | select Label
$datacenter         = "West Europe" # change this to your preferred data center, your VNET and storage account have to be set up there as well
$adminuser          = "<your linux user name here>"
$adminpass          = "<a linux password>"
$instanceSize       = "ExtraSmall" # ExtraSmall,Small,Medium,Large,ExtraLarge,A5,A6,A7,A8,A9,Basic_A0,Basic_A1,Basic_A2,Basic_A3,Basic_A4
$storageAccountName = "<the storage account name for the vm harddisk files>"
$vnetname           = "<the name of your vnet>"
$subnetname         = "<the name of the subnet for the database servers>"

$loadBalancerName   = "galera-ilb" # should be changed if there are multiple galera clusters
$loadBalancerIP     = ""

$servicename        = "<your service name>" # all machines will be created in this service
$availabilityset    = "galera-as" # should be changed if there are multiple galera clusters

#
# Calculate a bunch of properties
#
$subscriptionName = (Get-AzureSubscription | `
    select SubscriptionName, SubscriptionId | `
    Where-Object SubscriptionId -eq $subscriptionId | `
    Select-Object SubscriptionName)[0].SubscriptionName

Select-AzureSubscription -SubscriptionName $subscriptionName -Current

$imageName = (Get-AzureVMImage | Where Label -eq $imageLabel | Sort-Object -Descending PublishedDate)[0].ImageName

$storageAccountKey = (Get-AzureStorageKey -StorageAccountName $storageAccountName).Primary

$storageContext = New-AzureStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $storageAccountKey

#
# Fix the local subscription object
#
Set-AzureSubscription -SubscriptionName $subscriptionName -CurrentStorageAccount $storageAccountName

#
# This function encapsulates the configuration generation of a single new Galera VM
#
Function Get-CustomVM
{
    Param (
        [string]$customVmName,
        [string]$machineIpAddress,
        [int]$externalSshPortNumber,
        [string] $storageAccountName = $storageContext.StorageAccountName
        )

    #
    # configure the VM object
    #
    $vm = New-AzureVMConfig `
            -Name $customVmName `
            -InstanceSize $instanceSize `
            -ImageName $imageName `
            -AvailabilitySetName $availabilityset `
            -MediaLocation "https://$$customVmName-OSDisk.vhd" `
            -HostCaching "ReadOnly" `
            | `
        Add-AzureProvisioningConfig `
            -Linux `
            -LinuxUser $adminuser `
            -Password $adminpass `
            | `
        Set-AzureSubnet -SubnetNames $subnetname `
            | `
        Set-AzureStaticVNetIP -IPAddress $machineIpAddress `
            | `
        Remove-AzureEndpoint `
            -Name SSH `
            | `
        Add-AzureEndpoint `
            -Name SSH `
            -LocalPort 22 `
            -PublicPort $externalSshPortNumber `
            -Protocol tcp `
            | `
        Add-AzureEndpoint `
            -Name mysql `
            -LocalPort 3306 `
            -PublicPort 3306 `
            -InternalLoadBalancerName $loadBalancerName `
            -Protocol tcp `
            -ProbePort 3306 `
            -ProbeProtocol "tcp" `
            -ProbeIntervalInSeconds 5 `
            -ProbeTimeoutInSeconds 11 `
            -LBSetName mysql

    $vm
}

#
# 0. Create cloud service before instantiating internal load balancer
#
if ((Get-AzureService | where ServiceName -eq $servicename) -eq $null) {
    Write-Host "Create cloud service"
    New-AzureService -ServiceName $servicename -Location $datacenter
}

#
# 1. Create a dummy VM with an external endpoint so that the internal load balancer (which is in preview) is willing to be created
#
$dummyVM = New-AzureVMConfig -Name "placeholder" -InstanceSize ExtraSmall -ImageName $imageName `
    -MediaLocation "https://$" -HostCaching "ReadWrite" `
    | Add-AzureProvisioningConfig -Linux -LinuxUser $adminuser -Password $adminpass `
    | Set-AzureSubnet -SubnetNames $subnetname `
    | Set-AzureStaticVNetIP -IPAddress ""

New-AzureVM -ServiceName $servicename -VNetName $vnetname -VMs $dummyVM

#
# 2. Create the internal load balancer (no endpoints yet)
#
Add-AzureInternalLoadBalancer -ServiceName $servicename -InternalLoadBalancerName $loadBalancerName -SubnetName $subnetname -StaticVNetIPAddress $loadBalancerIP
if ((Get-AzureInternalLoadBalancer -ServiceName $servicename) -ne $null) {
    Write-Host "Created load balancer"
}

#
# 3. Create the cluster machines and hook them up to the ILB (without "-Location $datacenter -VNetName $vnetname", because the $dummyVM pinned these already)
#
$vm1 = Get-CustomVM -customVmName "galera-a" -machineIpAddress "" -externalSshPortNumber 40011
$vm2 = Get-CustomVM -customVmName "galera-b" -machineIpAddress "" -externalSshPortNumber 40012
$vm3 = Get-CustomVM -customVmName "galera-c" -machineIpAddress "" -externalSshPortNumber 40013
New-AzureVM -ServiceName $servicename -VMs $vm1,$vm2,$vm3

#
# 4. Delete the dummy VM
#
Remove-AzureVM -ServiceName $servicename -Name $dummyVM.RoleName -DeleteVHD

Now the load balancer and the three VMs are created.

2. Install and configure Galera on the three VMs

We took the galera .deb packages from and 

In these packages, we found a few minor glitches that collided with the Ubuntu 14.04 LTS we installed them on.

The first glitch was that mysql-server-wsrep-5.6.16-25.5-amd64.deb declares a dependency on mysql-client. Ubuntu sees this satisfied by the mysql-client-5.5 package it uses by default, but that creates a version conflict. So I downloaded the .deb and modified its dependency to point to mysql-client-5.6 by following.

The second glitch was that the default my.cnf contains the log path /var/log/mysql/error.log, which does not exist on Ubuntu. This led to the strange situation that the server process would not start but just left two mysterious entries in syslog. Running strace on the server process showed the path it was trying to access, and once I created it, everything worked fine.

A third glitch was that the package was missing an upstart script for mysql; it only shipped a classic /etc/init.d shell script, which confused upstart. So I took the upstart script from a standard mysql-server-5.6 package, and everything worked out well.

The steps to set up Galera were:

$ apt-get install mysql-client-5.6
$ apt-get install libssl0.9.8
$ dpkg -i galera-25.3.5-amd64.deb
$ dpkg --force-depends -i mysql-server-wsrep-5.6.16-25.5-amd64.modified.deb
$ mkdir /var/log/mysql
$ chown mysql /var/log/mysql

and put the standard upstart script from mysql-server-5.6 into the upstart config directory.

The next part was to configure the Galera cluster function. As you can see in the script above, we created three machines with fixed internal IP addresses. For this, we need to set a few things in the default my.cnf:

wsrep_cluster_name="<your cluster name here>"

These settings are the same on all three machines. On each machine, we can now set a human-readable node name, e.g.

wsrep_node_name=’Node A’

In the next step, we configured the actual clustering, i.e., we told each machine where to find the replication partners.

On the first machine, we set the following line in my.cnf:


This allows this database node to come up even if there is no replication partner.

Then we started the server on the first machine.

Then we set the following line in my.cnf on the second machine:


and started the server on the second machine.

Then we set the following line in my.cnf on the third machine:


and started the server on the third machine.

Now we went back to the first machine and changed the line to:


and restarted the server. Now the galera cluster was configured.

Instead of changing the configuration of the initial node twice, one can also start the server process directly and pass the setting on the command line, e.g. mysqld_safe wsrep_cluster_address="gcomm://". This is a good workaround if, for whatever reason, the cluster was fully shut down and needs to be brought up manually again.
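Put together, the wsrep-related part of my.cnf ends up looking roughly like this. This is a sketch with placeholder values, not the actual configuration from this setup; the provider path varies per distribution, and the binlog/InnoDB settings are the ones Galera generally requires:

```
[mysqld]
# Galera prerequisites
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2

# wsrep/Galera settings (placeholders)
wsrep_provider=/usr/lib/galera/
wsrep_cluster_name="<your cluster name here>"
wsrep_cluster_address="gcomm://<node-a-ip>,<node-b-ip>,<node-c-ip>"
wsrep_node_name='Node A'
```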

Since the internal load balancer was already configured, the clients can now use the ILB input IP address to connect to the cluster. With each new TCP connection, the load balancer chooses one of the running nodes and connects the client to it.

There is one additional issue that may confuse clients in one specific situation. Imagine one of the nodes has just failed and is about to start up again. In this state, the database server can be reached but does not yet have the data replicated from the other nodes: although clients can connect, all database commands will fail. If clients aren't prepared to handle this situation, it shows up as database errors in applications. But there's a solution: FromDual has implemented a small shell script that uses the Linux iptables firewall to deny access to the server while it is in this state. The load balancer then finds that it cannot reach the TCP port and routes requests to another running cluster node.

To run the script whenever a replication state change occurs, another line is added to my.cnf:

wsrep_notify_cmd = /usr/local/bin/

The script and the instructions for setting it up can be found here: Don't be alarmed by the fact that the page talks about hardware load balancers; it works the same with the (software-based) Azure internal load balancer.
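The core idea of such a notification hook can be sketched in a few lines of shell. This is my simplified illustration, not FromDual's actual script; in particular, the way the status value arrives is an assumption (the real wsrep_notify_cmd interface passes it via a --status option):

```shell
#!/bin/sh
# Simplified sketch of a wsrep notification hook: map the reported
# node status to an action on the MySQL port. Only a "Synced" node
# should answer the load balancer's TCP probe on 3306.
decide() {
    case "$1" in
        [Ss]ynced) echo open  ;;  # node has the data: accept connections
        *)         echo close ;;  # Joining/Donor/...: block port 3306 so
    esac                          # the probe fails and clients go elsewhere
}

# The real script would run iptables here (requires root), e.g.:
#   iptables -I INPUT -p tcp --dport 3306 -j REJECT   # on "close"
#   iptables -D INPUT -p tcp --dport 3306 -j REJECT   # on "open"
decide Synced   # prints "open"
decide Donor    # prints "close"
```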

Hope this helps,


Source: msdn

Posted in Microsoft | Leave a comment

Azure from the Linux command line (part 2)


About a month ago, I wrote the first post of this series, where I showed how to set up the xplat CLI (cross-platform CLI) on Linux and how to create IAAS VMs on Azure.

But that approach had one important drawback: it creates the VMs with a default user and password, but without an SSH key set up for login.

So let me fix this here.

If you're familiar with SSH on Unix platforms, the usual pattern is to use ssh-keygen to create a key pair, push the public key into the ~/.ssh/authorized_keys file on the remote host, and keep the private key in your ~/.ssh/id_rsa file. When using the same user name on both sides, the command ssh <remotehost> then just works without entering a password. And so do scp, sftp, and (if you have set the RSYNC_RSH environment variable to ssh in your login script) rsync. And since you have probably used an empty passphrase for the secret key, this works nicely from scripts. (Of course, I don't recommend empty passphrases in general, especially not for privileged accounts.)

On Microsoft Azure, we have an internal key-deployment mechanism that is used for multiple things: it can deploy keys into Windows and Linux VMs, into PAAS roles, and so on. This mechanism is also used to deploy your SSH public key into your IAAS VMs. But in order to work, it needs the keys in a common universal file format, so just generating the keys with ssh-keygen won't work. Instead, you can use openssl to generate the private key and certificate files in X.509 DER format:

$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myPrivateKey.key -out myCert.pem
$ chmod 600 myPrivateKey.key
$ openssl  x509 -outform der -in myCert.pem -out myCert.cer

The first line generates the key pair; as you have probably guessed from the command-line parameters, it's a 2048-bit RSA key pair with a certificate lifetime of 365 days. Again, you can create this key without a passphrase, but that might be a security risk.

Remember the bash script line to create a VM from part one:

$ azure vm create -e -z extrasmall -l "West Europe" $1 $IMAGENAME azureuser "$PASSWORD"

Now let’s modify this to use the newly generated key in addition to the password:

$ azure vm create -e -t myCert.pem -z extrasmall -l "West Europe" $1 $IMAGENAME azureuser "$PASSWORD"

This creates the VM, but this time azureuser gets a pre-configured authorized_keys file.

There is one difference when SSHing into this VM: you need to specify the key to use for authentication and the remote user name:

$ ssh -i myPrivateKey.key azureuser@<cloudservicename>

And now you’re not asked for a password anymore.

The -i option also works for scp and sftp. For rsync, you can use

$ export RSYNC_RSH="ssh -i /path/to/myPrivateKey.key"

or use the rsync --rsh "ssh -i /path/to/myPrivateKey.key" command-line option to specify the remote shell and identity file to use.
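Alternatively, a host entry in ~/.ssh/config saves typing the -i option every time (a sketch; the host alias and paths are placeholders):

```
Host myazurevm
    HostName <cloudservicename>
    User azureuser
    Port 22
    IdentityFile ~/myPrivateKey.key
```

With that in place, plain ssh myazurevm works, and scp, sftp, and rsync pick up the same settings automatically.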

Hope it helps,



Source: msdn

Posted in Microsoft | Leave a comment

Azure from the Linux command line (part 1)


Since I've grown up IT-wise with a Unix command shell, I tend to do a lot of things with it. That includes managing my Azure deployments, since there's the great Azure command line interface, or cross-platform ("xplat") CLI.

(If you're interested in the details, this is all open source, released under an Apache license, and on GitHub:

This blog post documents a few tricks I’ve been using to get up and running fast.

First, you need to connect the xplat CLI to your Azure subscription. To do that, simply run

$ azure download

after installing the CLI. If you're on a remote machine via ssh, this will simply give you a URL to open in your browser. Make sure you're already logged into the Azure portal; otherwise, you will need to log in first when going to this URL.

The website will then offer a .publishsettings file for download. The same file is used when setting up a connection between Visual Studio and an Azure subscription.

Now get this file to your Linux box (and make sure you keep it safe in transit; this file contains a management certificate key that can manage your subscription!) and import it into the xplat CLI:

$ azure account import <publishsettingsfile>

And now you’re all set.

Now let's look around:

$ azure help

info:    Executing command help
info:             _    _____   _ ___ ___
info:            /_  |_  / | | | _ __|
info:      _ ___/ _ __/ /| |_| |   / _|___ _ _
info:    (___  /_/ _/___|___/|_|____| _____)
info:       (_______ _ _)         _ ______ _)_ _
info:              (______________ _ )   (___ _ _)
info:    Windows Azure: Microsoft’s Cloud Platform
info:    Tool version 0.7.4
help:    Display help for a given command
help:      help [options] [command]
help:    Open the portal in a browser
help:      portal [options]
help:    Commands:
help:      account        Commands to manage your account information and publish settings
help:      config         Commands to manage your local settings
help:      hdinsight      Commands to manage your HDInsight accounts
help:      mobile         Commands to manage your Mobile Services
help:      network        Commands to manage your Networks
help:      sb             Commands to manage your Service Bus configuration
help:      service        Commands to manage your Cloud Services
help:      site           Commands to manage your Web Sites
help:      sql            Commands to manage your SQL Server accounts
help:      storage        Commands to manage your Storage objects
help:      vm             Commands to manage your Virtual Machines
help:    Options:
help:      -h, --help     output usage information
help:      -v, --version  output the application version

That does not look too bad after all. Just remember azure help <command>, this is your first stop whenever you get stuck.

So let’s set up a linux VM. First let’s check what pre-configured linux images are available.

$ azure vm image list

Now you should see a lot of images. When I just ran this, I got more than 200 lines of output. Image names look like this:

b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-13_10-amd64-server-20140226-en-us-30GB
Now we could copy this name to our clipboard and paste it into the next command, but let’s have the shell do that for us, here’s the idea:

IMAGENAME=`azure vm image list | grep -i Ubuntu-13_10-amd64-server | tail -1 | awk '{print $2}'`

Get the list of VM images, select only the ones we're interested in, take the last (i.e. the most recent) entry of that list, and print just the second string, which is the image name. Easy, right? Note the single backquotes at the beginning and the end of that line; this is shell syntax for "take the output of this command and store it in this shell variable".
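You can try the same pipeline on canned input to see what each stage does. The sample lines below are made up to mimic the "data: <image-name> …" rows of the real listing; only the 20140226 image name is taken from the actual output above.

```shell
# Two hypothetical rows, mimicking 'azure vm image list' output:
SAMPLE='data: b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-13_10-amd64-server-20140225-en-us-30GB Linux
data: b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-13_10-amd64-server-20140226-en-us-30GB Linux'

# grep narrows the list to the images we want, tail -1 keeps the newest,
# and awk prints the second whitespace-separated field (the image name):
IMAGENAME=`echo "$SAMPLE" | grep -i Ubuntu-13_10-amd64-server | tail -1 | awk '{print $2}'`
echo "$IMAGENAME"
```

Running this prints the 20140226 image name, exactly the string the real command stores in $IMAGENAME.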

To use the VM, we need to log in, so let's set a password for now:

PASSWORD='AtotallySECRET!PA55W0RD'
echo Password is $PASSWORD

Next, let’s create the VM:

azure vm create -e -z extrasmall -l "West Europe" $1 $IMAGENAME azureuser "$PASSWORD"

Here’s the output of running this shell script:

$ bash contosolinux

Password is AtotallySECRET!PA55W0RD
info:    Executing command vm create
+ Looking up image b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-13_10-amd64-server-20140226-en-us-30GB
+ Looking up cloud service
+ Creating cloud service
+ Retrieving storage accounts
+ Creating VM
info:    vm create command OK

And after about two minutes I can ssh into it as "azureuser" with that super secret password.

Hope it helps,


ps: to get rid of the VM again, I just type azure vm delete -b contosolinux

pps: in case that’s too harsh, azure vm shutdown contosolinux, azure vm start contosolinux and azure vm restart contosolinux work as well. And azure vm list shows you what Azure thinks your VMs are doing right now.

ppps: And in case you were wondering why there was no root password set: just run sudo bash from this initial user account.


Source: msdn

Posted in Microsoft | Leave a comment

Setting up a Linux FTP server on Windows Azure


This post is about hosting FTP in a Linux VM on Windows Azure. Spoiler alert: the catch is that you may need to lower the TCP keepalive time in the Linux kernel so that very long FTP transfers survive the Azure load balancer. But I'll get to that.

A few weeks ago, a customer needed to run their FTP server on Windows Azure. Being familiar with Linux and having a pretty complex proftpd configuration, the customer decided to keep this all on Linux.

So let’s recall again what’s so special about FTP:

  • FTP uses two connections, a control connection that you use for sending commands to a server and a data connection that gets set up whenever there is data to be transferred.
  • FTP has two ways to set up such a data connection: active and passive. In passive mode, the client opens a second connection to the same server, but on a different port. In active mode, the client creates a listening port, and the server then opens a connection to this port on the client.
  • And in today's world of client systems behind firewalls and NAT devices, active mode inevitably fails: hardly any client is still reachable from the public internet, let alone able to open a listening port there.
  • Lucky enough, most off-the-shelf FTP clients including the ones in web browsers default to passive mode.
  • There are some funny things you can do with FTP, e.g. FXP, where one FTP server in active mode directly transfers to another ftp server in passive mode.

And recall what’s special about Windows Azure networking:

  • Every connection from the outside to an Azure VM goes through a cloud service endpoint. There are no “exposed hosts”.

So in order to have the “passive” data connections reach their destination, one has to configure a bunch of endpoints in the Azure configuration and then tell the FTP server to use these endpoints for incoming data connections. One could configure each of those endpoints manually through the Windows Azure portal, but that’s time-consuming and error-prone. So let’s use a script to do that… (I’m using the Linux command line tools from )

$ azure vm endpoint create contosoftp 21

$ for ((i=20000;i<20020;i++)); do azure vm endpoint create contosoftp $i; done

This creates 20 endpoints for the FTP data connections and the usual port 21 endpoint for the control connection.

Now we need to tell proftpd (or any other FTP daemon of your choice) to use exactly this port range when opening data connection listening sockets.

In /etc/proftpd/proftpd.conf:

PassivePorts 20000 20019

As you may know, Windows Azure VMs use local, non-public IP addresses. In order to tell the client what IP address to talk to when opening the data connection, the FTP server needs to know its external, public address, i.e. the address of its endpoint. Proftpd has all the required functionality; it just needs to be enabled via the MasqueradeAddress directive.
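A minimal sketch of that entry, assuming the cloud service from above is named contosoftp (the cloudapp.net hostname is my assumption, derived from that service name; use your own endpoint's DNS name or public IP):

```
# /etc/proftpd/proftpd.conf
# Advertise the public endpoint address in passive-mode replies
# (hostname below is a placeholder based on the example service name):
MasqueradeAddress contosoftp.cloudapp.net
```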


And that’s it.

Now the customer used this configuration, but once in a while, a customer reported that a very long-running FTP transfer would not go through but break because of a “closed control connection”.

After thinking a bit, we concluded this is a side effect of the Windows Azure load balancer that manages the endpoints. When the load balancer does not see traffic on a connection for a while (at the earliest after about 60 seconds), it may "forget about" an established TCP connection. In our case, the control connection of the ongoing transfer was idle while the data connection was happily pumping data.

Lucky enough, there's a Unix socket option called "TCP keepalive" which makes idle but open connections send a few control packets to inform everything on the network that this connection is still in use. And proftpd (from version 1.3.5rc1 on) supports a "SocketOptions keepalive on" directive to enable this behavior on its connections. Great!
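In proftpd, that is a one-line configuration change, again in /etc/proftpd/proftpd.conf (the directive itself is as described above; it only exists from 1.3.5rc1 on):

```
# Enable TCP keepalive on proftpd's sockets (requires proftpd >= 1.3.5rc1)
SocketOptions keepalive on
```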

But even enabling this didn’t solve the issue, since there is a default in the Linux kernel for when these keepalive packets are first sent:

$ cat /proc/sys/net/ipv4/tcp_keepalive_time
7200

OK, that's 7200 seconds, which is two hours. That's a bit long for our load balancer.

# sysctl -w net.ipv4.tcp_keepalive_time=60

That's better. But remember this is a runtime setting in the Linux kernel, so in order for it to survive a reboot, put it into a convenient place in /etc/rc*.d/
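One way to persist it, assuming a Debian/Ubuntu-style boot setup (the exact file is a matter of taste, /etc/rc.local is just one convenient place; most distributions also read /etc/sysctl.conf at boot):

```
# e.g. in /etc/rc.local, which runs at the end of a multi-user boot:
sysctl -w net.ipv4.tcp_keepalive_time=60

# or, equivalently, as a line in /etc/sysctl.conf:
# net.ipv4.tcp_keepalive_time = 60
```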

Hope this helps,








Source: msdn

Posted in Microsoft | Leave a comment

Hello, World!

~# apt-get install hello

~# hello -n

Hello, World!

About two months ago, I switched jobs. In my last job, I worked as an applied researcher and software development engineer at ATL Europe, an applied Microsoft lab that is part of Microsoft Research.

So here I am, working for Developer and Platform Evangelism at Microsoft Germany. I'll focus on a couple of things that I've been dealing with in the past:

  • Windows Azure
  • Open Source Software, especially Linux on Azure and
  • the “Internet of Things”.

I’ll record my findings in this blog, both from my own experiences and from my work with partners. I’ll blog whenever I learn something that I think will help others.

But please remember: I'm writing these posts at a particular point in time. Hardware, software, services and devices all evolve over time, and what may be true at the time of writing may be different at the time you read this.



ps: you can find my old personal blog and some info about myself on


Source: msdn

Posted in Microsoft | Leave a comment