Samba

Planet Samba

Here you will find the personal blogs of Samba developers (for those that keep them). More information about members can also be found on the Samba Team page.

January 20, 2015

Andreas

New uid_wrapper with full threading support.

Today I’ve released a new version of uid_wrapper (1.1.0) with full threading support. Robin Hack, a colleague of mine, spent a lot of time improving the code and writing tests for it. It now survives funny things like forking in a thread. We also added two missing functions and fixed several bugs. uid_wrapper is a tool to help you write tests for your application.

If you don’t know uid_wrapper and wonder what you can do with it, here is an example:

$ id
uid=1000(asn) gid=100(users) groups=100(users),478(docker)
$ LD_PRELOAD=libuid_wrapper.so UID_WRAPPER=1 UID_WRAPPER_ROOT=1 id
uid=0(root) gid=0(root) groups=0(root)

More details about uid_wrapper can be found on the cwrap project website.


January 20, 2015 05:06 PM

January 09, 2015

Michael

vagrant with lxc and libvirt on fedora

I recently got interested in Vagrant as a means of automating the setup of virtual build and test environments, especially for my Samba/CTDB development: setting up clustered Samba servers in particular is somewhat involved, and being able to easily produce a pristine new test and development environment is highly desirable.

It took some time for me to get it right, mainly because I did not choose the standard virtualbox hypervisor but wanted to stick to my environment, which uses lxc for Linux and libvirt/kvm for everything else, but also to some extent because I am now using Fedora on the host and on many Linux boxes, and I had to learn that vagrant and lxc don’t seem to play best with Fedora. Since others might profit from it as well, I thought I’d publish the results and write about them. This post is the first in a series of articles leading up to an environment where vagrant up on a Fedora host provisions and starts, e.g., a 3(or 4 or …)-node Samba-CTDB cluster in LXC. This first post describes the steps necessary to run vagrant with the libvirt and lxc backends on Fedora 21.

Vagrant concepts

There is extensive documentation at docs.vagrantup.com, so just a few introductory words here… Vagrant is a command line program (written in ruby) that executes simple (or more complicated) recipes to create, provision, start, and manage virtual machines or containers. The main points are disposability and reproducibility: call vagrant up and your environment will be created out of nothing. Destroy your work environment with vagrant destroy after doing some work, and it is gone for good. Call vagrant up on the same or on a different host, and there it is again in a pristine state.

Vagrantfile

The core of a vagrant environment is the Vagrantfile, which is the recipe. The Vagrantfile specifies the resulting machine by (usually) naming a virtualization provider and a base image (called a box in vagrant lingo) and giving instructions for further provisioning the base box. The command vagrant up executes these steps. Since the Vagrantfile is in fact a ruby program snippet, the things you can do with it are quite powerful.

Boxes

The base boxes are a concept not entirely unlike docker images. There are many pre-created boxes available on the web that can be downloaded and stored locally, but there is also Hashicorp’s atlas, which offers a platform to publish boxes and which is directly integrated with vagrant, comparable to what the docker hub is for docker.

Providers

Vagrant was created with virtualbox as the virtualization provider, but nowadays it also supports docker and hyper-v out of the box. My personal work environment consists mostly of libvirt/kvm for Windows and non-Linux unix systems and lxc for Linux, so vagrant does not immediately seem to fit. But after some research I found out that, luckily, these providers already exist externally: vagrant-libvirt and vagrant-lxc. These and more providers like aws, vmware and native kvm can be easily installed via vagrant’s plugin system.

Plugins

Vagrant can be extended via plugins which can add providers but also other features. Two very interesting plugins that I’ve come across and would recommend to install are vagrant-cachier, which establishes a host cache for packages installed inside VMs, and vagrant-mutate, which converts boxes between different provider formats — this is especially useful for converting some of the many available virtualbox boxes to libvirt.

Provisioning

The provisioning of the VMs can be done by hand with inline or external shell scripts, but Vagrant is also integrated with puppet, ansible, chef and others for more advanced provisioning.
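
Provisioners can also be re-run on a machine that is already up, which is handy while iterating on a setup. A quick sketch using standard vagrant subcommands:

vagrant provision        # re-run only the provisioning steps
vagrant up --provision   # force provisioning for an already-created machine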

The vagrant command

The central command is vagrant. It has a nice cli with help for subcommands. Here is the list of subcommands that I have used most:

vagrant box      - manage boxes
vagrant plugin   - manage plugins
vagrant init     - initialize a new Vagrantfile
vagrant up       - create/provision/start the machine
vagrant ssh      - ssh into the machine as user vagrant
vagrant suspend  - suspend the machine
vagrant resume   - resume the suspended machine
vagrant halt     - stop the machine
vagrant destroy  - remove the machine completely

All data that vagrant maintains is user-specific and stored by default under ~/.vagrant.d/, e.g. plugins, required ruby gems, boxes, cache (from vagrant-cachier) and so on.

Vagrant on Fedora

Fedora does not ship Vagrant, but there are plans to package it — not sure it will make it for f22. One could install vagrant from git, but there is also an RPM on Vagrant’s download site, and it installs without problems. So let’s use that for a quick start.

It is slightly irritating at first sight that the RPM does not have any dependencies, though it should require ruby and some gems at the very least. But the vagrant RPM ships its own ruby and a lot of other stuff, even some precompiled system libraries, under /opt/vagrant/embedded, so you can start running it without even installing ruby or any gems in the system. This vagrant is configured to always use the builtin ruby and libraries, and as you will see, this does create some problems.

At this stage, you are good to go if you are using virtualbox (which I am not). For example, there is a very new fedora21-atomic image on atlas, jimmidyson/fedora21-atomic. All it takes to fire up a fedora box with this base box is:

vagrant box add jimmidyson/fedora21-atomic --provider virtualbox
mkdir -p ~/vagrant/test
cd ~/vagrant/test
vagrant init jimmidyson/fedora21-atomic
vagrant up

Installing vagrant-lxc

It is actually fairly easy to install vagrant-lxc: the basic call is

vagrant plugin install vagrant-lxc

But there are some mild prerequisites, because the plugin installer wants to install some ruby gems, in this case json. The vagrant installed from the official RPM installs the gems locally even if a matching version is present in the system, because this vagrant only ever uses the ruby stuff installed under /opt/vagrant/embedded and under ~/.vagrant.d/gems/. For vagrant-lxc, this is not a big deal: we only need to install make and gcc, because the json gem needs to compile some C code. So these are the complete steps needed to install the vagrant-lxc provider:

sudo yum install make gcc
vagrant plugin install vagrant-lxc

Afterwards, vagrant plugin list prints

$ vagrant plugin list
vagrant-lxc (1.0.1)
vagrant-share (1.1.4, system)

Now in order to make use of it, apart from having lxc installed, we need network bridges set up on the host, and we need applicable boxes. For the network, the easiest approach is to use libvirt network setups, since at least on Fedora, libvirt defaults to virbr0 anyway. So my recommendation is:


sudo yum install lxc lxc-extra lxc-templates
sudo yum install libvirt-daemon-config-network

This libvirt network component can even be installed when using virtualbox, but if you are planning to use libvirt/kvm anyway, it is a perfect match to hook the lxc containers up to the same networks as the kvm machines, because they can then communicate without further ado.
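
Before the first vagrant up, it is worth verifying that the default libvirt network (which provides virbr0) is defined and active; a quick check with the stock libvirt tools:

sudo virsh net-list --all
sudo virsh net-start default       # if it is defined but not active
sudo virsh net-autostart default   # bring it up on boot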

The task of getting boxes is not that easy. There are really not many of them around: you can search for the lxc provider on atlas and find a dozen or so, mostly ubuntu and some debian boxes by the author of vagrant-lxc. So I needed to create some Fedora boxes myself, and this was not completely trivial; in fact it almost drove me crazy, but that is a separate story to be told. The important bit is that I succeeded and started out with Fedora 21 and 20 boxes, which I published on atlas.

So to spin up an f21 lxc box, this is sufficient:

vagrant box add obnox/fedora21-64-lxc
mkdir -p ~/vagrant/lxc-test/
cd ~/vagrant/lxc-test/
vagrant init obnox/fedora21-64-lxc
vagrant up --provider=lxc

I find it entertaining to have sudo watch lxc-ls --fancy running in a separate terminal all the time. After bringing up the machine, you can vagrant ssh into it and get stuff done. The working directory where your Vagrantfile is stored is bind-mounted into the container as /vagrant, so you can exchange files.
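
A typical round trip then looks roughly like this (hostname and directory contents will of course differ):

$ vagrant ssh
[vagrant@test ~]$ ls /vagrant
Vagrantfile
[vagrant@test ~]$ exit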

Vagrant defaults to virtualbox as a provider, which is why we have to specify --provider=lxc. To avoid this, one can either set the environment variable VAGRANT_DEFAULT_PROVIDER to the provider of choice, or add config.vm.provider :lxc to the Vagrantfile. One can also add a block for the provider to set provider-specific options, for instance the lxc container name to be used. Here is an example of a marginally useful Vagrantfile:

Vagrant.configure("2") do |config|
  if Vagrant.has_plugin?("vagrant-cachier")
    config.cache.scope = :box
  end 

  config.vm.define "foo"
  config.vm.box = "obnox/fedora21-64-lxc"
  config.vm.provider :lxc do |lxc|
    lxc.container_name = "vagrant-test-007"
  end
  config.vm.hostname = "test-007"
  config.vm.provision :shell, inline: "echo Hello, world!"
end

Note the conditional configuration of vagrant-cachier: If installed, users of the same base box will benefit from a common yum cache on the host. This can drastically reduce machine creation times, so better make sure it is installed:

vagrant plugin install vagrant-cachier
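
If you would rather not pass --provider=lxc on every invocation and do not want to hardcode the provider in the Vagrantfile, setting the environment variable mentioned above also works, e.g.:

export VAGRANT_DEFAULT_PROVIDER=lxc
vagrant up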

Installing vagrant-libvirt

Now on to use vagrant with libvirt. In principle, it is as easy as calling

vagrant plugin install vagrant-libvirt

But again there are a few prerequisites and, surprisingly, also pitfalls. As mentioned above, the vagrant-libvirt installer wants to install some ruby gems, specifically ruby-libvirt, even when a matching version of the gem is already installed in the system. In addition to make and gcc, this plugin also needs the libvirt-devel package. Still, the plugin installation failed when I tried to reproduce it on a pristine system, with a strange error: the linker was complaining about certain symbols not being available:

"gcc -o conftest -I/opt/vagrant/embedded/include/ruby-2.0.0/x86_64-linux -I/opt/vagrant/embedded/include/ruby-2.0.0/ruby/backward -I/opt/vagrant/embedded/include/ruby-2.0.0 -I. -I/vagrant-substrate/staging/embedded/include -I/vagrant-substrate/staging/embedded/include -fPIC conftest.c -L. -L/opt/vagrant/embedded/lib -Wl,-R/opt/vagrant/embedded/lib -L/vagrant-substrate/staging/embedded/lib -Wl,-R/vagrant-substrate/staging/embedded/lib -lvirt '-Wl,-rpath,/../lib' -Wl,-R -Wl,/opt/vagrant/embedded/lib -L/opt/vagrant/embedded/lib -lruby -lpthread -lrt -ldl -lcrypt -lm -lc"
/lib64/libsystemd.so.0: undefined reference to `lzma_stream_decoder@XZ_5.0'
/lib64/libxml2.so.2: undefined reference to `lzma_auto_decoder@XZ_5.0'
/lib64/libxml2.so.2: undefined reference to `lzma_properties_decode@XZ_5.0'
/lib64/libsystemd.so.0: undefined reference to `lzma_end@XZ_5.0'
/lib64/libsystemd.so.0: undefined reference to `lzma_code@XZ_5.0'
collect2: error: ld returned 1 exit status
checked program was:
/* begin */
1: #include "ruby.h"
2:
3: int main(int argc, char **argv)
4: {
5: return 0;
6: }
/* end */

This drove me nuts for quite a while, since no matter which libraries and programs I installed or uninstalled on the host, it would still fail the same way. The explanation is that the system-installed lzma library, which libxml2 uses, has symbol versioning. But the vagrant RPM ships its own copy in /opt/vagrant/embedded/lib/liblzma.so.5.0.7, and with all the linker settings in the gcc call, this copy supersedes the system-installed one, so the symbol dependencies fail. In the end, I found the cure by comparing one system that worked with another that didn’t: the gold linker can do it, while the legacy bfd linker can’t. Spooky…
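
Before attempting the plugin install, it is easy to check which linker is currently active, using the same alternatives mechanism as in the fix below:

sudo alternatives --display ld
ld --version | head -n1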

So finally here is the minimal set of commands you need to install vagrant-libvirt on Fedora 21:

sudo yum install -y vagrant_1.7.1_x86_64.rpm
sudo yum install -y make gcc libvirt-devel
sudo alternatives --set ld /usr/bin/ld.gold
vagrant plugin install vagrant-libvirt

Of course, in order for this to be really useful, one needs to install libvirt properly. I do

yum install libvirt-daemon-kvm

because I want to use the kvm backend.
Afterwards, you can bring up a fedora box like this:

vagrant box add jimmidyson/fedora21-atomic --provider libvirt
mkdir -p ~/vagrant/test
cd ~/vagrant/test
vagrant init jimmidyson/fedora21-atomic
vagrant up --provider=libvirt

As already mentioned, it is a good idea to install the vagrant-cachier and vagrant-mutate plugins:

vagrant plugin install vagrant-cachier
sudo yum install -y qemu-img
vagrant plugin install vagrant-mutate

With the mutate plugin you can convert some of the many virtualbox boxes to libvirt.

For the fun of it, here is the Vagrantfile I used to develop and verify the minimal installation procedure inside vagrant-lxc… ;-)

VAGRANTFILE_API_VERSION = "2"

INSTALL_VAGRANT = <<SCRIPT
set -e
yum install -y /vagrant/vagrant_1.7.1_x86_64.rpm
yum install -y make gcc
sudo -u vagrant vagrant plugin install vagrant-lxc
yum install -y libvirt-devel
alternatives --set ld /usr/bin/ld.gold
sudo -u vagrant vagrant plugin install vagrant-libvirt
sudo -u vagrant vagrant plugin install vagrant-cachier
yum install -y qemu-img
sudo -u vagrant vagrant plugin install vagrant-mutate
sudo -u vagrant vagrant plugin list
SCRIPT

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  if Vagrant.has_plugin?("vagrant-cachier")
    config.cache.scope = :box
  end 

  config.vm.define "vagrant-test" do |machine|
    machine.vm.box      = "obnox/fedora21-64-lxc"
    machine.vm.hostname = "vagrant-test"
    machine.vm.provider :lxc do |lxc|
      lxc.container_name = "vagrant-test"
    end
    machine.vm.provision :shell, inline: INSTALL_VAGRANT
  end
end

Summary

So now we are in the position to work with vagrant-lxc and vagrant-libvirt on fedora, and we also have fedora lxc boxes available. I am really intrigued by the ease of creating virtual machines and containers. From here on, the general and provider-specific guides on the net apply.

Here is a mock-up transcript of the steps taken to set up the environment:

# use current download link from https://www.vagrantup.com/downloads.html
wget https://dl.bintray.com/mitchellh/vagrant/vagrant_1.7.2_x86_64.rpm
sudo yum install vagrant_1.7.2_x86_64.rpm
sudo yum install make gcc libvirt-devel qemu-img
sudo yum install lxc lxc-extra lxc-templates
sudo yum install libvirt-daemon-kvm

vagrant plugin install vagrant-lxc
vagrant plugin install vagrant-libvirt
vagrant plugin install vagrant-cachier
vagrant plugin install vagrant-mutate

vagrant box add obnox/fedora21-64-lxc
vagrant box add obnox/fedora20-64-lxc
vagrant box add jimmidyson/fedora21-atomic --provider libvirt
vagrant box add purpleidea/centos-7.0
vagrant box add ubuntu/trusty64
vagrant box repackage ubuntu/trusty64 virtualbox 14.04
mv package.box trusty64.box
vagrant mutate trusty64.box libvirt
...

What next?

I will post follow-up articles about the problems creating Fedora lxc boxes and about the general Vagrantfile mechanism for bringing up complete samba-ctdb clusters. I might also look into using vagrant from sources, potentially looking into fedora packaging, in order to circumvent the discomfort of running a bundled version of ruby, the universe, and everything… ;-)

January 09, 2015 04:26 PM

January 08, 2015

Andreas

Taking your bike on a plane

Here is a totally computer-unrelated post! Several people have asked me how I protect my bike to transport it safely on a plane. One possibility is to use a bike box, but the issue with a box is that airline personnel like big boxes, because they can pile a lot of suitcases on top. I prefer to just wrap the bike in cardboard! Normally I go to a supermarket and ask if they have some cardboard for me; I’m sure they are happy to get rid of some. What you need to bring from home in addition is a piece of rope, zip ties, duct tape, an old towel and a multitool or cutter.

I prepare everything at the supermarket: cut the cardboard for the different pieces of the bike, and put holes in the cardboard for the zip ties (first put duct tape on the cardboard, then make the hole through the duct tape and the cardboard!). Make sure you can still push the bike; the wheels should still turn. In the end I have a small package like this:

Bike on a plane, cardboard collection

It is easy to transport. Either on the back of the bike or on your back ;)

At the airport you remove the pedals and fix the crank. Put the towel over the saddle and fix it with duct tape or a piece of rope. Tie a rope from the saddle to the handlebar so you can carry the bike. This also makes it easier for the airport personnel to lift it. Then cover the bike with cardboard. Some parts are fixed to the bike with zip ties so they can’t move. In the end it normally looks like this:

Bike on a plane, protected with cardboard

If you’re on a bike trip you normally have 4 bike panniers with you, but the airline only allows you to check in one piece of luggage. Either the airport offers to wrap them, or you go to a grocery store and buy plastic food wrap. It is very thin, so you need about 60m. It is not really eco-friendly, but it is the only way I know. Suggestions are welcome.

First start to wrap the two biggest panniers:

bike on a plane, panniers.

I use a rope to connect them and make sure not to lose one. After the two are in good shape (ca. 25m), put the smaller panniers on the sides and start to wrap the whole thing:

bike on a plane, wrapped panniers

Have a safe trip!


January 08, 2015 06:46 PM

December 20, 2014

Michael

taming the thinkpad’s terrible touchpad

After many years of working with X-series thinkpads, I have come to love these devices. Great keyboard, powerful while very portable, durable, and so on. But I am especially an addict of the trackpoint. It allows me to use the mouse from time to time without having to move my fingers away from the typing position. The x230 was the first model I used that additionally features a touchpad. Well, I hate these touchpads! I keep moving the mouse pointer with the balls of my thumbs while typing, which is particularly irritating since I have my system configured to “focus-follows-mouse”. With the x230 that was not a big problem, because I could simply disable the touchpad in the BIOS and keep using the trackpoint and the three mouse buttons that are positioned between keyboard and touchpad. So far so good.

For three weeks now, since my start at Red Hat, I have been using an x240. It is again really nicely done. Great display, very powerful, awesome battery life, … But Lenovo has imho committed an unspeakable sin with the change to the touchpad: the separate mouse buttons are gone, and instead there are soft keys integrated into regions of the touchpad. Not only are the buttons much harder to hit, since the areas are much harder to feel with the fingertips than the comparatively thick buttons of the earlier models, but the really insane consequence for me is that I can’t disable the touchpad in the BIOS, since that also disables the buttons! This rendered the laptop almost unusable unless docked, with external mouse and keyboard. It was a real pain. :-(

But two days ago GLADIAC THE SAVIOR gave me the decisive hint: set the TouchpadOff option of synaptics to the value 1. Synaptics is, as I learned, the Xorg X11 touchpad driver, and this option disables the touchpad except for the button functionality. Exactly what I need. With a little bit of research I found out that my brand new Fedora 21 supports this out of the box. Because I am still finding my way around fedora, I only needed to find the proper place to add the option. As it turns out,

/usr/share/X11/xorg.conf.d/50-synaptics.conf

is the appropriate file, and I added the option to the section of “Lenovo TrackPoint top software buttons”.
Here is the complete patch that saved me:

--- /usr/share/X11/xorg.conf.d/50-synaptics.conf.ORIG 2014-12-18 22:53:18.454197721 +0100
+++ /usr/share/X11/xorg.conf.d/50-synaptics.conf 2014-12-19 09:03:44.143825508 +0100
@@ -57,13 +57,14 @@
 
 # Some devices have the buttons on the top of the touchpad. For those, set
 # the secondary button area to exactly that.
 # Affected: All Haswell Lenovos and *431* models
 #
 # Note the touchpad_softbutton_top tag is a temporary solution, we're working
 # on a more permanent solution upstream (likely adding INPUT_PROP_TOPBUTTONPAD)
 Section "InputClass"
         Identifier "Lenovo TrackPoint top software buttons"
         MatchDriver "synaptics"
         MatchTag "touchpad_softbutton_top"
         Option "HasSecondarySoftButtons" "on"
+        Option "TouchpadOff" "1"
 EndSection

Now I can enjoy working with the undocked thinkpad again!

Thanks gladiac! :-)

And thanks of course to the developers of the synaptics driver…

December 20, 2014 11:21 PM

December 19, 2014

Michael

tmux with screen-like key-bindings

I recently started switching from screen to tmux for my daily workflow, partly triggered by the increasing use of tmate for pair-programming sessions.

For that purpose I wanted the key bindings to be as similar as possible to the ones I am used to from screen, which mostly involves changing the prefix (hotkey) from Ctrl-b to Ctrl-a. This is achieved in the tmux configuration file ~/.tmux.conf by

set-option -g prefix C-a

Now in screen, you can send the hotkey through to the application by typing Ctrl-a followed by a plain a. I use this frequently, e.g. for sending Ctrl-a to the shell prompt instead of pos1. Tmux offers the send-prefix command specifically for this purpose, which can be bound to a key. My ~/.tmux.conf file already contained

bind-key a send-prefix

According to the tmux manual page, this complete snippet should make it work:


set-option -g prefix C-a
unbind-key C-b
bind-key C-a send-prefix

but it was not working for me! :-(

Entering the command bind-key C-a send-prefix in tmux interactively (command prompt triggered with Ctrl-a :) worked, though. After experimenting a bit, I found out that the order of options seems to matter here: once I put the bind-key a send-prefix line before the set-option -g prefix C-a one, it started working. So here is the complete snippet that works for me:


bind-key a send-prefix
unbind-key C-b
set-option -g prefix C-a

I am not sure whether this is some speciality of my new fedora. On a debian system, the problem does not seem to exist...

In order to complete the screen-like mapping of Ctrl-a, let me mention that bind-key C-a last-window lets a double Ctrl-a toggle between the two most recently used tmux windows. So here is the complete part of my config with respect to setting Ctrl-a as the hotkey:


bind-key a send-prefix
unbind-key C-b
set-option -g prefix C-a
bind-key C-a last-window

Note that the placement of the last-window setting does not make a difference.
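
To test such changes without restarting tmux, the configuration can be reloaded into the running server and the resulting bindings inspected, e.g.:

tmux source-file ~/.tmux.conf
tmux list-keys | grep -e send-prefix -e last-window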

December 19, 2014 11:27 AM

December 10, 2014

David

Accénts & Ümlauts - A Custom Keyboard Layout on Linux

As a native English speaker living in Germany, I need to be able to reach the full Germanic alphabet without using long key combinations or (gasp) resorting to a German keyboard layout.

Accents and umlauts on US keyboards aren't only useful for expats. They're also enjoyed (or abused) by a number of English speaking subcultures:

  • Hipsters: "This is such a naïve café."
  • Metal heads: "Did you hear Spın̈al Tap are touring with Motörhead this year?"
  • Teenage gamers: "über pwnage!"

The standard US system keyboard layout can be enhanced to offer German characters via the following key mappings:
Key   Key + Shift   Key + AltGr (Right Alt)   Key + AltGr + Shift
e     E             é                         É
u     U             ü                         Ü
o     O             ö                         Ö
a     A             ä                         Ä
s     S             ß                         ß
5     %             €

With openSUSE 13.2, this can be configured by first defining the mappings in /usr/share/X11/xkb/symbols/us_de:

partial default alphanumeric_keys
xkb_symbols "basic" {
    name[Group1]= "US/ASCII";
    include "us"

    key <AD03> {[e, E, eacute, Eacute]};
    key <AD07> {[u, U, udiaeresis, Udiaeresis]};
    key <AD09> {[o, O, odiaeresis, Odiaeresis]};
    key <AC01> {[a, A, adiaeresis, Adiaeresis]};
    key <AC02> {[s, S, ssharp, ssharp]};
    key <AE05> {[NoSymbol, NoSymbol, EuroSign]};

    key <RALT> {type[Group1]="TWO_LEVEL",
                [ISO_Level3_Shift, ISO_Level3_Shift]};

    modifier_map Mod5 {<RALT>};
};

Secondly, specify the keyboard layout as the system default in /etc/X11/xorg.conf.d/00-keyboard.conf:
Section "InputClass"
Identifier "system-keyboard"
MatchIsKeyboard "on"
Option "XkbLayout" "us_de"
EndSection

Achtung!: IBus may be configured to override the system keyboard layout - ensure this is not the case in IBus Preferences.
Once in place, the key mappings can be easily modified to suit specific tastes or languages - viel Spaß!
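
To try the layout in the running X session before making it the system default, setxkbmap can load it directly (assuming the symbols file is named us_de as above):

setxkbmap us_de
setxkbmap -query    # verify the active layout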

December 10, 2014 08:56 PM

October 30, 2014

Michael

A nice little chat on IRC

Day changed to 30 Okt 2014
(13:09) <metze> 217b4b60fd602bad569e7e06adabe308e23c5807
(13:22) <obnox> 105724073300af03eb0835b3c93d9b2e2bfacb07
(13:22) <obnox> rw = brl_get_locks_internal(talloc_tos(), fsp, false);
(13:28) <obnox> 5d8f64c47d02c2aa58f3f0c87903bbd41d086aa0
(13:30) <metze> 77d2c5af511d60b3437b9cfa2113283ed2aa6194
(13:33) <obnox> 29bf737
(13:34) <obnox> d0290a9
(13:34) <obnox> 4e9b4a5
(13:34) <obnox> 7a6cf35
(13:53) <metze> dd33241d29122721edd35e2d53992939ef556c1a
(13:57) <obnox> 5633861c4b6f22b09a9fa0bf91fdeb43b4f3558b
...

:-D

October 30, 2014 03:44 PM

October 29, 2014

Andreas

resolv_wrapper 1.0.0 – the new cwrap tool

I’ve released a new preloadable wrapper named resolv_wrapper, which can be used for nameserver redirection or DNS response faking. It can be used in a testing environment to route DNS queries to a real nameserver separate from the one in resolv.conf, or to fake responses with a simple config file. We tested it on Linux, FreeBSD and Solaris. It should work on other UNIX flavors too.

resolv_wrapper

You can download resolv_wrapper here.
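
As a quick sketch of the nameserver-redirection mode (file and program names are just examples), you point the wrapper at an alternative resolv.conf and preload it into the client under test:

echo "nameserver 127.0.0.10" > test_resolv.conf
LD_PRELOAD=libresolv_wrapper.so RESOLV_WRAPPER_CONF=./test_resolv.conf ./your_test_client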


October 29, 2014 11:24 AM

October 23, 2014

Michael

Powershell Cheat Sheet

Here are a few Powershell commands I used for testing and analysis of SMB3 Multi-Channel:

Get-SmbConnection
Get-SmbMapping
Get-SmbClientNetworkInterface
Get-SmbServerNetworkInterface
Get-SmbMultiChannelConnection [-IncludeNotSelected]
Update-SmbMultiChannelConnection

Especially the “-IncludeNotSelected” switch to Get-SmbMultiChannelConnection is useful when debugging connection problems with Multi-Channel setups.

Interestingly, by appending a format pipe to these commands, one can produce more output, e.g.:

Get-SmbMultiChannelConnection | fl * | more

(fl is an alias for Format-List.)
This seems slightly strange at first but is actually quite handy.

October 23, 2014 09:21 PM

October 22, 2014

Michael

Demo of SMB3 multi-channel with Samba

Version 3 of the SMB protocol was introduced by Microsoft with Windows Server 2012. One of the compelling features is called “multi-channel” and gives the client the possibility to bind multiple transport connections to a single SMB session, essentially implementing client-controlled link aggregation at the level of the SMB protocol. The purpose of this is (at least) twofold: increasing throughput, since the client can spread I/O load across multiple physical network links, and fault tolerance, because the SMB session stays functional as long as at least one channel is still functional.

A Samba implementation of multi-channel is still work in progress, but it is already rather advanced.

Here is a screencast demo of how that already works with latest WIP code:

Demo of SMB3 Multi-Channel with Samba from Michael Adam on Vimeo.

The original video can also be downloaded from my samba.org space:
download Video

Note on implementation

One of the challenges in Samba’s implementation was the 1:1 correspondence between TCP connections and smbd processes in Samba’s traditional design. To avoid the hassle of handling operations spread across multiple processes for one session, maybe even on one file, we implemented a mechanism to transfer a TCP socket from one smbd to another. We don’t transfer the socket in the SessionSetup call that binds the connection as a channel to the existing session; we already transfer it in the NegotiateProtocol call, i.e. the first SMB request on the new connection. This way, we don’t need to transfer any complicated state, but only the socket. We find the smbd process to pass the connection to based on the ClientGUID, an identifier that a client sends if it speaks SMB 2.1 or newer. So we have effectively established a per-ClientGUID single-process model.

Here is a graphic of how establishing a multi-channel session works in Samba:

Samba Multi-Channel design

Note on performance

At first sight this single-process design might seem to incur a bad performance penalty compared to Samba’s original mechanism of smbd child processes corresponding 1:1 to TCP connections. But this is not the case, since the smbd process fans out to multiple CPUs by using a pool of worker threads (pthread_pool) for the I/O operations (most notably reads and writes).

The code

Some preparations are already upstream in Samba’s master branch, like fd-passing using the new unix-datagram messaging system and the preparation for having multiple TCP connections in one smbd by introduction of the smbXsrv_client structure.

The full code used in the demos can be found in Stefan Metzmacher’s and my master3-multi-channel branches on git.samba.org:

https://git.samba.org/?p=metze/samba/wip.git;a=shortlog;h=refs/heads/master3-multi-channel

https://git.samba.org/?p=obnox/samba/samba-obnox.git;a=shortlog;h=master3-multi-channel

October 22, 2014 10:38 PM

October 21, 2014

Michael

git: rebasing all commits of a branch

I have been searching for this feature a bit, so I note it down here for easier retrieval…

Interactive rebasing (git rebase -i) is one of the most awesome things about git. Usually my call pattern is git rebase -i COMMIT1, as sketched below. This presents all commits of the current branch after COMMIT1 in the rebase editor, i.e. with this syntax you always need to name the parent of the first commit you want to edit. But sometimes one needs to interactively rebase all commits of a branch, e.g. when preparing a new branch before publishing it.
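
To make the usual call pattern concrete (COMMIT1 stands for any commit reference, e.g. a hash):

git rebase -i COMMIT1   # edit everything after, but not including, COMMIT1
git rebase -i HEAD~3    # same idea: edit the last three commits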

After playing with inserting an “x false” as the topmost line in the rebase editor, which works, I have now found that git (of course :-) ) has a syntax for this:

git rebase -i --root

Great!

October 21, 2014 11:28 AM

October 07, 2014

Andreas

A talk about cwrap at LinuxCon Europe

Next week is the LinuxCon Europe in Düsseldorf, Germany. I will be there and give a talk about cwrap, a set of tools to make client/server testing easy on a single machine. Testing network applications correctly is hard. This talk will demonstrate how to create a fully isolated network environment for client and server testing on a single host, complete with synthetic account information, hostname resolution, and privilege separation.

I hope you will attend my talk if you are there. If you can’t attend LinuxCon Europe but you’re going to the Linux Plumbers Conference, then say hello and let’s talk about cwrap there!

At the LinuxCon Europe I will announce new cool stuff and the website will be updated. So you should check

http://cwrap.org/

next week!

cwrap talk


October 07, 2014 01:31 PM

October 06, 2014

David

Samba and Snapper: Previous Versions with Windows Explorer

Snapper is a neat application for managing snapshots atop a Btrfs filesystem.

The upcoming release of Samba 4.2 will offer integration with Snapper, providing the ability to expose snapshots to remote Windows clients using the previous versions feature built into Windows Explorer, as demonstrated in the following video:

The feature can be enabled on a per-share basis in smb.conf, e.g.:
...
[share]
vfs objects = snapper
path = /mnt/btrfs_fs

The share path must correspond to a Btrfs subvolume, and have an associated Snapper configuration. Additionally, Snapper must be configured to grant users snapshot access - see the vfs_snapper man page for more details.
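
If the subvolume does not have a Snapper configuration yet, creating one and taking a first snapshot goes roughly like this (the config name btrfs_share is just an example):

sudo snapper -c btrfs_share create-config /mnt/btrfs_fs
sudo snapper -c btrfs_share create --description "before changes"

Snapshots created this way should then appear in the Previous Versions tab of the share in Windows Explorer.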

Many thanks to:
  • Arvin Schnell and other Snapper developers.
  • My colleagues at SUSE Linux, and fellow Samba Team members, for supporting my implementation efforts.
  • Kdenlive developers, for writing a great video editing suite.

October 06, 2014 08:45 PM

September 04, 2014

Andreas

How to get real DNS resolving in ‘make test’?

As you might know, I’m working (hacking) on Samba. Samba has its own DNS implementation to integrate all the AD features more easily. The problem is that in ‘make test’ we would like to talk to Samba’s DNS server, while /etc/resolv.conf points to the nameserver that makes your machine work correctly in your network environment. For this, we implemented a way in Samba’s DNS resolver library to set up a dns_hosts_file to fake DNS queries. This works well for binaries provided by Samba, but not for 3rd party applications. As Günther Deschner and I are currently working on MIT Kerberos support, the libkrb5 library always complained that it is not able to query the DNS server to find the KDC. So it was time to really fix this!

I sat down and did some research on how to get this working. After digging through the glibc code, I first thought we could redirect the fopen(“/etc/resolv.conf”) call. Well, as this is called in a glibc-internal function, it directly calls _IO_fopen(), which isn’t a weak symbol. So I looked deeper and recognized that I have access to the resolver structure which holds the information about the nameserver. I could simply modify this!

It was time to implement another wrapper: resolv_wrapper. Currently it only wraps the functions required by Samba and MIT Kerberos: res_(n)init(), res_(n)close(), res_(n)query() and res_(n)search(). With this I was able to run kinit, which asks the DNS server for a SRV record to find the KDC, and it worked. Together with Jakub Hrozek I cleaned up the code yesterday, and we created a parser for a resolv.conf file.
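
To give an idea of how this looks in use, here is a rough sketch (the hosts-file contents and names below are illustrative): a hosts-style file provides the fake records, and the wrapper is preloaded into the unmodified client:

cat > fake_dns.txt <<EOF
SRV _kerberos._udp.example.com kdc.example.com 88
A kdc.example.com 127.0.0.10
EOF
LD_PRELOAD=libresolv_wrapper.so RESOLV_WRAPPER_HOSTS=./fake_dns.txt kinit user@EXAMPLE.COM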

Here is a tcpdump of the kinit tool talking to the DNS server with socket_wrapper over IPv6.

resolv_wrapper will be available on cwrap.org soon!


September 04, 2014 07:32 AM

August 22, 2014

Jelmer

Autonomous Shard Distributed Databases

Distributed databases are hard. Distributed databases where you don't have full control over which shards run which version of your software are even harder, because it becomes near impossible to deal with fallout when things go wrong. For lack of a better term (is there one?), I'll refer to these databases as Autonomous Shard Distributed Databases.

Distributed version control systems are an excellent example of such databases. They store file revisions and commit metadata in shards ("repositories") controlled by different people.

Because of the nature of these systems, it is hard to weed out corrupt data if all shards ignorantly propagate broken data. There will be different people on different platforms running the database software that manages the individual shards.

This makes it hard - if not impossible - to deploy software updates to all shards of a database in a reasonable amount of time (though a Chrome-like update mechanism might help here, if that was acceptable). This has consequences for the way in which you have to deal with every change to the database format and model.

(e.g. imagine introducing a modification to the Linux kernel Git repository that required everybody to install a new version of Git).

Defensive programming and a good format design from the start are essential.

Git and its database format do really well in all of these regards. As I wrote in my retrospective, Bazaar has made a number of mistakes in this area, and that was a major source of user frustration.

I propose that every autonomous shard distributed database should aim for the following:

  • For the "base" format, keep it as simple as you possibly can. (KISS)

    The simpler the format, the smaller the chance of mistakes in the design that have to be corrected later. Similarly, it reduces the chances of mistakes in any implementation(s).

    In particular, there is no need for every piece of metadata to be a part of the core database format.

    (in the case of Git, I would argue that e.g. "author" might as well be a "meta-header")

  • Corruption should be detected early and not propagated. This means there should be good tools to sanity check a database, and ideally some of these checks should be run automatically during everyday operations - e.g. when pushing changes to others or receiving them.

  • If corruption does occur, there should be a way for as much of the database as possible to be recovered.

    A couple of corrupt objects should not render the entire database unusable.

    There should be tools for low-level access of the database, but the format and structure should be also documented well enough for power users to understand it, examine and extract data.

  • No "hard" format changes (where clients /have/ to upgrade to access the new format).

    Not all users will instantly update to the latest and greatest version of the software. The lifecycle of enterprise Linux distributions is long enough that it might take three or four years for the majority of users to upgrade.

  • Keep performance data like indexes in separate files. This makes it possible for older software to still read the data, albeit at a slower pace, and/or generate older format index files.

  • New shards of the database should replicate the entire database if at all possible; having more copies of the data can't hurt if other shards go away or get corrupted.

    Having the data locally available also means users get quicker access to more data.

  • Extensions to the database format that require hard format changes (think e.g. submodules) should only impact databases that actually use those extensions.

  • Leave some room for structured arbitrary metadata, which gets propagated but that not all clients need to be able to understand and can safely ignore.

    (think fields like "Signed-Off-By", "Reviewed-By", "Fixes-Bug", etc) in git commit metadata headers, or the revision metadata fields in Bazaar.

August 22, 2014 06:00 PM

August 19, 2014

Rusty

POLLOUT doesn’t mean write(2) won’t block: Part II

My previous discovery that poll() indicating an fd was writable didn’t mean write() wouldn’t block led to some interesting discussion on Google+.

It became clear that there is much confusion over read and write; e.g. Linus thought read() was like write(), whereas I thought (prior to my last post) that write() was like read(). Both wrong…

Both Linux and v6 UNIX always returned from read() once data was available (v6 didn’t have sockets, but they had pipes). POSIX even suggests this:

The value returned may be less than nbyte if the number of bytes left in the file is less than nbyte, if the read() request was interrupted by a signal, or if the file is a pipe or FIFO or special file and has fewer than nbyte bytes immediately available for reading.

But write() is different. Presumably so that simple UNIX filters didn’t have to check the return value and loop (they’d just die with EPIPE anyway), write() tries hard to write all the data before returning. And that leads to a simple rule. Quoting Linus:

Sure, you can try to play games by knowing socket buffer sizes and look at pending buffers with SIOCOUTQ etc, and say “ok, I can probably do a write of size X without blocking” even on a blocking file descriptor, but it’s hacky, fragile and wrong.

I’m travelling, so I built an Ubuntu-compatible kernel with a printk() into select() and poll() to see who else was making this mistake on my laptop:

cups-browsed: (1262): fd 5 poll() for write without nonblock
cups-browsed: (1262): fd 6 poll() for write without nonblock
Xorg: (1377): fd 1 select() for write without nonblock
Xorg: (1377): fd 3 select() for write without nonblock
Xorg: (1377): fd 11 select() for write without nonblock

This first one is actually OK; fd 5 is an eventfd (which should never block). But the rest seem to be sockets, and thus probably bugs.

What’s worse is the Linux select() man page:

       A file descriptor is considered ready if it is possible to
       perform the corresponding I/O operation (e.g., read(2)) without
       blocking.
       ... those in writefds will be watched to see if a write will
       not block...

And poll():

	POLLOUT
		Writing now will not block.

Man page patches have been submitted…

August 19, 2014 01:57 PM

August 18, 2014

Jelmer

Using Propellor for configuration management

For a while, I've been wanting to set up configuration management for my home network. With half a dozen servers, a VPS and a workstation it is not big, but large enough to make it annoying to manually log into each machine for network-wide changes.

Most of the servers I have are low-end ARM machines, each responsible for a couple of tasks. Most of my machines run Debian or something derived from Debian. Oh, and I'm a member of the declarative school of configuration management.

Propellor

Propellor caught my eye earlier this year. Unlike some other configuration management tools, it doesn't come with its own custom language but it is written in Haskell, which I am already familiar with. It's also fairly simple, declarative, and seems to do most of the handful of things that I need.

Propellor is essentially a Haskell application that you customize for your site. It works very similarly to e.g. xmonad, where you write a bit of Haskell code for configuration which uses the upstream library code. When you run the application, it takes your code and builds a binary from your code and the upstream libraries.

Each host on which Propellor is used keeps a clone of the site-local Propellor git repository in /usr/local/propellor. Every time propellor runs (either because of a manual "spin", or from a cronjob it can set up for you), it fetches updates from the main site-local git repository, compiles the Haskell application and runs it.

Setup

Propellor was surprisingly easy to set up. Running propellor creates a clone of the upstream repository under ~/.propellor with a README file and some example configuration. I copied config-simple.hs to config.hs, updated it to reflect one of my hosts and within a few minutes I had a basic working propellor setup.

You can use ./propellor <host> to trigger a run on a remote host.

At the moment I have propellor working for some basic things - having certain Debian packages installed, a specific network configuration, mail setup, basic Kerberos configuration and certain SSH options set. This took surprisingly little time to set up, and it's been great being able to take full advantage of Haskell.

Propellor comes with convenience functions for dealing with some commonly used packages, such as Apt, SSH and Postfix. For a lot of the other packages, you'll have to roll your own for now. I've written some extra code to make Propellor deal with Kerberos keytabs and Dovecot, which I hope to submit upstream.

I don't have a lot of experience with other Free Software configuration management tools such as Puppet and Chef, but for my use case Propellor works very well.

The main disadvantage of propellor for me so far is that it needs to build itself on each machine it runs on. This is fine for my workstation and high-end servers, but it is somewhat more problematic on e.g. my Raspberry Pi's. Compilation takes a while, and the Haskell compiler and libraries it needs amount to 500Mb worth of disk space on the tiny root partition.

In order to work with Propellor, some Haskell knowledge is required. The Haskell in the configuration file is reasonably easy to understand if you keep it simple, but once the compiler spits out error messages then I suspect you'll have a hard time without any Haskell knowledge.

Propellor relies on having a central repository with the configuration that it can pull from as root. Unlike Joey, I am wary of publishing the configuration of my home network and I don't have a highly available local git server setup.

August 18, 2014 09:15 PM

August 15, 2014

Chris

RAGBRAI 2014

The long, straight, and typically empty Iowa roads were crowded with bicycles. We took up both lanes, and anyone foolish enough to drive a car or truck into our midst found themselves moving at about 12mph (19-ish kph).


Here I am at the side of the road sporting my Samba Team jersey (with the old-school logo).

  • 490 miles (788km) over 7 days
  • Five metric centuries (≥100km/day)
  • One century (≥100mi/day)
  • More pie, pork chops, and sports drink consumed than I can measure.

By the way, Iowa is not flat.  It has gently rolling hills, which are beautiful when you are riding in a car, and a constant challenge when you are on a bike.  There’s also the wind…

August 15, 2014 04:48 PM

August 02, 2014

Rusty

ccan/io: revisited

There are numerous C async I/O libraries; tevent being the one I’m most familiar with.  Yet, tevent has a very wide API, and programs using it inevitably descend into “callback hell”.  So I wrote ccan/io.

The idea is that each I/O callback returns a “struct io_plan” which says what I/O to do next, and what callback to call.  Examples are “io_read(buf, len, next, next_arg)” to read a fixed number of bytes, and “io_read_partial(buf, lenp, next, next_arg)” to perform a single read.  You could also write your own, such as pettycoin’s “io_read_packet()” which read a length then allocated and read in the rest of the packet.

This should enable a convenient debug mode: you turn each io_read() etc. into synchronous operations and now you have a nice callchain showing what happened to a file descriptor.  In practice, however, debug was painful to use and a frequent source of bugs inside ccan/io, so I never used it for debugging.

And I became less happy when I used it in anger for pettycoin, but at some point you’ve got to stop procrastinating and start producing code, so I left it alone.

Now I’ve revisited it.   820 insertions(+), 1042 deletions(-) and the code is significantly less hairy, and the API a little simpler.  In particular, writing the normal “read-then-write” loops is still very nice, while doing full duplex I/O is possible, but more complex.  Let’s see if I’m still happy once I’ve merged it into pettycoin…

August 02, 2014 06:58 AM

July 29, 2014

Rusty

Pettycoin Alpha01 Tagged

As with all software, it took longer than I expected, but today I tagged the first version of pettycoin. Now, lots more polish and features, but at least there’s something more than the git repo for others to look at!

July 29, 2014 07:53 AM

Last updated: January 26, 2015 12:00 PM
