Planet Samba

Here you will find the personal blogs of Samba developers (for those that keep them). More information about members can also be found on the Samba Team page.

December 19, 2014


tmux with screen-like key-bindings

I recently started switching from screen to tmux for my daily workflow, partly triggered by the increasing use of tmate for pair-programming sessions.

For that purpose, I wanted the key-bindings to be as similar as possible to the ones I am used to from screen, which mostly means changing the prefix (hotkey) from Ctrl-b to Ctrl-a. This is achieved in the tmux configuration file ~/.tmux.conf by

set-option -g prefix C-a

Now in screen, you can send the hotkey through to the application by typing Ctrl-a followed by a plain a. I use this frequently, e.g. for sending Ctrl-a to the shell prompt to jump to the beginning of the line instead of using the Home (Pos1) key. Tmux offers the send-prefix command specifically for this purpose, which can be bound to a key. My ~/.tmux.conf file already contained

bind-key a send-prefix

According to the tmux manual page, this complete snippet should make it work:

set-option -g prefix C-a
unbind-key C-b
bind-key C-a send-prefix

but it was not working for me! :-(

Entering the command bind-key C-a send-prefix interactively in tmux (command prompt triggered with Ctrl-a :) worked, though. After experimenting a bit, I found that the order of options seems to matter here: once I put the send-prefix binding before the set-option -g prefix C-a line, it started working. So here is the complete snippet that works for me:

bind-key C-a send-prefix
unbind-key C-b
set-option -g prefix C-a

I am not sure whether this is some peculiarity of my new Fedora installation. On a Debian system, the problem does not seem to exist...

In order to complete the screen-like mapping of Ctrl-a, let me mention that bind-key C-a last-window lets a double Ctrl-a toggle between the two most recently used tmux windows. So here is the complete part of my config with respect to setting Ctrl-a as the hotkey:

bind-key C-a send-prefix
unbind-key C-b
set-option -g prefix C-a
bind-key C-a last-window

Note that the placement of the last-window setting does not make a difference.

December 19, 2014 11:27 AM

December 10, 2014


Accénts & Ümlauts - A Custom Keyboard Layout on Linux

As a native English speaker living in Germany, I need to be able to reach the full Germanic alphabet without using long key combinations or (gasp) resorting to a German keyboard layout.

Accents and umlauts on US keyboards aren't only useful for expats. They're also enjoyed (or abused) by a number of English speaking subcultures:

  • Hipsters: "This is such a naïve café."
  • Metal heads: "Did you hear Spın̈al Tap are touring with Motörhead this year?"
  • Teenage gamers: "über pwnage!"

The standard US system keyboard layout can be enhanced to offer German characters via the following key mappings:
Key   Key + Shift   Key + AltGr (Right Alt)   Key + AltGr + Shift
e     E             é                         É
u     U             ü                         Ü
o     O             ö                         Ö
a     A             ä                         Ä
s     S             ß                         ß
5     %             €

With openSUSE 13.2, this can be configured by first defining the mappings in /usr/share/X11/xkb/symbols/us_de:
partial default alphanumeric_keys
xkb_symbols "basic" {
    name[Group1]= "US/ASCII";
    include "us"

    key <AD03> {[e, E, eacute, Eacute]};
    key <AD07> {[u, U, udiaeresis, Udiaeresis]};
    key <AD09> {[o, O, odiaeresis, Odiaeresis]};
    key <AC01> {[a, A, adiaeresis, Adiaeresis]};
    key <AC02> {[s, S, ssharp, ssharp]};
    key <AE05> {[NoSymbol, NoSymbol, EuroSign]};

    key <RALT> {type[Group1]="TWO_LEVEL",
                [ISO_Level3_Shift, ISO_Level3_Shift]};

    modifier_map Mod5 {<RALT>};
};

Secondly, specify the keyboard layout as the system default in /etc/X11/xorg.conf.d/00-keyboard.conf:
Section "InputClass"
    Identifier "system-keyboard"
    MatchIsKeyboard "on"
    Option "XkbLayout" "us_de"
EndSection

Achtung! IBus may be configured to override the system keyboard layout - ensure this is not the case in the IBus Preferences.
Once in place, the key mappings can be easily modified to suit specific tastes or languages - viel Spaß!

December 10, 2014 08:56 PM

October 30, 2014


A nice little chat on IRC

Day changed to 30 Oct 2014
(13:09) <metze> 217b4b60fd602bad569e7e06adabe308e23c5807
(13:22) <obnox> 105724073300af03eb0835b3c93d9b2e2bfacb07
(13:22) <obnox> rw = brl_get_locks_internal(talloc_tos(), fsp, false);
(13:28) <obnox> 5d8f64c47d02c2aa58f3f0c87903bbd41d086aa0
(13:30) <metze> 77d2c5af511d60b3437b9cfa2113283ed2aa6194
(13:33) <obnox> 29bf737
(13:34) <obnox> d0290a9
(13:34) <obnox> 4e9b4a5
(13:34) <obnox> 7a6cf35
(13:53) <metze> dd33241d29122721edd35e2d53992939ef556c1a
(13:57) <obnox> 5633861c4b6f22b09a9fa0bf91fdeb43b4f3558b


October 30, 2014 03:44 PM

October 29, 2014


resolv_wrapper 1.0.0 – the new cwrap tool

I’ve released a new preloadable wrapper named resolv_wrapper, which can be used for nameserver redirection or DNS response faking. In a testing environment, it can route DNS queries to a real nameserver other than the one listed in resolv.conf, or fake responses with a simple config file. We tested it on Linux, FreeBSD and Solaris. It should work on other UNIX flavors too.


You can download resolv_wrapper here.


October 29, 2014 11:24 AM

October 23, 2014


PowerShell Cheat Sheet

Here are a few PowerShell commands I used for testing and analysis of SMB3 Multi-Channel:

Get-SmbMultiChannelConnection [-IncludeNotSelected]

The “-IncludeNotSelected” switch to Get-SmbMultiChannelConnection is especially useful when debugging connection problems with Multi-Channel setups.

Interestingly, by appending a format pipe to these commands, one can produce more detailed output, e.g.:

Get-SmbMultiChannelConnection | fl * | more

(fl is an alias for Format-List.)
This seems slightly strange at first but is actually quite handy.

October 23, 2014 09:21 PM

October 22, 2014


Demo of SMB3 multi-channel with Samba

Version 3 of the SMB protocol was introduced by Microsoft with Windows Server 2012. One of the compelling features is called “multi-channel” and gives the client the possibility to bind multiple transport connections to a single SMB session, essentially implementing client-controlled link aggregation at the level of the SMB protocol. The purpose of this is (at least) twofold: increasing throughput, since the client can spread I/O load across multiple physical network links, and fault-tolerance, because the SMB session will remain functional as long as at least one channel is still functional.

A Samba implementation of multi-channel is still work in progress, but already rather advanced.

Here is a screencast demo of how that already works with latest WIP code:

Demo of SMB3 Multi-Channel with Samba from Michael Adam on Vimeo.

The original video can also be downloaded from my space:
download Video

Note on implementation

One of the challenges in Samba’s implementation was the 1:1 correspondence between TCP connections and smbd processes in Samba’s traditional design. Now, to avoid the hassle of handling operations spread across multiple processes for one session, maybe even on one file, we implemented a mechanism to transfer a TCP socket from one smbd to another. We don’t transfer the socket in the SessionSetup call that binds the connection as a channel to the existing session, but already in the NegotiateProtocol call, i.e. the first SMB request on the new connection. This way, we don’t need to transfer any complicated state, but only the socket. We find the smbd process to pass the connection to based on the ClientGUID, an identifier that a client sends if it speaks SMB 2.1 or newer. So we have effectively established a per-ClientGUID single-process model.

Here is a graphic of how establishing a multi-channel session works in Samba:

Samba Multi-Channel design

Note on performance

At first sight, this single-process design might seem to carry a bad performance penalty compared to Samba’s original mechanism of smbd child processes corresponding 1:1 to TCP connections. But this is not the case, since the smbd process fans out to multiple CPUs by using a pool of worker threads (pthread_pool) for the I/O operations (most notably reads and writes).

The code

Some preparations are already upstream in Samba’s master branch, like fd-passing using the new unix-datagram messaging system and the preparation for having multiple TCP connections in one smbd by introduction of the smbXsrv_client structure.

The full code used in the demos can be found in Stefan Metzmacher’s and my master3-multi-channel branches.

October 22, 2014 10:38 PM

October 21, 2014


git: rebasing all commits of a branch

I have been searching for this feature a bit, so I note it down here for easier retrieval…

Interactive rebasing (git rebase -i) is one of the most awesome things about git. Usually my call pattern is git rebase -i COMMIT1. This presents all commits of the current branch after COMMIT1 in the rebase editor; i.e., with this syntax, you always need to name the parent of the first commit you want to see in the rebase. But sometimes one needs to interactively rebase all commits of a branch, e.g. when preparing a new branch before publishing it.

After playing with inserting a “x false” as the topmost line in the rebase editor, which works, I now found that git (of course :-) ) has a syntax for this:

git rebase -i --root


October 21, 2014 11:28 AM

October 07, 2014


A talk about cwrap at LinuxCon Europe

Next week is the LinuxCon Europe in Düsseldorf, Germany. I will be there and give a talk about cwrap, a set of tools to make client/server testing easy on a single machine. Testing network applications correctly is hard. This talk will demonstrate how to create a fully isolated network environment for client and server testing on a single host, complete with synthetic account information, hostname resolution, and privilege separation.

I hope you will attend my talk if you are there. If you can’t attend the LinuxCon Europe, but you’re going to the Linux Plumbers Conference, then say hello and let’s talk about cwrap there!

At the LinuxCon Europe I will announce new cool stuff and the website will be updated. So you should check the website next week!

cwrap talk


October 07, 2014 01:31 PM

October 06, 2014


Samba and Snapper: Previous Versions with Windows Explorer

Snapper is a neat application for managing snapshots atop a Btrfs filesystem.

The upcoming release of Samba 4.2 will offer integration with Snapper, providing the ability to expose snapshots to remote Windows clients using the previous versions feature built into Windows Explorer, as demonstrated in the following video:

The feature can be enabled on a per-share basis in smb.conf, e.g.:
vfs objects = snapper
path = /mnt/btrfs_fs

The share path must correspond to a Btrfs subvolume, and have an associated Snapper configuration. Additionally, Snapper must be configured to grant users snapshot access - see the vfs_snapper man page for more details.

Many thanks to:
  • Arvin Schnell and other Snapper developers.
  • My colleagues at SUSE Linux, and fellow Samba Team members, for supporting my implementation efforts.
  • Kdenlive developers, for writing a great video editing suite.

October 06, 2014 08:45 PM

September 04, 2014


How to get real DNS resolving in ‘make test’?

As you might know, I’m working (hacking) on Samba. Samba has a DNS implementation to more easily integrate all the AD features. The problem is that we would like to talk to Samba’s DNS server, while /etc/resolv.conf points to the nameserver that keeps your machine working correctly in your network environment. For this, we implemented a way in Samba’s DNS resolver library to set up a dns_hosts_file to fake DNS queries. This works well for binaries provided by Samba, but not for 3rd-party applications. As Günther Deschner and I are currently working on MIT Kerberos support, the libkrb5 library always complained that it is not able to query the DNS server to find the KDC. So it was time to really fix this!

I sat down and did some research on how to get this working. After digging through the glibc code, I first thought we could redirect the fopen(“/etc/resolv.conf”) call. But as this is called in a glibc-internal function, it directly calls _IO_fopen(), which isn’t a weak symbol. So I looked deeper and recognized that I have access to the resolver structure, which holds the information about the nameserver. I could simply modify this!

It was time to implement another wrapper: resolv_wrapper. Currently it only wraps the functions required by Samba and MIT Kerberos: res_(n)init(), res_(n)close(), res_(n)query() and res_(n)search(). With this I was able to run kinit, which asks the DNS server for a SRV record to find the KDC, and it worked. Yesterday, Jakub Hrozek and I cleaned up the code and created a parser for a resolv.conf file.

Here is a tcpdump of the kinit tool talking to the DNS server with socket_wrapper over IPv6.

resolv_wrapper will be available soon!


September 04, 2014 07:32 AM

August 22, 2014


Autonomous Shard Distributed Databases

Distributed databases are hard. Distributed databases where you don't have full control over which shards run which version of your software are even harder, because it becomes near impossible to deal with fallout when things go wrong. For lack of a better term (is there one?), I'll refer to these databases as Autonomous Shard Distributed Databases.

Distributed version control systems are an excellent example of such databases. They store file revisions and commit metadata in shards ("repositories") controlled by different people.

Because of the nature of these systems, it is hard to weed out corrupt data if all shards ignorantly propagate broken data. There will be different people on different platforms running the database software that manages the individual shards.

This makes it hard - if not impossible - to deploy software updates to all shards of a database in a reasonable amount of time (though a Chrome-like update mechanism might help here, if that was acceptable). This has consequences for the way in which you have to deal with every change to the database format and model.

(e.g. imagine introducing a modification to the Linux kernel Git repository that required everybody to install a new version of Git).

Defensive programming and a good format design from the start are essential.

Git and its database format do really well in all of these regards. As I wrote in my retrospective, Bazaar has made a number of mistakes in this area, and that was a major source of user frustration.

I propose that every autonomous shard distributed database should aim for the following:

  • For the "base" format, keep it as simple as you possibly can. (KISS)

    The simpler the format, the smaller the chance of mistakes in the design that have to be corrected later. Similarly, it reduces the chances of mistakes in any implementation(s).

    In particular, there is no need for every piece of metadata to be a part of the core database format.

    (in the case of Git, I would argue that e.g. "author" might as well be a "meta-header")

  • Corruption should be detected early and not propagated. This means there should be good tools to sanity check a database, and ideally some of these checks should be run automatically during everyday operations - e.g. when pushing changes to others or receiving them.

  • If corruption does occur, there should be a way for as much of the database as possible to be recovered.

    A couple of corrupt objects should not render the entire database unusable.

    There should be tools for low-level access to the database, but the format and structure should also be documented well enough for power users to understand it, examine it, and extract data.

  • No "hard" format changes (where clients /have/ to upgrade to access the new format).

    Not all users will instantly update to the latest and greatest version of the software. The lifecycle of enterprise Linux distributions is long enough that it might take three or four years for the majority of users to upgrade.

  • Keep performance data like indexes in separate files. This makes it possible for older software to still read the data, albeit at a slower pace, and/or generate older format index files.

  • New shards of the database should replicate the entire database if at all possible; having more copies of the data can't hurt if other shards go away or get corrupted.

    Having the data locally available also means users get quicker access to more data.

  • Extensions to the database format that require hard format changes (think e.g. submodules) should only impact databases that actually use those extensions.

  • Leave some room for structured arbitrary metadata, which gets propagated but which not all clients need to understand and can safely ignore.

    (think of fields like "Signed-Off-By", "Reviewed-By", "Fixes-Bug", etc. in git commit metadata headers, or the revision metadata fields in Bazaar)

August 22, 2014 06:00 PM

August 19, 2014


POLLOUT doesn’t mean write(2) won’t block: Part II

My previous discovery that poll() indicating an fd was writable didn’t mean write() wouldn’t block led to some interesting discussion on Google+.

It became clear that there is much confusion over read and write; e.g. Linus thought read() was like write(), whereas I thought (prior to my last post) that write() was like read(). Both wrong…

Both Linux and v6 UNIX always returned from read() once data was available (v6 didn’t have sockets, but they had pipes). POSIX even suggests this:

The value returned may be less than nbyte if the number of bytes left in the file is less than nbyte, if the read() request was interrupted by a signal, or if the file is a pipe or FIFO or special file and has fewer than nbyte bytes immediately available for reading.

But write() is different. Presumably so that simple UNIX filters didn’t have to check the return value and loop (they’d just die with EPIPE anyway), write() tries hard to write all the data before returning. And that leads to a simple rule.  Quoting Linus:

Sure, you can try to play games by knowing socket buffer sizes and look at pending buffers with SIOCOUTQ etc, and say “ok, I can probably do a write of size X without blocking” even on a blocking file descriptor, but it’s hacky, fragile and wrong.

I’m travelling, so I built an Ubuntu-compatible kernel with a printk() into select() and poll() to see who else was making this mistake on my laptop:

cups-browsed: (1262): fd 5 poll() for write without nonblock
cups-browsed: (1262): fd 6 poll() for write without nonblock
Xorg: (1377): fd 1 select() for write without nonblock
Xorg: (1377): fd 3 select() for write without nonblock
Xorg: (1377): fd 11 select() for write without nonblock

This first one is actually OK; fd 5 is an eventfd (which should never block). But the rest seem to be sockets, and thus probably bugs.

What’s worse is the Linux select() man page:

       A file descriptor is considered ready if it is possible to
       perform the corresponding I/O operation (e.g., read(2)) without
       ... those in writefds will be watched to see if a write will
       not block...

And poll():

		Writing now will not block.

Man page patches have been submitted…

August 19, 2014 01:57 PM

August 18, 2014


Using Propellor for configuration management

For a while, I've been wanting to set up configuration management for my home network. With half a dozen servers, a VPS and a workstation it is not big, but large enough to make it annoying to manually log into each machine for network-wide changes.

Most of the servers I have are low-end ARM machines, each responsible for a couple of tasks. Most of my machines run Debian or something derived from Debian. Oh, and I'm a member of the declarative school of configuration management.


Propellor caught my eye earlier this year. Unlike some other configuration management tools, it doesn't come with its own custom language but it is written in Haskell, which I am already familiar with. It's also fairly simple, declarative, and seems to do most of the handful of things that I need.

Propellor is essentially a Haskell application that you customize for your site. It works very similarly to e.g. xmonad: you write a bit of Haskell configuration code that uses the upstream library code. When you run the application, it takes your code and builds a binary from it and the upstream libraries.

Each host on which Propellor is used keeps a clone of the site-local Propellor git repository in /usr/local/propellor. Every time propellor runs (either because of a manual "spin", or from a cronjob it can set up for you), it fetches updates from the main site-local git repository, compiles the Haskell application and runs it.


Propellor was surprisingly easy to set up. Running propellor creates a clone of the upstream repository under ~/.propellor with a README file and some example configuration. I copied config-simple.hs to config.hs, updated it to reflect one of my hosts and within a few minutes I had a basic working propellor setup.

You can use ./propellor <host> to trigger a run on a remote host.

At the moment I have propellor working for some basic things - having certain Debian packages installed, a specific network configuration, mail setup, basic Kerberos configuration and certain SSH options set. This took surprisingly little time to set up, and it's been great being able to take full advantage of Haskell.

Propellor comes with convenience functions for dealing with some commonly used packages, such as Apt, SSH and Postfix. For a lot of the other packages, you'll have to roll your own for now. I've written some extra code to make Propellor deal with Kerberos keytabs and Dovecot, which I hope to submit upstream.

I don't have a lot of experience with other Free Software configuration management tools such as Puppet and Chef, but for my use case Propellor works very well.

The main disadvantage of Propellor for me so far is that it needs to build itself on each machine it runs on. This is fine for my workstation and high-end servers, but it is somewhat more problematic on e.g. my Raspberry Pis. Compilation takes a while, and the Haskell compiler and libraries it needs amount to 500MB of disk space on the tiny root partition.

In order to work with Propellor, some Haskell knowledge is required. The Haskell in the configuration file is reasonably easy to understand if you keep it simple, but once the compiler spits out error messages then I suspect you'll have a hard time without any Haskell knowledge.

Propellor relies on having a central repository with the configuration that it can pull from as root. Unlike Joey, I am wary of publishing the configuration of my home network, and I don't have a highly available local git server set up.

August 18, 2014 09:15 PM

August 15, 2014



The long, straight, and typically empty Iowa roads were crowded with bicycles. We took up both lanes, and anyone foolish enough to drive a car or truck into our midst found themselves moving at about 12mph (19-ish kph).

Here I am at the side of the road sporting my Samba Team jersey (with the old-school logo).

  • 490 miles (788km) over 7 days
  • Five metric centuries (≥100km/day)
  • One century (≥100mi/day)
  • More pie, pork chops, and sports drink consumed than I can measure.

By the way, Iowa is not flat.  It has gently rolling hills, which are beautiful when you are riding in a car, and a constant challenge when you are on a bike.  There’s also the wind…

August 15, 2014 04:48 PM

August 02, 2014


ccan/io: revisited

There are numerous C async I/O libraries; tevent being the one I’m most familiar with.  Yet, tevent has a very wide API, and programs using it inevitably descend into “callback hell”.  So I wrote ccan/io.

The idea is that each I/O callback returns a “struct io_plan” which says what I/O to do next, and what callback to call.  Examples are “io_read(buf, len, next, next_arg)” to read a fixed number of bytes, and “io_read_partial(buf, lenp, next, next_arg)” to perform a single read.  You could also write your own, such as pettycoin’s “io_read_packet()” which read a length then allocated and read in the rest of the packet.

This should enable a convenient debug mode: you turn each io_read() etc. into synchronous operations and now you have a nice callchain showing what happened to a file descriptor.  In practice, however, debug was painful to use and a frequent source of bugs inside ccan/io, so I never used it for debugging.

And I became less happy when I used it in anger for pettycoin, but at some point you’ve got to stop procrastinating and start producing code, so I left it alone.

Now I’ve revisited it.   820 insertions(+), 1042 deletions(-) and the code is significantly less hairy, and the API a little simpler.  In particular, writing the normal “read-then-write” loops is still very nice, while doing full duplex I/O is possible, but more complex.  Let’s see if I’m still happy once I’ve merged it into pettycoin…

August 02, 2014 06:58 AM

July 29, 2014


Pettycoin Alpha01 Tagged

As with all software, it took longer than I expected, but today I tagged the first version of pettycoin.  Now for lots more polish and features, but at least there’s something more than the git repo for others to look at!

July 29, 2014 07:53 AM

July 21, 2014


What is preloading?

by Jakub Hrozek and Andreas Schneider

The LD_PRELOAD trick!

Preloading is a feature of the dynamic linker (ld). It is available on most Unix systems and allows loading a user-specified shared library before all other shared libraries linked to an executable.

Library pre-loading is most commonly used when you need a custom version of a library function to be called. You might want to implement your own malloc(3) and free(3) functions that perform rudimentary leak checking or memory access control, for example, or you might want to extend the I/O calls to dump data when reverse engineering a binary blob. In this case, the library to be preloaded would implement the functions you want to override with preloading. Only functions of dynamically loaded libraries can be overridden. You’re not able to override a function the application implements by itself or links statically.

The library to preload is defined by the environment variable LD_PRELOAD. The symbols of the preloaded library are bound first, before those of other linked shared libraries.

Let’s look into symbol binding in more detail. If your application calls a function, the linker first looks whether it is available in the application itself. If the symbol is not found, the linker checks all preloaded libraries, and only then all the libraries which have been linked to your application. The shared libraries are searched in the order given during compilation and linking. You can find out the linking order by calling 'ldd /path/to/my/application'. If you’re interested in how the linker searches for the symbols it needs, or if you want to debug whether the symbol of your preloaded library is used correctly, you can enable tracing in the linker.

A simple example would be 'LD_DEBUG=symbols ls'. You can find more details about linker debugging in the linker's manpage.


An example: your application uses the function open(2).

  • Your application doesn’t implement open(2) itself.
  • The preloaded library provides open(2).
  • The linked libc also provides open(2).

=> The open(2) symbol from the preloaded library gets bound!

The wrappers used for creating complex testing environments of the cwrap project use preloading to supply their own variants of several system or library calls suitable for unit testing of networked software or privilege separation. For example, one wrapper includes its version of most of the standard API used to communicate over sockets that routes the communication over local sockets.


July 21, 2014 10:38 AM

July 17, 2014


API Bug of the Week: getsockname().

A “non-blocking” IPv6 connect() call was, in fact, blocking.  Tracking that down made me realize the IPv6 address was mostly random garbage, which was caused by this function:

bool get_fd_addr(int fd, struct protocol_net_address *addr)
{
   union {
      struct sockaddr sa;
      struct sockaddr_in in;
      struct sockaddr_in6 in6;
   } u;
   socklen_t len = sizeof(len);
   if (getsockname(fd, &u.sa, &len) != 0)
      return false;

The bug: “sizeof(len)” should be “sizeof(u)”.  But when presented with a too-short length, getsockname() truncates, and otherwise “succeeds”; you have to check the resulting len value to see what you should have passed.

Obviously an error return would be better here, but the writable len arg is pretty useless: I don’t know of any callers who check the length return and do anything useful with it.  Provide getsocklen() for those who do care, and have getsockname() take a size_t as its third arg.

Oh, and the blocking?  That was because I was calling “fcntl(fd, F_SETFD, …)” instead of “F_SETFL”!

July 17, 2014 03:31 AM

July 09, 2014


Samba AD DC in Fedora and RHEL

Several people have asked me about the status of the Active Directory Domain Controller support of Samba in Fedora. As Fedora and RHEL use MIT Kerberos as their Kerberos infrastructure of choice, the Samba Active Directory Domain Controller implementation is not available with MIT Kerberos at the moment. But we are working on it!

Günther Deschner and I gave a talk at the SambaXP conference about our development efforts in this direction:

The road to MIT KRB5 support

I hope this helps to understand that this is a huge task.


July 09, 2014 09:05 AM

July 02, 2014


Samba Server-Side Copy Offload

I recently implemented server-side copy offload support for Samba 4.1, along with Btrfs filesystem specific enhancements. This video compares server-side copy performance with traditional copy methods.

A few notes on the demonstration:
  • The Windows Server 2012 client and Samba server are connected via an old 100 Mb/s switch, which obviously acts as a network throughput bottleneck in the traditional copy demonstration.
  • The Samba server runs the 4.1.0 code-base, plus an extra patch to disable server-side copy requests on the regular share.

Many thanks to:
  • My colleagues at SUSE Linux, for supporting my implementation efforts.
  • The Samba Team, particularly Metze and Jeremy, for reviewing the code.
  • Kdenlive developers, for writing a great video editing suite.

Update (July, 2014): Usage is now fully documented on the Samba Wiki.

    July 02, 2014 06:21 PM

    Last updated: December 19, 2014 02:00 PM

