Samba

Planet Samba

Here you will find the personal blogs of Samba developers (for those that keep them). More information about members can also be found on the Samba Team page.

October 23, 2014

Michael

Powershell Cheat Sheet

Here are a few PowerShell commands I used for testing and analysis of SMB3 Multi-Channel:

Get-SmbConnection
Get-SmbMapping
Get-SmbClientNetworkInterface
Get-SmbServerNetworkInterface
Get-SmbMultiChannelConnection [-IncludeNotSelected]
Update-SmbMultiChannelConnection

The “-IncludeNotSelected” switch to Get-SmbMultiChannelConnection is especially
useful when debugging connection problems in Multi-Channel setups.

Interestingly, by piping these commands through a formatter, one can
produce more detailed output, e.g.:

Get-SmbMultiChannelConnection | fl * | more

(fl is an alias for Format-List.)
This seems slightly strange at first but is actually quite handy.

October 23, 2014 09:21 PM

October 22, 2014

Michael

Demo of SMB3 multi-channel with Samba

Version 3 of the SMB protocol was introduced by Microsoft with Windows Server 2012. One of the compelling features is called “multi-channel” and gives the client the possibility to bind multiple transport connections to a single SMB session, essentially implementing client-controlled link aggregation at the level of the SMB protocol. The purpose of this is (at least) twofold: increasing throughput, since the client can spread I/O load across multiple physical network links, and fault tolerance, because the SMB session stays functional as long as at least one channel is still functional.

The Samba implementation of multi-channel is still a work in progress, but it is already rather advanced.

Here is a screencast demo of how this already works with the latest WIP code:

Demo of SMB3 Multi-Channel with Samba from Michael Adam on Vimeo.

The original video can also be downloaded from my samba.org space:
download Video

Note on implementation

One of the challenges for Samba’s implementation was the 1:1 correspondence between TCP connections and smbd processes in Samba’s traditional design. To avoid the hassle of handling operations spread across multiple processes for one session, possibly even for one file, we implemented a mechanism to transfer a TCP socket from one smbd to another. We don’t transfer the socket in the SessionSetup call that binds the connection as a channel to the existing session; we already transfer it in the NegotiateProtocol call, i.e. the first SMB request on the new connection. This way, we don’t need to transfer any complicated state, only the socket. We find the smbd process to pass the connection to based on the ClientGUID, an identifier that a client sends if it speaks SMB 2.1 or newer. So we have effectively established a per-ClientGUID single-process model.

Here is a graphic of how establishing a multi-channel session works in Samba:

Samba Multi-Channel design

Note on performance

At first sight this single-process design might seem to impose a serious performance penalty compared to Samba’s original mechanism of smbd child processes corresponding 1:1 to TCP connections. But this is not the case, since the smbd process fans out to multiple CPUs by using a pool of worker threads (pthread_pool) for the I/O operations (most notably reads and writes).

The code

Some preparations are already upstream in Samba’s master branch, like fd-passing using the new unix-datagram messaging system, and the groundwork for having multiple TCP connections in one smbd through the introduction of the smbXsrv_client structure.

The full code used in the demos can be found in Stefan Metzmacher’s and my master3-multi-channel branches on git.samba.org:

https://git.samba.org/?p=metze/samba/wip.git;a=shortlog;h=refs/heads/master3-multi-channel

https://git.samba.org/?p=obnox/samba/samba-obnox.git;a=shortlog;h=master3-multi-channel

October 22, 2014 10:38 PM

October 21, 2014

Michael

git: rebasing all commits of a branch

I have been searching for this feature a bit, so I note it down here for easier retrieval…

Interactive rebasing (git rebase -i) is one of the most awesome things about git. Usually my call pattern is git rebase -i COMMIT1. This will present all commits of the current branch after COMMIT1 in the rebase editor. That is, with this syntax, you always need to name the parent commit of the first commit you want to rebase. But sometimes one needs to interactively rebase all commits of a branch, e.g. when preparing a new branch before publishing it.

After playing with inserting a “x false” as the topmost line in the rebase editor, which works, I now found that git (of course :-) ) has a syntax for this:

git rebase -i --root

Great!

October 21, 2014 11:28 AM

October 07, 2014

Andreas

A talk about cwrap at LinuxCon Europe

Next week is the LinuxCon Europe in Düsseldorf, Germany. I will be there and give a talk about cwrap, a set of tools to make client/server testing easy on a single machine. Testing network applications correctly is hard. This talk will demonstrate how to create a fully isolated network environment for client and server testing on a single host, complete with synthetic account information, hostname resolution, and privilege separation.

I hope you will attend my talk if you are there. If you can’t make it to LinuxCon Europe but you’re going to the Linux Plumbers Conference, then say hello and let’s talk about cwrap there!

At the LinuxCon Europe I will announce new cool stuff and the website will be updated. So you should check

http://cwrap.org/

next week!

cwrap talk


October 07, 2014 01:31 PM

October 06, 2014

David

Samba and Snapper: Previous Versions with Windows Explorer

Snapper is a neat application for managing snapshots atop a Btrfs filesystem.

The upcoming release of Samba 4.2 will offer integration with Snapper, providing the ability to expose snapshots to remote Windows clients using the previous versions feature built into Windows Explorer, as demonstrated in the following video:

The feature can be enabled on a per-share basis in smb.conf, e.g.:
...
[share]
vfs objects = snapper
path = /mnt/btrfs_fs

The share path must correspond to a Btrfs subvolume, and have an associated Snapper configuration. Additionally, Snapper must be configured to grant users snapshot access - see the vfs_snapper man page for more details.

Many thanks to:
  • Arvin Schnell and other Snapper developers.
  • My colleagues at SUSE Linux, and fellow Samba Team members, for supporting my implementation efforts.
  • Kdenlive developers, for writing a great video editing suite.

October 06, 2014 08:45 PM

September 04, 2014

Andreas

How to get real DNS resolving in ‘make test’?

As you might know, I’m working (hacking) on Samba. Samba has a DNS implementation to ease integration of all the AD features. The problem is that we would like to talk to Samba’s DNS server, but /etc/resolv.conf points to the nameserver that keeps your machine working in your normal network environment. For this, we implemented a way to set up a dns_hosts_file in Samba’s DNS resolver library to fake DNS queries. This works well for binaries provided by Samba, but not for 3rd party applications. As Günther Deschner and I are currently working on MIT Kerberos support, the libkrb5 library always complained that it was not able to query the DNS server to find the KDC. So it was time to really fix this!

I sat down and did some research on how to get this working. After digging through the glibc code, I first thought we could redirect the fopen(“/etc/resolv.conf”) call. But as this happens in a glibc-internal function, it directly calls _IO_fopen(), which isn’t a weak symbol. So I looked deeper and recognized that I have access to the resolver structure which holds the nameserver information. I could simply modify this!

It was time to implement another wrapper: resolv_wrapper. Currently it only wraps the functions required by Samba and MIT Kerberos: res_(n)init(), res_(n)close(), res_(n)query() and res_(n)search(). With this I was able to run kinit, which asks the DNS server for a SRV record to find the KDC, and it worked. Yesterday, Jakub Hrozek and I cleaned up the code and created a parser for resolv.conf files.

Here is a tcpdump of the kinit tool talking to the DNS server with socket_wrapper over IPv6.

resolv_wrapper will be available on cwrap.org soon!


September 04, 2014 07:32 AM

August 19, 2014

Rusty

POLLOUT doesn’t mean write(2) won’t block: Part II

My previous discovery that poll() indicating an fd was writable didn’t mean write() wouldn’t block led to some interesting discussion on Google+.

It became clear that there is much confusion over read and write; e.g. Linus thought read() was like write(), whereas I thought (prior to my last post) that write() was like read(). Both wrong…

Both Linux and v6 UNIX always returned from read() once data was available (v6 didn’t have sockets, but they had pipes). POSIX even suggests this:

The value returned may be less than nbyte if the number of bytes left in the file is less than nbyte, if the read() request was interrupted by a signal, or if the file is a pipe or FIFO or special file and has fewer than nbyte bytes immediately available for reading.

But write() is different. Presumably so simple UNIX filters didn’t have to check the return and loop (they’d just die with EPIPE anyway), write() tries hard to write all the data before returning. And that leads to a simple rule.  Quoting Linus:

Sure, you can try to play games by knowing socket buffer sizes and look at pending buffers with SIOCOUTQ etc, and say “ok, I can probably do a write of size X without blocking” even on a blocking file descriptor, but it’s hacky, fragile and wrong.

I’m travelling, so I built an Ubuntu-compatible kernel with a printk() into select() and poll() to see who else was making this mistake on my laptop:

cups-browsed: (1262): fd 5 poll() for write without nonblock
cups-browsed: (1262): fd 6 poll() for write without nonblock
Xorg: (1377): fd 1 select() for write without nonblock
Xorg: (1377): fd 3 select() for write without nonblock
Xorg: (1377): fd 11 select() for write without nonblock

This first one is actually OK; fd 5 is an eventfd (which should never block). But the rest seem to be sockets, and thus probably bugs.

What’s worse is the Linux select() man page:

       A file descriptor is considered ready if it is possible to
       perform the corresponding I/O operation (e.g., read(2)) without
       blocking.
       ... those in writefds will be watched to see if a write will
       not block...

And poll():

	POLLOUT
		Writing now will not block.

Man page patches have been submitted…

August 19, 2014 01:57 PM

August 18, 2014

Jelmer

Using Propellor for configuration management

For a while, I've been wanting to set up configuration management for my home network. With half a dozen servers, a VPS and a workstation it is not big, but large enough to make it annoying to manually log into each machine for network-wide changes.

Most of the servers I have are low-end ARM machines, each responsible for a couple of tasks. Most of my machines run Debian or something derived from Debian. Oh, and I'm a member of the declarative school of configuration management.

Propellor

Propellor caught my eye earlier this year. Unlike some other configuration management tools, it doesn't come with its own custom language but it is written in Haskell, which I am already familiar with. It's also fairly simple, declarative, and seems to do most of the handful of things that I need.

Propellor is essentially a Haskell application that you customize for your site. It works very similarly to e.g. xmonad: you write a bit of Haskell code for configuration, which uses the upstream library code. When you run the application, it builds a binary from your code and the upstream libraries.

Each host on which Propellor is used keeps a clone of the site-local Propellor git repository in /usr/local/propellor. Every time propellor runs (either because of a manual "spin", or from a cronjob it can set up for you), it fetches updates from the main site-local git repository, compiles the Haskell application and runs it.

Setup

Propellor was surprisingly easy to set up. Running propellor creates a clone of the upstream repository under ~/.propellor with a README file and some example configuration. I copied config-simple.hs to config.hs, updated it to reflect one of my hosts and within a few minutes I had a basic working propellor setup.

You can use ./propellor <host> to trigger a run on a remote host.

At the moment I have propellor working for some basic things - having certain Debian packages installed, a specific network configuration, mail setup, basic Kerberos configuration and certain SSH options set. This took surprisingly little time to set up, and it's been great being able to take full advantage of Haskell.

Propellor comes with convenience functions for dealing with some commonly used packages, such as Apt, SSH and Postfix. For a lot of the other packages, you'll have to roll your own for now. I've written some extra code to make Propellor deal with Kerberos keytabs and Dovecot, which I hope to submit upstream.

I don't have a lot of experience with other Free Software configuration management tools such as Puppet and Chef, but for my use case Propellor works very well.

The main disadvantage of Propellor for me so far is that it needs to build itself on each machine it runs on. This is fine for my workstation and high-end servers, but it is somewhat more problematic on e.g. my Raspberry Pis. Compilation takes a while, and the Haskell compiler and libraries it needs amount to 500MB of disk space on the tiny root partition.

In order to work with Propellor, some Haskell knowledge is required. The Haskell in the configuration file is reasonably easy to understand if you keep it simple, but once the compiler spits out error messages then I suspect you'll have a hard time without any Haskell knowledge.

Propellor relies on having a central repository with the configuration that it can pull from as root. Unlike Joey, I am wary of publishing the configuration of my home network and I don't have a highly available local git server setup.

August 18, 2014 09:15 PM

August 15, 2014

Chris

RAGBRAI 2014

The long, straight, and typically empty Iowa roads were crowded with bicycles. We took up both lanes, and anyone foolish enough to drive a car or truck into our midst found themselves moving at about 12mph (19-ish kph).


Here I am at the side of the road sporting my Samba Team jersey (with the old-school logo).

  • 490 miles (788km) over 7 days
  • Five metric centuries (≥100km/day)
  • One century (≥100mi/day)
  • More pie, pork chops, and sports drink consumed than I can measure.

By the way, Iowa is not flat.  It has gently rolling hills, which are beautiful when you are riding in a car, and a constant challenge when you are on a bike.  There’s also the wind…

August 15, 2014 04:48 PM

August 02, 2014

Rusty

ccan/io: revisited

There are numerous C async I/O libraries; tevent being the one I’m most familiar with.  Yet, tevent has a very wide API, and programs using it inevitably descend into “callback hell”.  So I wrote ccan/io.

The idea is that each I/O callback returns a “struct io_plan” which says what I/O to do next, and what callback to call.  Examples are “io_read(buf, len, next, next_arg)” to read a fixed number of bytes, and “io_read_partial(buf, lenp, next, next_arg)” to perform a single read.  You could also write your own, such as pettycoin’s “io_read_packet()” which read a length then allocated and read in the rest of the packet.

This should enable a convenient debug mode: you turn each io_read() etc. into synchronous operations and now you have a nice callchain showing what happened to a file descriptor.  In practice, however, debug was painful to use and a frequent source of bugs inside ccan/io, so I never used it for debugging.

And I became less happy when I used it in anger for pettycoin, but at some point you’ve got to stop procrastinating and start producing code, so I left it alone.

Now I’ve revisited it.   820 insertions(+), 1042 deletions(-) and the code is significantly less hairy, and the API a little simpler.  In particular, writing the normal “read-then-write” loops is still very nice, while doing full duplex I/O is possible, but more complex.  Let’s see if I’m still happy once I’ve merged it into pettycoin…

August 02, 2014 06:58 AM

July 29, 2014

Rusty

Pettycoin Alpha01 Tagged

As with all software, it took longer than I expected, but today I tagged the first version of pettycoin.  Now, lots more polish and features, but at least there’s something more than the git repo for others to look at!

July 29, 2014 07:53 AM

July 21, 2014

Andreas

What is preloading?

by Jakub Hrozek and Andreas Schneider

The LD_PRELOAD trick!

Preloading is a feature of the dynamic linker (ld.so). It is available on most Unix systems and allows loading a user-specified shared library before all other shared libraries linked to an executable.

Library preloading is most commonly used when you need a custom version of a library function to be called. You might want to implement your own malloc(3) and free(3) functions that perform rudimentary leak checking or memory access control, for example, or you might want to extend the I/O calls to dump data when reverse engineering a binary blob. In this case, the library to be preloaded implements the functions you want to override. Only functions of dynamically loaded libraries can be overridden; you’re not able to override a function the application implements by itself or links statically.

The library to preload is defined by the environment variable LD_PRELOAD, such as LD_PRELOAD=libwurst.so. The symbols of the preloaded library are bound first, before those of the other linked shared libraries.

Let’s look at symbol binding in more detail. If your application calls a function, the linker first checks whether it is available in the application itself. If the symbol is not found, the linker checks all preloaded libraries, and only then the libraries linked to your application, which are searched in the order given at link time. You can find out the linking order by calling 'ldd /path/to/my/application'. If you’re interested in how the linker searches for the symbols it needs, or if you want to debug whether the symbol of your preloaded library is used correctly, you can enable tracing in the linker.

A simple example would be 'LD_DEBUG=symbols ls'. You can find more details about debugging with the linker in the manpage: 'man ld.so'.

Example:

Your application uses the function open(2).

  • Your application doesn’t implement it.
  • LD_PRELOAD=libcwrap.so provides open(2).
  • The linked libc.so provides open(2).

=> The open(2) symbol from libcwrap.so gets bound!

The wrappers used for creating complex testing environments of the cwrap project use preloading to supply their own variants of several system or library calls suitable for unit testing of networked software or privilege separation. For example, one wrapper includes its version of most of the standard API used to communicate over sockets that routes the communication over local sockets.


July 21, 2014 10:38 AM

July 17, 2014

Rusty

API Bug of the Week: getsockname().

A “non-blocking” IPv6 connect() call was, in fact, blocking.  Tracking that down made me realize the IPv6 address was mostly random garbage, which was caused by this function:

bool get_fd_addr(int fd, struct protocol_net_address *addr)
{
   union {
      struct sockaddr sa;
      struct sockaddr_in in;
      struct sockaddr_in6 in6;
   } u;
   socklen_t len = sizeof(len);
   if (getsockname(fd, &u.sa, &len) != 0)
      return false;
   ...
}

The bug: “sizeof(len)” should be “sizeof(u)”.  But when presented with a too-short length, getsockname() truncates, and otherwise “succeeds”; you have to check the resulting len value to see what you should have passed.

Obviously an error return would be better here, but the writable len arg is pretty useless: I don’t know of any callers who check the length return and do anything useful with it.  Provide getsocklen() for those who do care, and have getsockname() take a size_t as its third arg.

Oh, and the blocking?  That was because I was calling “fcntl(fd, F_SETFD, …)” instead of “F_SETFL”!

July 17, 2014 03:31 AM

July 09, 2014

Andreas

Samba AD DC in Fedora and RHEL

Several people have asked me about the status of the Active Directory Domain Controller support of Samba in Fedora. As Fedora and RHEL use the MIT Kerberos implementation as their Kerberos infrastructure of choice, the Samba Active Directory Domain Controller is not available with MIT Kerberos at the moment. But we are working on it!

Günther Deschner and I gave a talk at the SambaXP conference about our development efforts in this direction:

The road to MIT KRB5 support

I hope this helps to understand that this is a huge task.


July 09, 2014 09:05 AM

July 02, 2014

David

Samba Server-Side Copy Offload

I recently implemented server-side copy offload support for Samba 4.1, along with Btrfs filesystem specific enhancements. This video compares server-side copy performance with traditional copy methods.


A few notes on the demonstration:
  • The Windows Server 2012 client and Samba server are connected via an old 100 Mb/s switch, which obviously acts as a network throughput bottleneck in the traditional copy demonstration.
  • The Samba server is based on the 4.1.0 code-base, but includes an extra patch to disable server-side copy requests on the regular share.

Many thanks to:
  • My colleagues at SUSE Linux, for supporting my implementation efforts.
  • The Samba Team, particularly Metze and Jeremy, for reviewing the code.
  • Kdenlive developers, for writing a great video editing suite.

Update (July, 2014): Usage is now fully documented on the Samba Wiki.

July 02, 2014 06:21 PM

June 21, 2014

Rusty

Alternate Blog for my Pettycoin Work

I decided to use github for pettycoin, and tested out their blogging integration (summary: it’s not very integrated, but once set up, Jekyll is nice).  I’m keeping a blow-by-blow development blog over there.

June 21, 2014 12:14 AM

June 16, 2014

Rusty

Rusty Goes on Sabbatical, June to December

At linux.conf.au I spoke about my pre-alpha implementation of Pettycoin, but progress since then has been slow.  That’s partially due to yak shaving (like rewriting the ccan/io library), partially reimplementation of parts I didn’t like, and partially due to the birth of my son, but mainly because I have a day job which involves working on Power 8 KVM issues for IBM.  So Alex convinced me to take 6 months off from the day job, and work 4 days a week on pettycoin.

I’m going to be blogging my progress, so expect several updates a week.  The first few alpha releases will be useless for doing any actual transactions, but by the first beta the major pieces should be in place…

June 16, 2014 08:50 AM

June 11, 2014

David

Using the Azure File Service on Linux

The Microsoft Azure File Service is a new SMB shared-storage service offered on the Microsoft Azure public cloud.

The service allows for the instant provisioning of file shares for private access by cloud provisioned VMs using the SMB 2.1 protocol, and additionally supports public access via a new REST interface.

Linux VMs deployed on Azure can make use of this service using the Linux kernel CIFS client. The kernel client must be configured to support and use the SMB 2.1 protocol dialect:
  • CONFIG_CIFS_SMB2 must be enabled in the kernel configuration at build time. Use
    # zcat /proc/config.gz | grep CONFIG_CIFS_SMB2
    to check this on a running system.
  • The vers=2.1 mount.cifs parameter must be provided at mount time.
  • Furthermore, the Azure storage account and access key must be provided as username and password.

# mount.cifs -o vers=2.1,user=smb //smb.file.core.windows.net/share /share/
Password for smb@//smb.file.core.windows.net/share: ******...
# df -h /share/
Filesystem                        Size  Used Avail Use% Mounted on
//smb.file.core.windows.net/share 5.0T     0  5.0T   0% /share

This feature will be supported with the upcoming release of SUSE Linux Enterprise Server 12, and future openSUSE releases.

Disclaimer: I work in the Labs department at SUSE.

June 11, 2014 09:13 PM

June 07, 2014

Rusty

Donation to Jupiter Broadcasting

Chris Fisher’s Jupiter Broadcasting pod/vodcasting started 8 years ago with the Linux Action Show: still their flagship show, and how I discovered them 3 years ago.  Shows like this give access to FOSS to those outside the LWN-reading crowd; community building can be a thankless task, and as a small shop Chris has had ups and downs along the way.  After listening to them for a few years, I feel a weird bond with this bunch of people I’ve never met.

I regularly listen to Techsnap for security news, Scibyte for science with my daughter, and Unfilter to get an insight into the NSA and what the US looks like from the inside.  I bugged Chris a while back to accept bitcoin donations, and when they did I subscribed to Unfilter for a year at 2 BTC.  To congratulate them on reaching the 100th Unfilter episode, I repeated that donation.

They’ve started doing new and ambitious things, like Linux HOWTO, so I know they’ll put the funds to good use!

June 07, 2014 11:45 PM

June 02, 2014

Andreas

New features in socket_wrapper 1.1.0

Maybe you have already heard of the cwrap project: a set of tools to create a fully isolated network environment for testing client/server components on a single host. socket_wrapper is a part of cwrap, and I released version 1.1.0 today. In this release I worked together with Michael Adam, and we implemented some nice new features, like support for IP_PKTINFO when binding UDP sockets, bindresvport(), and more socket options via getsockopt(). This was mostly needed to be able to create a test environment for MIT Kerberos.

The upcoming features for the next version are support for passing file descriptors between processes using a unix domain socket and sendmsg()/recvmsg() (SCM_RIGHTS). We would also like to make socket_wrapper thread-safe.

June 02, 2014 02:36 PM

Last updated: October 26, 2014 08:00 AM
