Samba

Planet Samba

Here you will find the personal blogs of Samba developers (for those that keep them). More information about members can also be found on the Samba Team page.

May 25, 2015

David

Using the Azure File Service on Linux

The Microsoft Azure File Service is a new SMB shared-storage service offered on the Microsoft Azure public cloud.

The service allows for the instant provisioning of file shares for private access by cloud-provisioned VMs using the SMB 2.1 protocol, and additionally supports public access via a new REST interface.

Update 2015-05-25: File shares can now also be provisioned from Linux using Elasto.



Linux VMs deployed on Azure can make use of this service using the Linux Kernel CIFS client. The kernel client must be configured to support and use the SMB 2.1 protocol dialect:
  • CONFIG_CIFS_SMB2 must be enabled in the kernel configuration at build time
    • Use
      # zcat /proc/config.gz | grep CONFIG_CIFS_SMB2
      to check this on a running system.
  • The vers=2.1 mount.cifs parameter must be provided at mount time.
  • Furthermore, the Azure storage account and access key must be provided as username and password.

# mount.cifs -o vers=2.1,user=smb //smb.file.core.windows.net/share /share/
Password for smb@//smb.file.core.windows.net/share: ******...
# df -h /share/
Filesystem Size Used Avail Use% Mounted on
//smb.file.core.windows.net/share 5.0T 0 5.0T 0% /share
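
To make the mount persistent across reboots, the same options can go into /etc/fstab. A minimal sketch, assuming a root-only credentials file at /etc/azure-share.cred holding the storage account name and access key (the file path is an illustrative assumption):

# /etc/fstab (sketch)
//smb.file.core.windows.net/share /share cifs vers=2.1,credentials=/etc/azure-share.cred,_netdev 0 0

# /etc/azure-share.cred
username=smb
password=<storage-access-key>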

This feature will be supported with the upcoming release of SUSE Linux Enterprise Server 12, and future openSUSE releases.

Disclaimer: I work in the Labs department at SUSE.

May 25, 2015 02:14 PM

May 23, 2015

David

Azure File Service IO with Elasto on Linux

In an earlier post I described the basics of the Microsoft Azure File Service, and how it can be used on Linux with the cifs.ko kernel client.

Since that time I've been hacking away on the Elasto cloud storage client, to the point that it now (with version 0.6.0) supports Azure File Service share provisioning as well as file and directory IO.


To play with Elasto yourself:
  • Install the packages
  • Download your Azure PublishSettings credentials
  • Run
    elasto_cli -s Azure_PublishSettings_File -u afs://
Keep in mind that Elasto is still far from mature, so don't be surprised if it corrupts your data or causes a fire.
With the warning out of the way, I'd like to thank:
  • My employer SUSE Linux, for supporting my Elasto development efforts during Hack Week.
  • Samba Experience conference organisers, for giving me the chance to talk about the project.
  • Kdenlive developers, for writing great video editing software.

May 23, 2015 11:12 PM

April 30, 2015

Rusty

Some bitcoin mempool data: first look

Previously I discussed the use of IBLTs (on the pettycoin blog).  Kalle and I got some interesting, but slightly different results; before I revisited them I wanted some real data to play with.

Finally, a few weeks ago I ran 4 nodes for a week, logging incoming transactions and the contents of the mempools when we saw a block.  This gives us some data to chew on when tuning any fast block sync mechanism; here are my first impressions looking at the data (which is available on github).

These graphs are my first look; in blue is the number of txs in the block, and in purple stacked on top is the number of txs which were left in the mempool after we took those away.

The good news is that all four sites are very similar; there’s small variance across these nodes (three are in Digital Ocean data centres and one is behind two NATs and a wireless network at my local coworking space).

The bad news is that there are spikes of very large mempools around block 352,800; a series of 731kb blocks which I’m guessing is some kind of soft limit for some mining software [EDIT: 750k is the default soft block limit; reported in 1024-byte quantities as blockchain.info does, this is 732k.  Thanks sipa!].  Our ability to handle this case will depend very much on heuristics for guessing which transactions are likely candidates to be in the block at all (I’m hoping it’s as simple as first-seen transactions are most likely, but I haven’t tested yet).

[Graph: Transactions in Mempool and in Blocks: Australia (poor connection)]

[Graph: Transactions in Mempool and in Blocks: Singapore]

[Graph: Transactions in Mempool and in Blocks: San Francisco]

[Graph: Transactions in Mempool and in Blocks: San Francisco (using Relay Network)]

April 30, 2015 12:26 PM

April 14, 2015

Chris

Samba in the Wild

Every now and again, Samba shows up somewhere unexpected.
Here it is on a sidewalk:

[Photo: Samba on the Sidewalk]

Here it is again at a restaurant:

April 14, 2015 07:01 PM

April 08, 2015

Rusty

Lightning Networks Part IV: Summary

This is the fourth part of my series of posts explaining the bitcoin Lightning Networks 0.5 draft paper.  See Part I, Part II and Part III.

The key revelation of the paper is that we can have a network of arbitrarily complicated transactions, such that they aren’t on the blockchain (and thus are fast, cheap and extremely scalable), but at every point are ready to be dropped onto the blockchain for resolution if there’s a problem.  This is genuinely revolutionary.

It also vindicates Satoshi’s insistence on the generality of the Bitcoin scripting system.  And though it’s long been suggested that bitcoin would become a clearing system on which genuine microtransactions would be layered, it was unclear that we were so close to having such a system in bitcoin already.

Note that the scheme requires some solution to malleability to allow chains of transactions to be built (this is a common theme, so likely to be mitigated in a future soft fork), but Gregory Maxwell points out that it also wants selective malleability, so transactions can be replaced without invalidating the HTLCs which are spending their outputs.  Thus it proposes new signature flags, which will require active debate, analysis and another soft fork.

There is much more to discover in the paper itself: recommendations for lightning network routing, the node charging model, a risk summary, the specifics of the softfork changes, and more.

I’ll leave you with a brief list of requirements to make Lightning Networks a reality:

  1. A soft-fork is required, to protect against malleability and to allow new signature modes.
  2. A new peer-to-peer protocol needs to be designed for the lightning network, including routing.
  3. Blame and rating systems are needed for lightning network nodes.  You don’t have to trust them, but it sucks if they go down as your money is probably stuck until the timeout.
  4. More refinements (eg. relative OP_CHECKLOCKTIMEVERIFY) to simplify and tighten timeout times.
  5. Wallets need to learn to use this, with UI handling of things like timeouts and fallbacks to the bitcoin network (sorry, your transaction failed, you’ll get your money back in N days).
  6. You need to be online every 40 days to check that an old HTLC hasn’t leaked, which will require some alternate solution for occasional users (shut down channel, have some third party, etc).
  7. A server implementation needs to be written.

That’s a lot of work!  But it’s all simply engineering from here, just as bitcoin was once the paper was released.  I look forward to seeing it happen (and I’m confident it will).

April 08, 2015 03:59 AM

April 06, 2015

Rusty

Lightning Networks Part III: Channeling Contracts

This is the third part of my series of posts explaining the bitcoin Lightning Networks 0.5 draft paper.

In Part I I described how a Poon-Dryja channel uses a single in-blockchain transaction to create off-blockchain transactions which can be safely updated by either party (as long as both agree), with fallback to publishing the latest versions to the blockchain if something goes wrong.

In Part II I described how Hashed Timelocked Contracts allow you to safely make one payment conditional upon another, so payments can be routed across untrusted parties using a series of transactions with decrementing timeout values.

Now we’ll join the two together: encapsulate Hashed Timelocked Contracts inside a channel, so they don’t have to be placed in the blockchain (unless something goes wrong).

Revision: Why Poon-Dryja Channels Work

Here’s half of a channel setup between me and you where I’m paying you 1c: (there’s always a mirror setup between you and me, so it’s symmetrical)

[Diagram: Half a channel: we will invalidate transaction 1 (in favour of a new transaction 2) to send funds.]

The system works because after we agree on a new transaction (eg. to pay you another 1c), you revoke this by handing me your private keys to unlock that 1c output.  Now if you ever released Transaction 1, I can spend both the outputs.  If we want to add a new output to Transaction 1, we need to be able to make it similarly stealable.

Adding a 1c HTLC Output To Transaction 1 In The Channel

I’m going to send you 1c now via a HTLC (which means you’ll only get it if the riddle is answered; if it times out, I get the 1c back).  So we replace transaction 1 with transaction 2, which has three outputs: $9.98 to me, 1c to you, and 1c to the HTLC: (once we agree on the new transactions, we invalidate transaction 1 as detailed in Part I)

[Diagram: Our Channel With an Output for an HTLC]

Note that you supply another separate signature (sig3) for this output, so you can reveal that private key later without giving away any other output.

We modify our previous HTLC design so that your revealing sig3 would allow me to steal this output. We do this the same way we did for that 1c going to you: send the output via a timelocked mutually signed transaction.  But there are two transaction paths in an HTLC: the got-the-riddle path and the timeout path, so we need to insert those timelocked mutually signed transactions in both of them.  First let’s append a 1 day delay to the timeout path:

[Diagram: Timeout path of HTLC, with locktime so it can be stolen once you give me your sig3.]

Similarly, we need to append a timelocked transaction on the “got the riddle solution” path, which now needs my signature as well (otherwise you could create a replacement transaction and bypass the timelocked transaction):

[Diagram: Full HTLC: If you reveal Transaction 2 after we agree it’s been revoked, and I have your sig3 private key, I can spend that output before you can, down either the settlement or timeout paths.]

Remember The Other Side?

Poon-Dryja channels are symmetrical, so the full version has a matching HTLC on the other side (except with my temporary keys, so you can catch me out if I use a revoked transaction).  Here’s the full diagram, just to be complete:

[Diagram: A complete lightning network channel with an HTLC, containing a glorious 13 transactions.]

Closing The HTLC

When an HTLC is completed, we just update transaction 2, and don’t include the HTLC output.  The funds either get added to your output (R value revealed before timeout) or my output (timeout).

Note that we can have an arbitrary number of independent HTLCs in progress at once, and open and/or close as many in each transaction update as both parties agree to.

Keys, Keys Everywhere!

Each output for a revocable transaction needs to use a separate address, so we can hand the private key to the other party.  We use two disposable keys for each HTLC[1], and every new HTLC will change one of the other outputs (either mine, if I’m paying you, or yours if you’re paying me), so that needs a new key too.  That’s 3 keys, doubled for the symmetry, to give 6 keys per HTLC.

Adam Back pointed out that we can actually implement this scheme without the private key handover, and instead sign a transaction for the other side which gives them the money immediately.  This would permit more key reuse, but means we’d have to store these transactions somewhere on the off chance we needed them.

Storing just the keys is smaller, but more importantly, Section 6.2 of the paper describes using BIP 32 key hierarchies so the disposable keys are derived: after a while, you only need to store one key for all the keys the other side has given you.  This is vastly more efficient than storing a transaction for every HTLC, and indicates the scale (thousands of HTLCs per second) that the authors have in mind.

Next: Conclusion

My next post will be a TL;DR summary, and some more references to the implementation details and possibilities provided by the paper.

 


[1] The new sighash types are fairly loose, and thus allow you to attach a transaction to a different parent if it uses the same output addresses.  I think we could re-use the same keys in both paths if we ensure that the order of keys required is reversed for one, but we’d still need 4 keys, so it seems a bit too tricky.

April 06, 2015 11:21 AM

April 01, 2015

Rusty

Lightning Networks Part II: Hashed Timelock Contracts (HTLCs)

In Part I, we demonstrated Poon-Dryja channels; a generalized channel structure which used revocable transactions to ensure that old transactions wouldn’t be reused.

A channel from me<->you would allow me to efficiently send you 1c, but that doesn’t scale since it takes at least one on-blockchain transaction to set up each channel. The solution to this is to route funds via intermediaries;  in this example we’ll use the fictitious “MtBox”.

If I already have a channel with MtBox’s Payment Node, and so do you, that lets me reliably send 1c to MtBox without (usually) needing the blockchain, and it lets MtBox send you 1c with similar efficiency.

But it doesn’t give me a way to force them to send it to you; I have to trust them.  We can do better.

Bonding Unrelated Transactions using Riddles

For simplicity, let’s ignore channels for the moment.  Here’s the “trust MtBox” solution:

[Diagram: I send you 1c via MtBox; simplest possible version, using two independent transactions. I trust MtBox to generate its transaction after I send it mine.]

What if we could bond these transactions together somehow, so that when you spend the output from the MtBox transaction, that automatically allows MtBox to spend the output from my transaction?

Here’s one way. You send me a riddle question to which nobody else knows the answer: eg. “What’s brown and sticky?”.  I then promise MtBox the 1c if they answer that riddle correctly, and tell MtBox that you know.

MtBox doesn’t know the answer, so it turns around and promises to pay you 1c if you answer “What’s brown and sticky?”. When you answer “A stick”, MtBox can pay you 1c knowing that it can collect the 1c off me.

The bitcoin blockchain is really good at riddles; in particular “what value hashes to this one?” is easy to express in the scripting language. So you pick a random secret value R, then hash it to get H, then send me H.  My transaction’s 1c output requires MtBox’s signature, and a value which hashes to H (ie. R).  MtBox adds the same requirement to its transaction output, so if you spend it, it can get its money back from me:

[Diagram: Two Independent Transactions, Connected by A Hash Riddle.]
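
In script terms, the hash-locked output might look something like this (a sketch for illustration, not the paper’s exact script; SHA256 is assumed as the hash):

scriptPubKey (my 1c output):           OP_SHA256 <H> OP_EQUALVERIFY <MtBox pubkey> OP_CHECKSIG
scriptSig (MtBox spends, revealing R): <MtBox signature> <R>

Spending the output necessarily publishes R in the blockchain, which is what bonds the two transactions together.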

Handling Failure Using Timeouts

This example is too simplistic; when MtBox’s PHP script stops processing transactions, I won’t be able to get my 1c back if I’ve already published my transaction.  So we use a familiar trick from Part I, a timeout transaction which after (say) 2 days, returns the funds to me.  This output needs both my and MtBox’s signatures, and MtBox supplies me with the refund transaction containing the timeout:

[Diagram: Hash Riddle Transaction, With Timeout]
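
Combining the riddle path with the mutually signed refund path, the output script could be sketched like this (illustrative only; the construction here relies on a pre-signed refund transaction with a locktime 2 days in the future, not an in-script timeout):

OP_IF
    OP_SHA256 <H> OP_EQUALVERIFY <MtBox pubkey> OP_CHECKSIG
OP_ELSE
    2 <my pubkey> <MtBox pubkey> 2 OP_CHECKMULTISIG
OP_ENDIF

MtBox takes the first branch by supplying R (plus a flag selecting that branch); the refund transaction I hold spends the second branch with both signatures once its locktime passes.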

MtBox similarly needs a timeout in case you disappear.  And it needs to make sure it gets the answer to the riddle from you within that 2 days, otherwise I might use my timeout transaction and it can’t get its money back.  To give plenty of margin, it uses a 1 day timeout:

[Diagram: MtBox Needs Your Riddle Answer Before It Can Answer Mine]

Chaining Together

It’s fairly clear to see that longer paths are possible, using the same “timelocked” transactions.  The paper uses 1 day per hop, so if you were 5 hops away (say, me <-> MtBox <-> Carol <-> David <-> Evie <-> you) I would use a 5 day timeout to MtBox, MtBox a 4 day to Carol, etc.  A routing protocol is required, but if some routing doesn’t work two nodes can always cancel by mutual agreement (by creating a timeout transaction with no locktime).

The paper refers to each set of transactions as contracts, with the following terms:

  • If you can produce to MtBox an unknown 20-byte random input data R from a known H, within two days, then MtBox will settle the contract by paying you 1c.
  • If two days have elapsed, then the above clause is null and void and the clearing process is invalidated.
  • Either party may (and should) pay out according to the terms of this contract in any method of the participants choosing and close out this contract early so long as both participants in this contract agree.

The hashing and timelock properties of the transactions are what allow them to be chained across a network, hence the term Hashed Timelock Contracts.

Next: Using Channels With Hashed Timelock Contracts.

The hashed riddle construct is cute, but as detailed above every transaction would need to be published on the blockchain, which makes it pretty pointless.  So the next step is to embed them into a Poon-Dryja channel, so that (in the normal, cooperative case) they don’t need to reach the blockchain at all.

April 01, 2015 11:46 AM

March 30, 2015

Rusty

Lightning Networks Part I: Revocable Transactions

I finally took a second swing at understanding the Lightning Network paper.  The promise of this work is exceptional: instant reliable transactions across the bitcoin network. The implementation is complex and the draft paper reads like a grab bag of ideas, but it truly rewards close reading!  It doesn’t involve novel crypto, nor fancy bitcoin scripting tricks.

There are several techniques which are used in the paper, so I plan to concentrate on one per post and wrap up at the end.

Revision: Payment Channels

[Diagram: I open a payment channel to you for up to $10]

A Payment Channel is a method for sending microtransactions to a single recipient, such as me paying you 1c a minute for internet access.  I create an opening transaction which has a $10 output, which can only be redeemed by a transaction input signed by you and me (or me alone, after a timeout, just in case you vanish).  That opening transaction goes into the blockchain, and we’re sure it’s bedded down.
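
A minimal sketch of that opening output, as a standard 2-of-2 multisig (the me-alone-after-a-timeout fallback, via the OP_CHECKLOCKTIMEVERIFY assumed in the “Notes For Pedants” below, is omitted here):

scriptPubKey ($10 opening output):
    2 <my pubkey> <your pubkey> 2 OP_CHECKMULTISIG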

[Diagram: I pay you 1c in the payment channel. Claim it any time!]

Then I send you a signed transaction which spends that opening transaction output, and has two outputs: one for $9.99 to me, and one for 1c to you.  If you want, you could sign that transaction too, and publish it immediately to get your 1c.

[Diagram: Update: now I pay you 2c via the payment channel.]

Then a minute later, I send you a signed transaction which spends that same opening transaction output, and has a $9.98 output for me, and a 2c output for you. Each minute, I send you another transaction, increasing the amount you get every time.

This works because:

  1.  Each transaction I send spends the same output; so only one of them can ever be included in the blockchain.
  2. I can’t publish them, since they need your signature and I don’t have it.
  3. At the end, you will presumably publish the last one, which is best for you.  You could publish an earlier one, and cheat yourself of money, but that’s not my problem.

Undoing A Promise: Revoking Transactions?

In the simple channel case above, we don’t have to revoke or cancel old transactions, as the only person who can spend them is the person who would be cheated.  This makes the payment channel one way: if the amount I was paying you ever went down, you could simply broadcast one of the older, more profitable transactions.

So if we wanted to revoke an old transaction, how would we do it?

There’s no native way in bitcoin to have a transaction which expires.  You can have a transaction which is valid after 5 days (using locktime), but you can’t have one which is valid until 5 days has passed.

So the only way to invalidate a transaction is to spend one of its inputs, and get that input-stealing transaction into the blockchain before the transaction you’re trying to invalidate.  That’s no good if we’re trying to update a transaction continuously (a-la payment channels) without most of them reaching the blockchain.

The Transaction Revocation Trick

But there’s a trick, as described in the paper.  We build our transaction as before (I sign, and you hold), which spends our opening transaction output, and has two outputs.  The first is a $9.99 output for me.  The second is a bit weird: it’s 1c, but needs two signatures to spend: mine and a temporary one of yours.  Indeed, I create and sign such a transaction which spends this output, and send it to you, but that transaction has a locktime of 1 day:

[Diagram: The first payment in a lightning-style channel.]

Now, if you sign and publish that transaction, I can spend my $9.99 straight away, and you can publish that timelocked transaction tomorrow and get your 1c.

But what if we want to update the transaction?  We create a new transaction, with a $9.98 output to me and a 2c output requiring both my signature and another temporary key of yours.  I create and sign a transaction which spends that 2c output, has a locktime of 1 day and has an output going to you, and send it to you.

We can revoke the old transaction: you simply give me the temporary private key you used for that transaction.  Weird, I know (and that’s why you had to generate a temporary address for it).  Now, if you were ever to sign and publish that old transaction, I can spend my $9.99 straight away, and create a transaction using your key and my key to spend your 1c.  Your transaction (1a below) which could spend that 1c output is timelocked, so I’ll definitely get my 1c transaction into the blockchain first (and the paper uses a timelock of 40 days, not 1).

[Diagram: Updating the payment in a lightning-style channel: you sent me your private key for sig2, so I could spend both outputs of Transaction 1 if you were to publish it.]

So the effect is that the old transaction is revoked: if you were to ever sign and release it, I could steal all the money.  Neat trick, right?

A Minor Variation To Avoid Timeout Fallback

In the original payment channel, the opening transaction had a fallback clause: after some time, it is all spendable by me.  If you stop responding, I have to wait for this to kick in to get my money back.  Instead, the paper uses a pair of these “revocable” transaction structures.  The second is a mirror image of the first, in effect.

[Diagram: A full symmetric, bi-directional payment channel.]

So the first output is $9.99 which needs your signature and a temporary signature of mine.  The second is 1c for you.  You sign the transaction, and I hold it.  You create and sign a transaction which has that $9.99 as input, a 1 day locktime, and send it to me.

Since both your and my “revocable” transactions spend the same output, only one can reach the blockchain.  They’re basically equivalent: if you send yours you must wait 1 day for your money.  If I send mine, I have to wait 1 day for my money.  But it means either of us can finalize the payment at any time, so the opening transaction doesn’t need a timeout clause.

Next…

Now we have a generalized transaction channel, which can spend the opening transaction in any way we both agree on, without trust or requiring on-blockchain updates (unless things break down).

The next post will discuss Hashed Timelock Contracts (HTLCs) which can be used to create chains of payments…

Notes For Pedants:

In the payment channel open I assume OP_CHECKLOCKTIMEVERIFY, which isn’t yet in bitcoin.  It’s simpler.

I ignore transaction fees as an unnecessary distraction.

We need malleability fixes, so you can’t mutate a transaction and break the ones which follow.  But I also need the ability to sign Transaction 1a without a complete Transaction 1 (since you can’t expose the signed version to me).  The paper proposes new SIGHASH types to allow this.

[EDIT 2015-03-30 22:11:59+10:30: We also need to sign the other symmetric transactions before signing the opening transaction.  If we released a completed opening transaction before having the other transactions, we might be stuck with no way to get our funds back (as we don’t have a “return all to me” timeout on the opening transaction)]

March 30, 2015 10:47 AM

March 26, 2015

Andreas

Hunting down a fd closing bug in Samba

In Samba I had a failing test suite. I have nss_wrapper compiled with debug messages turned on, so it showed me the following line:

NWRAP_ERROR(23052) - nwrap_he_parse_line: 3 Invalid line[TDB]: 'DB'

The code should parse a hosts file like /etc/hosts, but the debug line showed that it tried to parse a TDB (Trivial Database) file, Samba’s database backend. I started to investigate it and wondered what was going on. This morning I called Michael Adam and we looked into the issue together. It was obvious that something closed the file descriptor for the hosts file of nss_wrapper, and that the fd was then reused by Samba to open other files. The big question was: what the heck closes the fd? As socket_wrapper was loaded, and it wraps the open() and close() calls, we started to add debug output to the socket_wrapper code.

So first we added debug statements to the open() and close() calls to see when the fd was opened and closed. After that we wanted to see a stacktrace at the close() call, to see in which code path it happens. Here is how to do this:

commit 6c632a4419b6712f975db390145419b008442865
Author:     Andreas Schneider 
AuthorDate: Thu Mar 26 11:07:38 2015 +0100
Commit:     Andreas Schneider 
CommitDate: Thu Mar 26 11:07:59 2015 +0100

    DEBUG stacktrace
---
 src/socket_wrapper.c | 37 +++++++++++++++++++++++++++++++++----
 1 file changed, 33 insertions(+), 4 deletions(-)

diff --git a/src/socket_wrapper.c b/src/socket_wrapper.c
index 1188c4e..cb73cf2 100644
--- a/src/socket_wrapper.c
+++ b/src/socket_wrapper.c
@@ -80,6 +80,8 @@
 #include <rpc/rpc.h>
 #endif
 
+#include <execinfo.h>
+
 enum swrap_dbglvl_e {
 	SWRAP_LOG_ERROR = 0,
 	SWRAP_LOG_WARN,
@@ -303,8 +305,8 @@ static void swrap_log(enum swrap_dbglvl_e dbglvl,
 		switch (dbglvl) {
 			case SWRAP_LOG_ERROR:
 				fprintf(stderr,
-					"SWRAP_ERROR(%d) - %s: %s\n",
-					(int)getpid(), func, buffer);
+					"SWRAP_ERROR(ppid=%d,pid=%d) - %s: %s\n",
+					(int)getppid(), (int)getpid(), func, buffer);
 				break;
 			case SWRAP_LOG_WARN:
 				fprintf(stderr,
@@ -565,10 +567,35 @@ static int libc_bind(int sockfd,
 	return swrap.fns.libc_bind(sockfd, addr, addrlen);
 }
 
+#define BACKTRACE_STACK_SIZE 64
 static int libc_close(int fd)
 {
 	swrap_load_lib_function(SWRAP_LIBC, close);
 
+	if (fd == 21) {
+		void *backtrace_stack[BACKTRACE_STACK_SIZE];
+		size_t backtrace_size;
+		char **backtrace_strings;
+
+		SWRAP_LOG(SWRAP_LOG_ERROR, "fd=%d", fd);
+
+		backtrace_size = backtrace(backtrace_stack,BACKTRACE_STACK_SIZE);
+		backtrace_strings = backtrace_symbols(backtrace_stack, backtrace_size);
+
+		SWRAP_LOG(SWRAP_LOG_ERROR,
+			  "BACKTRACE %lu stackframes",
+			  (unsigned long)backtrace_size);
+
+		if (backtrace_strings) {
+			size_t i;
+
+			for (i = 0; i < backtrace_size; i++) {
+				SWRAP_LOG(SWRAP_LOG_ERROR,
+					" #%lu %s", i, backtrace_strings[i]);
+			}
+		}
+	}
+
 	return swrap.fns.libc_close(fd);
 }
 
@@ -704,6 +731,8 @@ static int libc_vopen(const char *pathname, int flags, va_list ap)
 
 	fd = swrap.fns.libc_open(pathname, flags, (mode_t)mode);
 
+	SWRAP_LOG(SWRAP_LOG_ERROR, "path=%s, fd=%d", pathname, fd);
+
 	return fd;
 }
 

We found out that the code responsible for this created a pipe() to communicate with the child and then forked. The child called close() on the second pipe file descriptor. So when another fork happened in the child, the close() on the pipe file descriptor was called again, and we closed an fd of the process pointing to a tdb file, a connection or something like that. So initializing the pipe fd array with -1, and only calling close() if we have a file descriptor which is not -1, fixed the problem.
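
The fix boils down to a pattern like the following (a minimal sketch of the idea, not the actual Samba patch):

/* Initialize pipe fds as invalid, so cleanup code can tell
 * whether an fd is actually ours to close. */
int p[2] = { -1, -1 };

if (pipe(p) != 0) {
	/* error handling */
}

/* ... later, when the fd is no longer needed ... */
if (p[1] != -1) {
	close(p[1]);
	p[1] = -1;	/* prevent a second close() after another fork */
}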

If you need a better stacktrace you should use libunwind. However socket_wrapper can be a nice little helper to find bugs with file descriptors ;)

BUG: Samba standard process model closes random files when forking more than once


March 26, 2015 01:22 PM

March 23, 2015

Andreas

Android 5 on the Samsung Galaxy Alpha

Another milestone: I got CyanogenMod 12.0 (Android 5.0.1) nearly fully working on the Samsung Galaxy Alpha (SLTE), Exynos version. Video playback is not working yet, but I’m sure it will just be a matter of time …

[Screenshot: Android 5 running on the Galaxy Alpha (SLTE)]

The source code is available here.


March 23, 2015 09:40 PM

February 16, 2015

Andreas

cmocka 1.0

At the beginning of February I attended devconf.cz in Brno, and in the days before it I had a hack week with Jakub Hrozek on cmocka. cmocka is a unit testing framework for C with support for mock objects.

We already rewrote the test runner last year, and it was time to finish it and add support for several different message output formats. You are able to switch between cmocka standard output, Subunit, Test Anything Protocol and jUnit XML reports. In addition, we now have a skip() function and test_realloc() to detect buffer overflows and memory leaks.
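
A minimal test using the new runner API looks roughly like this (a sketch based on the documented cmocka 1.0 API):

#include <stdarg.h>
#include <stddef.h>
#include <stdint.h>
#include <setjmp.h>
#include <cmocka.h>

/* Allocate via test_malloc() so cmocka can detect leaks
 * and buffer overflows for us. */
static void test_buffer(void **state)
{
	char *buf = test_malloc(16);

	(void)state; /* unused */

	assert_non_null(buf);
	test_free(buf);
}

int main(void)
{
	const struct CMUnitTest tests[] = {
		cmocka_unit_test(test_buffer),
	};

	return cmocka_run_group_tests(tests, NULL, NULL);
}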

You can find all other required information on the overhauled shiny new website: http://cmocka.org


February 16, 2015 01:50 PM

February 15, 2015

Michael

vagrant with lxc and libvirt in fedora – reprise

This is just a short follow-up to my previous post on the topic. Through Josef’s comment, I got to know of the existing packaging effort for vagrant and vagrant-libvirt in fedora, but it seemed to have stalled somewhat. With all the problems in getting the upstream RPMs to behave nicely (installing vagrant-libvirt and whatnot), I wanted to give that packaging a serious look and decided I wanted to try to bring it forward in the light of the upcoming branching for Fedora 22. Here was a real chance to get vagrant into the next version of Fedora.

What should I say? This project sidetracked me for longer than I intended — after all, packaging is always a tedious and time-consuming task. But I did the reviews of vagrant and vagrant-libvirt, thereby doing an update of the vagrant-libvirt RPM to the latest version 0.0.24 of vagrant-libvirt, and I also created a package for vagrant-lxc. The result of the whole effort is that packages for vagrant, vagrant-libvirt and vagrant-lxc are now available in Fedora, i.e. they will be in the next release, Fedora 22. But not only that — all three packages are also available in the updates for Fedora 21 already now!

So in order to install vagrant with lxc and libvirt on Fedora 21, it does not take more than this:

sudo dnf|yum update
sudo dnf|yum install vagrant vagrant-libvirt vagrant-lxc

There is one convenience bit shipped with the vagrant-libvirt-doc package: It is a policykit rules file that allows the members of the vagrant group to use libvirt without having to authenticate for each operation. We initially wanted to ship that file installed in the main package, but this does not seem to be the accepted approach. The discussion was started in the vagrant-libvirt packaging bug 1168333 and is continued in a bug 1187019 of its own. Here are the instructions to install that convenience mechanism manually for the time being:

sudo dnf|yum install vagrant-libvirt-doc
sudo cp /usr/share/vagrant/gems/doc/vagrant-libvirt-0.0.24/polkit/10-vagrant-libvirt.rules /usr/share/polkit-1/rules.d/
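
For reference, the rule essentially grants members of the vagrant group the libvirt management permission; it looks roughly like this (a sketch; the shipped file may differ in detail):

polkit.addRule(function(action, subject) {
    if (action.id == "org.libvirt.unix.manage" &&
        subject.isInGroup("vagrant")) {
        return polkit.Result.YES;
    }
});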

For lxc, the RPM installs a sudoers mechanism that makes sure that members of the vagrant group can use vagrant-lxc without having to enter passwords, so everything is convenient without additional steps.

What next? I am currently thinking about packaging vagrant-cachier and maybe vagrant-mutate, too. But I need to get on with the things I actually wanted to do with vagrant, so for the time being, additional plugins have to be installed by the user with vagrant plugin install ... just as before. :-)

February 15, 2015 11:37 PM

February 13, 2015

Rusty

lguest Learns PCI

With the 1.0 virtio standard finalized by the committee (though minor non-material corrections and clarifications are still trickling in), Michael Tsirkin did the heavy lifting of writing the Linux drivers (based partly on an early prototype of mine).

But I wanted an independent implementation to test: both because OASIS insists on multiple implementations before standard ratification, and because I wanted to make sure the code which is about to go into the merge window works well.

Thus, I began the task of making lguest understand PCI.  Fortunately, the osdev wiki has an excellent introduction on how to talk PCI on an x86 machine.  It didn’t take me too long to get a successful PCI bus scan from the guest, and to start implementing the virtio parts.

The final part (over which I procrastinated for a week) was to step through the spec and document all the requirements in the lguest comments.  I also added checks that the guest driver was behaving sufficiently, but now it’s finally done.

It also resulted in a few minor patches, and some clarification patches for the spec.  No red flags, however, so I’m reasonably confident that 3.20 will have compliant 1.0 virtio support!

February 13, 2015 06:31 AM

February 08, 2015

Jelmer

The Samba Buildfarm

Portability has always been very important to Samba. Nowadays Samba is mostly used on top of Linux, but Tridge developed the early versions of his SMB implementation on a Sun workstation.

A few years later, when the project was being picked up, it was ported to Linux and eventually to a large number of other free and non-free Unix-like operating systems.

Initially regression testing on different platforms was done manually and ad-hoc.

Once Samba had support for a larger number of platforms, including numerous variations and optional dependencies, making sure that it would still build and run on all of these became a non-trivial process.

To make it easier to find regressions in the Samba codebase that were platform-specific, tridge put together a system to automatically build Samba regularly on as many platforms as possible. So, in Spring 2001, the build farm was born - this was a couple of years before other tools like buildbot came around.

The Build Farm

The build farm is a collection of machines around the world that are connected to the internet, with as wide a variety of platforms as possible. In 2001, it wasn't feasible to just have a single beefy machine or a cloud account on which we could run virtual machines with AIX, HPUX, Tru64, Solaris and Linux so we needed access to physical hardware.

The build farm runs as a single non-privileged user, which has a cron job set up that runs the build farm worker script regularly. Originally the frequency was every couple of hours, but soon we asked machine owners to run it as often as possible. The worker script is as short as it is simple. It retrieves a shell script from the main build farm repository with instructions to run and after it has done so, it uploads a log file of the terminal output to samba.org using rsync and a secret per-machine password.
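
In outline, such a worker script amounts to little more than this (a hypothetical sketch; the real script, host names and rsync module paths differ):

#!/bin/sh
# fetch the current build instructions from the master
rsync -q rsync://build.samba.org/scripts/run_build.sh .
# run them, capturing all terminal output
sh ./run_build.sh > build.log 2>&1
# upload the log using the secret per-machine password
rsync -q --password-file="$HOME/.buildfarm-secret" \
    build.log mymachine@build.samba.org::logs/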

Some build farm machines are dedicated, but there have also been a large number over the years that just ran as a separate user account on a machine that was tasked with something else. Most build farm machines are hosted by Samba developers (or their employers), but we've also had a number of community volunteers over the years that were happy to add an extra user with an extra cron job on their machine, and for a while companies like SourceForge and HP provided dedicated porter boxes that ran the build farm.

Of course, there are some security issues with this way of running things. Arbitrary shell code is downloaded from a host claiming to be samba.org and run. If the machine is shared with other (sensitive) processes, some of the information about those processes might leak into logs.

Our web page has a section about adding machines for new volunteers, with a long list of warnings.

Since then, various other people have been involved in the build farm. Andrew Bartlett started contributing to the build farm in July 2001, working on adding tests. He gradually took over as the maintainer in 2002, and various others (Vance, Martin, Mathieu) have contributed patches and helped out with general admin.

In 2005, tridge added a script to automatically send out an e-mail to the committer of the last revision before a failed build. This meant it was no longer necessary to bisect through build farm logs on the web to find out who had broken a specific platform when; you'd just be notified as soon as it happened.

The web site

Once the logs are generated and uploaded to samba.org using rsync, the web site at http://build.samba.org/ is responsible for making them accessible to the world. Initially there was a single perl file that would take care of listing and displaying log files, but over the years the functionality has been extended to do much more than that.

Initial extensions to the build farm added support for viewing per-compiler and per-host builds, to allow spotting trends. Another addition was searching logs for common indicators of running out of disk space.

Over time, we also added more samba.org-projects to the build farm. At the moment there are about a dozen projects.

In a sprint in 2009, Andrew Bartlett and I changed the build farm to store machine and build metadata in a SQLite database rather than parsing all recent build log files every time their results were needed.

In a follow-up sprint a year later, we converted most of the code to Python. We also added a number of extensions; most notably, linking the build result information with version control information so we could automatically email the exact people that had caused the build breakage, and automatically notifying build farm owners when their machines were not functioning.

autobuild

Sometime in 2011 all committers started using the autobuild script to push changes to the master Samba branch. This script enforces a full build and testsuite run for each commit that is pushed. If the build or any part of the testsuite fails, the push is aborted. This alone massively reduced the number of problematic changes that were pushed, making it less necessary for us to be made aware of issues by the build farm.

The conversion to Python also introduced some time bombs into the code. The way we called out to our ORM caused the code to fetch all build summary data from the database every time the summary page was generated. Initially this was not a problem, but as the table grew to 100,000 rows, the build farm became so slow that it was frustrating to use.

Analysis tools

Over the years, various special build farm machines have also been used to run extra code analysis tools, like static code analysis, lcov, valgrind or various code quality scanners.

Summer of Code

Over the last couple of years the build farm has been running happily, and hasn't changed much.

This summer one of our summer of code students, Krishna Teja Perannagari, worked on improving the look of the build farm - updating it to the current Samba house style - as well as various performance improvements in the Python code.

Jenkins?

The build farm still works reasonably well, though it is clear that various other tools that have had more developer attention have caught up with it. If we had to reinvent the build farm today, we would probably end up using an off-the-shelf tool like Jenkins that wasn't around 14 years ago. We would also be able to get away with using virtual machines for most of our workers.

Non-Linux platforms have become less relevant in the last couple of years, though we still care about them.

The build farm in its current form works well enough for us, and I think porting to Jenkins - with the same level of platform coverage - would take quite a lot of work and have only limited benefits.

(Thanks to Andrew Bartlett for proofreading the draft of this post.)

February 08, 2015 12:06 AM

January 20, 2015

Andreas

New uid_wrapper with full threading support.

Today I’ve released a new version of uid_wrapper (1.1.0) with full threading support. Robin Hack, a colleague of mine, spent a lot of time improving the code and writing tests for it. It now survives funny things like forking in a thread. We also added two missing functions and fixed several bugs. uid_wrapper is a tool to help you write tests for your application.

If you don’t know uid_wrapper and wonder what you can do with it, here is an example:

$ id
uid=1000(asn) gid=100(users) groups=100(users),478(docker)
$ LD_PRELOAD=libuid_wrapper.so UID_WRAPPER=1 UID_WRAPPER_ROOT=1 id
uid=0(root) gid=0(root) groups=0(root)

More details about uid_wrapper can be found on the cwrap project website, here.

flattr this!

January 20, 2015 05:06 PM

January 09, 2015

Michael

vagrant with lxc and libvirt on fedora

I recently got interested in Vagrant as a means for automating setup of virtual build and test environments, especially for my Samba/CTDB development, since in particular setup of clustered Samba servers is somewhat involved, and being able to easily produce a pristine new test and development environment is highly desirable.

It took some time for me to get it right, especially because I did not choose the standard virtualbox hypervisor but wanted to stick to my environment that uses lxc for Linux and libvirt/kvm for everything else, but also to some extent because I am now using Fedora as a host and also for many Linux boxes, and I had to learn that vagrant and lxc don’t seem to play best with Fedora. Since others might profit from it as well, I thought I’d publish the results and write about them. This post is the first in a series of articles leading up to an environment where vagrant up on a fedora host provisions and starts, e.g., a 3- (or 4- or …)-node Samba-CTDB cluster in LXC. This first post describes the steps necessary to run vagrant with libvirt and lxc backends on Fedora 21.

Vagrant concepts

There is extensive documentation at docs.vagrantup.com, so just a few introductory words here… Vagrant is a command line program (written in ruby) that executes simple (or also more complicated) recipes to create, provision, start, and manage virtual machines or containers. The main points are disposability and reproducibility: Call vagrant up and your environment will be created out of nothing. Destroy your work environment with vagrant destroy after doing some work, and it is gone for good. Call vagrant up on the same or on a different host and there it is again in a pristine state.

Vagrantfile

The core of a vagrant environment is the Vagrantfile, which is the recipe. The Vagrantfile specifies the resulting machine by naming a virtualization provider (usually), a base image (called box in vagrant lingo) and giving instructions for further provisioning the base box. The command vagrant up executes these steps. Since the Vagrantfile is in fact a ruby program snippet, the things you can do with it are quite powerful.

Boxes

The base boxes are a concept not entirely unlike docker images. There are many pre-created boxes available on the web, that can be downloaded and stored locally, but there is also Hashicorp’s atlas, which offers a platform to publish boxes and which is directly integrated with vagrant, comparable to what the docker hub is for docker.

Providers

Vagrant was created with virtualbox as virtualization provider, but nowadays also supports docker and hyper-v out of the box. My personal work environment consists mostly of libvirt/kvm for windows and non-linux unix systems and lxc for linux, so vagrant does not immediately seem to fit. But after some research I found out that, luckily, these providers do exist externally already: vagrant-libvirt and vagrant-lxc. These and more providers like aws, vmware and native kvm can be easily installed by virtue of vagrant’s plugin system.

Plugins

Vagrant can be extended via plugins which can add providers but also other features. Two very interesting plugins that I’ve come across and would recommend to install are vagrant-cachier, which establishes a host cache for packages installed inside VMs, and vagrant-mutate, which converts boxes between different provider formats — this is especially useful for converting some of the many available virtualbox boxes to libvirt.

Provisioning

The provisioning of the VMs can be done by hand with inline or external shell scripts, but Vagrant is also integrated with puppet, ansible, chef and others for more advanced provisioning.

The vagrant command

The central command is vagrant. It has a nice cli with help for subcommands. Here is the list of subcommands that I have used most:

vagrant box      - manage boxes
vagrant plugin   - manage plugins
vagrant init     - initialize a new Vagrantfile
vagrant up       - create/provision/start the machine
vagrant ssh      - ssh into the machine as user vagrant
vagrant suspend  - suspend the machine
vagrant resume   - resume the suspended machine
vagrant halt     - stop the machine
vagrant destroy   - remove the machine completely

All data that vagrant maintains is user-specific and stored by default under ~/.vagrant.d/, e.g. plugins, required ruby gems, boxes, cache (from vagrant-cachier) and so on.

Vagrant onto Fedora

Fedora does not ship Vagrant, but there are plans to package vagrant — not sure it will make it for f22. One could install vagrant from git, but there is also an RPM on Vagrant’s download site and it installs without problems. So let’s use that for a quick start.

It is slightly irritating at first sight that the RPM does not have any dependencies, though it should require ruby and some gems at the very least. But the vagrant RPM ships its own ruby and a lot of other stuff, even some precompiled system libraries, under /opt/vagrant/embedded, so you can start running it without even installing ruby or any gems in the system. This vagrant is configured to always use the builtin ruby and libraries, and as you will see, this does create some problems.

At this stage, you are good to go when you are using virtualbox (which I am not). For example, there is a very new fedora21-atomic image on atlas jimmidyson/fedora21-atomic. All it takes to fire up a fedora box with this base box is:

vagrant box add jimmidyson/fedora21-atomic --provider virtualbox
mkdir -p ~/vagrant/test
cd ~/vagrant/test
vagrant init jimmidyson/fedora21-atomic
vagrant up

Installing vagrant-lxc

It is actually fairly easy to install vagrant-lxc: the basic call is

vagrant plugin install vagrant-lxc

But there are some mild prerequisites, because the plugin installer wants to install some ruby-gems, in this case json. The vagrant installed from the official RPM installs the gems locally even if a matching version is present in the system, because this vagrant only ever uses the ruby stuff installed under /opt/vagrant/embedded and under ~/.vagrant.d/gems/. For vagrant-lxc, this is not a big deal, we only need to install make and gcc, because the json gem needs to compile some C file. So these are the complete steps needed to install the vagrant-lxc provider:

sudo yum install make gcc
vagrant plugin install vagrant-lxc

Afterwards, vagrant plugin list prints

$ vagrant plugin list
vagrant-lxc (1.0.1)
vagrant-share (1.1.4, system)

Now in order to make use of it, apart from having lxc installed, we need network bridges set up on the host, and we need applicable boxes. For the network, the easiest is to use libvirt network setups, since at least on Fedora, libvirt is set to default to virbr0 anyways. So my recommendation is:


sudo yum install lxc lxc-extra lxc-templates
sudo yum install libvirt-daemon-config-network

This libvirt network component can even be installed when using virtualbox, but if you are planning to use libvirt/kvm anyways, it is a perfect match to hook the lxc containers up to the same networks as the kvm machines, because they can then communicate without further ado.

The task of getting boxes is not that easy. There are really not many of them around: You can search for the lxc provider on atlas and find a dozen or so, mostly ubuntu and some debian boxes by the author of vagrant-lxc. So I needed to create some Fedora boxes myself and this was not completely trivial, in fact it was driving me almost crazy, but this is a separate story to be told. The important bit is that I succeeded and started out with Fedora 21 and 20 boxes which I published on atlas.

So to spin up an f21 lxc box, this is sufficient:

vagrant box add obnox/fedora21-64-lxc
mkdir -p ~/vagrant/lxc-test/
cd ~/vagrant/lxc-test/
vagrant init obnox/fedora21-64-lxc
vagrant up --provider=lxc

I find it entertaining to have sudo watch lxc-ls --fancy running in a separate terminal all the time. After bringing up the machine you can vagrant ssh into the machine and get stuff done. The working directory where your Vagrantfile is stored is bind-mounted into the container as /vagrant so you can exchange files.

Vagrant defaults to virtualbox as a provider, which is why we have to specify --provider=lxc. In order to avoid it, one can either set the environment variable VAGRANT_DEFAULT_PROVIDER to the provider of choice, or add config.vm.provider :lxc to the Vagrantfile. One can also add a block for the provider to enter provider-specific options, for instance to set the lxc container name to be used. Here is an example of a marginally useful Vagrantfile:

Vagrant.configure("2") do |config|
  if Vagrant.has_plugin?("vagrant-cachier")
    config.cache.scope = :box
  end 

  config.vm.define "foo"
  config.vm.box = "obnox/fedora21-64-lxc"
  config.vm.provider :lxc do |lxc|
    lxc.container_name = "vagrant-test-007"
  end
  config.vm.hostname = "test-007"
  config.vm.provision :shell, inline: "echo Hello, world!"
end

Note the conditional configuration of vagrant-cachier: If installed, users of the same base box will benefit from a common yum cache on the host. This can drastically reduce machine creation times, so better make sure it is installed:

vagrant plugin install vagrant-cachier

Installing vagrant-libvirt

Now on to use vagrant with libvirt. In principle, it is as easy as calling

vagrant plugin install vagrant-libvirt

But again there are a few prerequisites and surprisingly also pitfalls. As mentioned above, the vagrant-libvirt installer wants to install some ruby gems, specifically ruby-libvirt, even when a matching version of the gem is already installed in the system. In addition to make and gcc, this plugin also needs the libvirt-devel package. Now the plugin installation failed when I tried to reproduce it on a pristine system with a strange error: The linker was complaining about certain symbols not being available:

"gcc -o conftest -I/opt/vagrant/embedded/include/ruby-2.0.0/x86_64-linux -I/opt/vagrant/embedded/include/ruby-2.0.0/ruby/backward -I/opt/vagrant/embedded/include/ruby-2.0.0 -I. -I/vagrant-substrate/staging/embedded/include -I/vagrant-substrate/staging/embedded/include -fPIC conftest.c -L. -L/opt/vagrant/embedded/lib -Wl,-R/opt/vagrant/embedded/lib -L/vagrant-substrate/staging/embedded/lib -Wl,-R/vagrant-substrate/staging/embedded/lib -lvirt '-Wl,-rpath,/../lib' -Wl,-R -Wl,/opt/vagrant/embedded/lib -L/opt/vagrant/embedded/lib -lruby -lpthread -lrt -ldl -lcrypt -lm -lc"
/lib64/libsystemd.so.0: undefined reference to `lzma_stream_decoder@XZ_5.0'
/lib64/libxml2.so.2: undefined reference to `lzma_auto_decoder@XZ_5.0'
/lib64/libxml2.so.2: undefined reference to `lzma_properties_decode@XZ_5.0'
/lib64/libsystemd.so.0: undefined reference to `lzma_end@XZ_5.0'
/lib64/libsystemd.so.0: undefined reference to `lzma_code@XZ_5.0'
collect2: error: ld returned 1 exit status
checked program was:
/* begin */
1: #include "ruby.h"
2:
3: int main(int argc, char **argv)
4: {
5: return 0;
6: }
/* end */

This drove me nuts for quite a while, since no matter which libraries and programs I installed or uninstalled on the host, it would still fail the same way. The explanation is that there is the system-installed lzma library that libxml2 uses and that uses symbol versioning. But the vagrant RPM ships its own copy in /opt/vagrant/embedded/lib/liblzma.so.5.0.7, so with all the linker settings to the gcc call, this supersedes the system-installed one and the symbol dependencies fail. In the end, I found the cure comparing one system that worked and another that didn’t: The gold linker can do it, while the legacy bfd linker can’t. Spooky…

So finally here is the minimal set of commands you need to install vagrant-libvirt on Fedora 21:

sudo yum install -y vagrant_1.7.1_x86_64.rpm
sudo yum install -y make gcc libvirt-devel
sudo alternatives --set ld /usr/bin/ld.gold
vagrant plugin install vagrant-libvirt

Of course, in order for this to be really useful, one needs to install libvirt properly, I do

yum install libvirt-daemon-kvm

because I want to use the kvm backend.
Afterwards, you can bring up a fedora box like this:

vagrant box add jimmidyson/fedora21-atomic --provider libvirt
mkdir -p ~/vagrant/test
cd ~/vagrant/test
vagrant init jimmidyson/fedora21-atomic --provider libvirt
vagrant up

As already mentioned it is a good idea to install the vagrant-cachier and vagrant-mutate plugins:

vagrant plugin install vagrant-cachier
sudo yum install -y qemu-img
vagrant plugin install vagrant-mutate

With the mutate plugin you can convert some of the many virtualbox boxes to libvirt.

For the fun of it, here is the Vagrantfile I used to develop and verify the minimal installation procedure inside vagrant-lxc… ;-)

VAGRANTFILE_API_VERSION = 2 

INSTALL_VAGRANT = <<SCRIPT
set -e
yum install -y /vagrant/vagrant_1.7.1_x86_64.rpm
yum install -y make gcc
sudo -u vagrant vagrant plugin install vagrant-lxc
yum install -y libvirt-devel
alternatives --set ld /usr/bin/ld.gold
sudo -u vagrant vagrant plugin install vagrant-libvirt
sudo -u vagrant vagrant plugin install vagrant-cachier
yum install -y qemu-img
sudo -u vagrant vagrant plugin install vagrant-mutate
sudo -u vagrant vagrant plugin list
SCRIPT

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  if Vagrant.has_plugin?("vagrant-cachier")
    config.cache.scope = :box
  end 

  config.vm.define "vagrant-test" do |machine|
    machine.vm.box      = "obnox/fedora21-64-lxc"
    machine.vm.hostname = "vagrant-test"
    machine.vm.provider :lxc do |lxc|
      lxc.container_name = "vagrant-test"
    end
    machine.vm.provision :shell, inline: INSTALL_VAGRANT
  end
end

Summary

So now we are in the position to work with vagrant-lxc and vagrant-libvirt on fedora, and we also have fedora lxc boxes available. I am really intrigued by the ease of creating virtual machines and containers. From here on, the general and provider-specific guides on the net apply.

Here is a mock-up transcript of the steps taken to set up the environment:

# use current download link from https://www.vagrantup.com/downloads.html
wget https://dl.bintray.com/mitchellh/vagrant/vagrant_1.7.2_x86_64.rpm
sudo yum install vagrant_1.7.2_x86_64.rpm
sudo yum install make gcc libvirt-devel qemu-img
sudo yum install lxc lxc-extra lxc-templates
sudo yum install libvirt-daemon-kvm

vagrant plugin install vagrant-lxc
vagrant plugin install vagrant-libvirt
vagrant plugin install vagrant-cachier
vagrant plugin install vagrant-mutate

vagrant box add obnox/fedora21-64-lxc
vagrant box add obnox/fedora20-64-lxc
vagrant box add jimmidyson/fedora21-atomic --provider libvirt
vagrant box add purpleidea/centos-7.0
vagrant box add ubuntu/trusty64
vagrant box repackage ubuntu/trusty64  virtualbox 14.04
mv package.box trusty64.box
vagrant mutate trusty64 libvirt
...

What next?

I will post follow-up articles about the problems creating Fedora-lxc-boxes and about the general Vagrantfile mechanism for bringing up complete samba-ctdb clusters. I might also look into using vagrant from sources, potentially looking into fedora packaging, in order to circumvent the discomfort of running a bundled version of ruby, the universe, and everything… ;-)

January 09, 2015 04:26 PM

January 08, 2015

Andreas

Taking your bike on a plane

Here is a totally computer-unrelated post! Several people asked me how I protect my bike to transport it safely on a plane. One possibility is to use a bike box, but the issue with a box is that airline personnel like big boxes, because they can use them to pile a lot of suitcases on top. I prefer to just wrap it with cardboard! Normally I go to a supermarket and ask if they have some cardboard for me. I’m sure they are happy to get rid of some. What you need to bring from home in addition is a piece of rope, zip ties, duct tape, an old towel and a multitool or cutter.

I prepare everything at the supermarket: cut the cardboard for the different pieces of the bike, and put holes in the cardboard for the zip ties (first put duct tape on the cardboard, then make the hole through the duct tape and the cardboard!). Make sure you can still push the bike; the wheels should still turn. In the end I have a small package like this:

Bike on a plane, cardboard collection

It is easy to transport. Either on the back of the bike or on your back ;)

At the airport you remove the pedals and fix the crank. Put the towel over the saddle and fix it with duct tape or a piece of rope. Tie a rope from the saddle to the handlebar so you can carry it. This also makes it easier for the airport personnel to lift it. Then cover the bike with cardboard. Some parts are fixed to the bike with zip ties so they can’t move. In the end it normally looks like this:

Bike on a plane, protected with cardboard

If you’re on a bike trip you normally have four bike panniers with you, but the airline only allows you to check one piece of luggage. Either the airport offers to wrap them, or you go to a grocery store and buy plastic food wrap. It is very thin, so you need about 60 m. It is not really eco-friendly, but it is the only way I know. Suggestions are welcome.

First start to wrap the two biggest panniers:

bike on a plane, panniers.

I use a rope to connect them and to make sure not to lose one. After the two are in good shape (ca. 25 m), put the smaller panniers on the sides and start to wrap the whole thing:

bike on a plane, wrapped panniers

Have a safe trip!


January 08, 2015 06:46 PM

December 20, 2014

Michael

taming the thinkpad’s terrible touchpad

After many years of working with X-series thinkpads, I have come to love these devices: great keyboard, powerful while very portable, durable, and so on. But I am especially an addict of the trackpoint. It allows me to use the mouse from time to time without having to move my fingers away from the typing position. The x230 was the first model I used that additionally features a touchpad. Well, I hate these touchpads! I keep moving the mouse pointer with the balls of my thumbs while typing, which is particularly irritating since I have my system configured to “focus-follows-mouse”. Now with the x230 that was not a big problem, because I could simply disable the touchpad in the BIOS and keep using the trackpoint and the three mouse buttons that are positioned between keyboard and touchpad. So far so good.

For three weeks now, since my start at Red Hat, I have been using an x240. It is again really nicely done: great display, very powerful, awesome battery life, … But Lenovo has imho committed an unspeakable sin with the change to the touchpad: the separate mouse buttons are gone, and instead there are soft keys integrated into regions of the touchpad. Not only are the buttons much harder to hit, since the areas are much harder to feel with the fingertips than the comparatively thick buttons of the earlier models, but the really insane consequence for me is that I can’t disable the touchpad in the BIOS, since that also disables the buttons! This rendered the laptop almost unusable unless docked, with external mouse and keyboard. It was a real pain. :-(

But two days ago GLADIAC THE SAVIOR gave me the decisive hint: set the TouchpadOff option of synaptics to the value 1. Synaptics is, as I learned, the Xorg X11 touchpad driver, and this option disables the touchpad except for the button functionality. Exactly what I need. With a little bit of research I found out that my brand new Fedora 21 supports this out of the box. Because I am still finding my way around Fedora, I only needed to find the proper place to add the option. As it turns out,

/usr/share/X11/xorg.conf.d/50-synaptics.conf

is the appropriate file, and I added the option to the “Lenovo TrackPoint top software buttons” section.
Here is the complete patch that saved me:

--- /usr/share/X11/xorg.conf.d/50-synaptics.conf.ORIG 2014-12-18 22:53:18.454197721 +0100
+++ /usr/share/X11/xorg.conf.d/50-synaptics.conf 2014-12-19 09:03:44.143825508 +0100
@@ -57,13 +57,14 @@
 # Some devices have the buttons on the top of the touchpad. For those, set
 # the secondary button area to exactly that.
 # Affected: All Haswell Lenovos and *431* models
 #
 # Note the touchpad_softbutton_top tag is a temporary solution, we're working
 # on a more permanent solution upstream (likely adding INPUT_PROP_TOPBUTTONPAD)
 Section "InputClass"
         Identifier "Lenovo TrackPoint top software buttons"
         MatchDriver "synaptics"
         MatchTag "touchpad_softbutton_top"
         Option "HasSecondarySoftButtons" "on"
+        Option "TouchpadOff" "1"
 EndSection
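
By the way, the same option can also be flipped at runtime with the synclient tool that ships with the synaptics driver - handy for experimenting before touching any config files:

synclient TouchpadOff=1   # touchpad off, buttons keep working
synclient TouchpadOff=0   # touchpad back on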

Now I can enjoy working with the undocked thinkpad again!

Thanks gladiac! :-)

And thanks of course to the developers of the synaptics driver…

December 20, 2014 11:21 PM

December 19, 2014

Michael

tmux with screen-like key-bindings

I recently started switching from screen to tmux for my daily workflow, partly triggered by the increasing use of tmate for pair-programming sessions.

For that purpose I wanted the key-bindings to be as similar as possible to the ones I am used to from screen, which mostly means changing the prefix (hotkey) from Ctrl-b to Ctrl-a. This is achieved in the awesome tmux configuration file ~/.tmux.conf by

set-option -g prefix C-a

Now in screen, you can send the hotkey through to the application by typing Ctrl-a followed by a plain a. I use this frequently, e.g. for sending Ctrl-a to the shell prompt instead of reaching for Pos1 (Home). Tmux offers the send-prefix command specifically for this purpose, which can be bound to a key. My ~/.tmux.conf file already contained

bind-key a send-prefix

According to the tmux manual page, this complete snippet should make it work:


set-option -g prefix C-a
unbind-key C-b
bind-key C-a send-prefix

but it was not working for me! :-(

Entering the command bind-key C-a send-prefix interactively in tmux (command prompt triggered with Ctrl-a :) worked, though. After experimenting a bit, I found out that the order of options seems to matter here: once I put the bind-key a send-prefix line before the set-option -g prefix C-a one, it started working. So here is the complete snippet that works for me:


bind-key a send-prefix
unbind-key C-b
set-option -g prefix C-a

I am not sure whether this is some specialty of my new Fedora; on a Debian system, the problem does not seem to exist...

In order to complete the screen-like mapping of Ctrl-a, let me mention that bind-key C-a last-window lets a double Ctrl-a toggle between the two most recently used tmux windows. So here is the complete part of my config with respect to setting Ctrl-a as the hotkey:


bind-key a send-prefix
unbind-key C-b
set-option -g prefix C-a
bind-key C-a last-window

Note that the placement of the last-window setting does not make a difference.
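
One more practical note: ~/.tmux.conf is only read when the tmux server starts, but a running server can be told to re-read it without a restart:

tmux source-file ~/.tmux.conf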

December 19, 2014 11:27 AM

December 10, 2014

David

Accénts & Ümlauts - A Custom Keyboard Layout on Linux

As a native English speaker living in Germany, I need to be able to reach the full Germanic alphabet without using long key combinations or (gasp) resorting to a German keyboard layout.

Accents and umlauts on US keyboards aren't only useful for expats. They're also enjoyed (or abused) by a number of English-speaking subcultures:

  • Hipsters: "This is such a naïve café."
  • Metal heads: "Did you hear Spın̈al Tap are touring with Motörhead this year?"
  • Teenage gamers: "über pwnage!"

The standard US system keyboard layout can be enhanced to offer German characters via the following key mappings:
Key   Key + Shift   Key + AltGr (Right Alt)   Key + AltGr + Shift
e     E             é                         É
u     U             ü                         Ü
o     O             ö                         Ö
a     A             ä                         Ä
s     S             ß                         ß
5     %             €

With openSUSE 13.2, this can be configured by first defining the mappings in /usr/share/X11/xkb/symbols/us_de:
partial default alphanumeric_keys
xkb_symbols "basic" {
    name[Group1] = "US/ASCII";
    include "us"

    key <AD03> { [ e, E, eacute,     Eacute     ] };
    key <AD07> { [ u, U, udiaeresis, Udiaeresis ] };
    key <AD09> { [ o, O, odiaeresis, Odiaeresis ] };
    key <AC01> { [ a, A, adiaeresis, Adiaeresis ] };
    key <AC02> { [ s, S, ssharp,     ssharp     ] };
    key <AE05> { [ NoSymbol, NoSymbol, EuroSign ] };

    key <RALT> { type[Group1] = "TWO_LEVEL",
                 [ ISO_Level3_Shift, ISO_Level3_Shift ] };

    modifier_map Mod5 { <RALT> };
};

Secondly, specify the keyboard layout as the system default in /etc/X11/xorg.conf.d/00-keyboard.conf:
Section "InputClass"
Identifier "system-keyboard"
MatchIsKeyboard "on"
Option "XkbLayout" "us_de"
EndSection
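
To give the new layout a quick try in the running X session before making it the system default, setxkbmap can load it directly (assuming the us_de symbols file from above is in place):

setxkbmap us_de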

Achtung!: IBus may be configured to override the system keyboard layout - ensure this is not the case in the IBus Preferences.
Once in place, the key mappings can be easily modified to suit specific tastes or languages - viel Spaß!

December 10, 2014 08:56 PM
