Archive for the ‘Open Source’ Category

Upgrading to HTTP2

Upgrading to HTTP/2 on Apache is incredibly easy on Debian testing:

First, let’s set up Let’s Encrypt, so we serve HTTPS traffic (browsers only support HTTP/2 over TLS).

$ apt-get install certbot python-certbot-apache

In the wizard, enable all sites. Preferably, force a redirect to HTTPS.

Once that is installed, you can enable the HTTP/2 module with a2enmod http2.
Note: If you are using HTTP/2 with the prefork MPM, you must use PHP-FPM or FastCGI; mod_php will not work.
If you are using the worker or event MPM, everything will be fine. Apache 2 on Debian testing defaults to event.

Now enable the protocol in the site that was generated by certbot, e.g.:

vim /etc/apache2/sites-available/

Add the line “Protocols h2 http/1.1”.
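Inside the VirtualHost generated by certbot, the result looks roughly like this (server name and certificate directives are placeholders for whatever certbot generated):

```apache
<VirtualHost *:443>
    ServerName example.org
    # Prefer HTTP/2, fall back to HTTP/1.1 for old clients
    Protocols h2 http/1.1
    # ... SSL directives generated by certbot ...
</VirtualHost>
```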

systemctl restart apache2

Done! Welcome, HTTP/2.

p.s.: a great video about the benefits and drawbacks of HTTP/2:

Lessons Learned: Writing a filesystem in D

I recall when people first proposed writing a read-only filesystem for an internal project at work. While I cannot talk much about what we have implemented, I can at least say it made and still makes sense to solve our problem by implementing a filesystem.

Filesystems can be written in many ways, and their implementation specifics and the problems they try to solve range from easy to very hard. Filesystems such as BTRFS, ext4 or ZFS take years or even decades to write and stabilize. They are implemented in kernel land, are hard to debug and use sophisticated data structures for maximum performance and reliability. However, with the addition of FUSE in the 2.6.14 Linux kernel and its subsequent porting to other POSIX operating systems such as FreeBSD, OpenBSD and MacOS, it became much easier to write and prototype filesystems. These filesystems are userland processes that talk to a simple kernel driver, which redirects filesystem calls to the userland process when necessary. In our particular case, we didn’t need maximum performance, and we could rely heavily on kernel-side caching, which made writing a FUSE filesystem a reasonable choice.


To get a proof of concept, I decided to write a first version in Python. fusepy is a well-known and stable implementation. Writing the initial prototype was a pleasure: most things worked out of the box and we knew our idea worked. However, filesystems aren’t your standard piece of software and come with a set of performance characteristics that made fusepy and Python a good choice for a prototype but a rather mediocre choice for a production-ready implementation. Most programs obtain data, do some processing and write the data somewhere; most of their time is spent in userland running the program. Filesystems are slightly different. A program like ls will try to obtain information about a list of files by issuing multiple, often hundreds of stat calls to the kernel, causing context switches into the kernel. Once the kernel has figured out that we are a FUSE filesystem, it hands the task of returning the file information to the FUSE kernel module. The FUSE module then takes a look into a cache and, if necessary, issues a request via IPC to the FUSE userland daemon and subsequently to the userland program. At that point we have already doubled our context switches compared to a regular kernel filesystem. Now that we have a stat request in userland, we try to answer it with as little effort as possible (we still have 99 other requests to go) and return it to the kernel, which then returns the result via the virtual filesystem layer back to the userland process that issued the request, effectively a 100% context-switch overhead compared to traditional kernel filesystems. To make things worse, Python also takes the global interpreter lock in the meantime, allowing only one request to be served at a time. There is also an overhead in calling a Python function compared to a C function, which, given the number of requests, can add up significantly. In addition, Python needs to do some data structure juggling.
While this barely gives you a full picture of why Python is not necessarily a perfect choice for a production-ready implementation, it’s enough to say it’s not optimal.

Choosing a language

Now that we have our prototype, we want to get to a production-ready version fast, but we also want to minimize overhead to get good performance. At this point we consider a reimplementation of the Python prototype in a language more suitable for high-performance systems programming. What are our common choices? C, C++, Go, Rust, Haskell and D are common competitors in the systems programming world. So let’s see…


C and C++

Without a doubt, C and C++ are reasonable choices. They have by far the best support for FUSE, as libfuse is written in C, and they are great for high-performance applications. However, you will need to do your memory management yourself and deal with classes of potential errors that might not even exist in other languages, such as a multitude of undefined behaviors, memory management mistakes and certain types of overflows. In the end, while we all agreed they are conservative, reasonable choices, we opted for a language we believed would get us to a working version faster.


Go

Go seemed interesting but fell through quite fast upon the realization that calling from C into Go can incur a 10x overhead due to stack rewriting. This is caused by Go not following C calling conventions and using different stack layouts.


Haskell

A great choice, but due to lack of expertise, ramp-up time, etc., we decided against it.


Rust

Certainly a great choice nowadays. The project I am talking about is a bit older, and back then Rust was not stable at all and could very well change behavior. So in addition to learning a new language, we didn’t want to deal with constantly changing APIs and language features. These days I would certainly reconsider using it.

D and the good parts

As mentioned in earlier blog posts, D feels like Python in many ways but has C-like performance characteristics. In addition, it’s fairly mature, it uses C calling conventions, and we had expertise in using it. It works well on MacOS and Linux, which is all we cared about (in fact, D works pretty well on Windows too). So we went for it. We knew from the beginning we had to write a small wrapper around libfuse. D is very similar to C and C++ and really cares about interoperability, so writing a wrapper seemed an easy task, and in fact within a few days we had a working version that grew as we needed. Once we started rewriting the Python prototype on top of the new library, we were quite impressed by how similar the languages are and how fast we got to a working version. D’s string handling and its built-in hash maps and dynamic arrays made it a pleasure to work with.
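To give an idea of what such a wrapper involves, here is a heavily simplified sketch (this is not dfuse’s actual API) of declaring a C-compatible callback and a partial mirror of the C struct that carries it:

```d
import core.sys.posix.sys.stat : stat_t;

// A partial, hypothetical mirror of C's struct fuse_operations.
// The field order and types must match the C header exactly.
extern (C) struct fuse_operations
{
    int function(const(char)*, stat_t*) getattr;
    // ... many more callbacks ...
}

// extern(C) gives the function C linkage and calling convention,
// so libfuse can invoke it directly through the function pointer.
extern (C) int fs_getattr(const(char)* path, stat_t* st)
{
    // fill *st here; return 0 on success or a negative errno value
    return 0;
}
```

The mirrored struct is exactly the synchronization hazard discussed in the next section: nothing checks it against the C header.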

The not-so-good parts

One thing we noticed fairly quickly is that synchronization between the D wrapper and the C header files can be painful. If the headers change, you will not notice unless you know they changed; the declarations can diverge and miss important calls. We had to add some wrappers in the standard library to get what we needed. Worse, if you make a mistake in the data layout, your program will still compile fine. However, it might randomly crash (that’s the good case) or just behave weirdly in rare cases, because you might have written data to a wrongly aligned field, and the C library will happily read whatever you write and interpret it as something else entirely (I can tell you, it’s really fun when you get random results from your stat() syscall).

When porting to MacOS, it turned out that MacOS encodes whether the operating system supports 64-bit inodes in its C header files and heavily depends on ifdefs around it, which we simply could not port to D, as it (luckily) doesn’t have a preprocessor. So we had to start creating unpleasant workarounds. Another issue we came across is D’s shared attribute. The basic idea is that you have a transitive notion of shared data which can safely be shared across multiple threads. This works great as long as you assume storage is thread-local by default. The D compiler and the runtime support all this, but it’s based on the assumption that the D runtime controls the threading and the data passed around…

A key difference
At this point it became apparent that there is a key difference between your standard D project and what we were trying to do. In most projects, your program is the main program and it calls into other libraries. In the FUSE world, your program is wrapped in the libfuse context and only interacts with the world through callbacks you handed to libfuse. At that point you completely pass control to FUSE, and your code only gets called when FUSE decides so. Worse, libfuse also controls the threading.

So back to shared. Well, we initialize all our data structures before libfuse starts threading, then we pass the data structures around. At that point we could either make everything shared and hope it somehow works (that turned out to break half of the code due to incompatible qualifiers in the D libraries), or just declare everything as __gshared and basically circumvent the whole shared business. De facto, everything in our code is shared anyway. So when we have to use interfaces that require shared qualifiers, we just cast back and forth. That wasn’t really pretty, and we started realizing that this was just the tip of the iceberg.

One bug comes to mind. On a slightly unrelated note: we tried D’s contracts, which caused the compiler to crash. I wasn’t necessarily impressed to discover a compiler bug in fairly straightforward code. But well, those things happen.

The thing with the runtime

At that point we had everything reimplemented and it worked like a charm. So we decided to enable multithreading in libfuse. Suddenly, our program started to crash instantly. We rallied all our D expertise: people who have worked on D itself, its libraries, alternative compiler implementations, etc., and tried to debug what on earth was going on. We soon found out that disabling garbage collection made the problem go away. While that is “okay” for a short-lived program, it’s a not-so-okay workaround for a filesystem that potentially runs as long as your computer does. So, more digging into druntime, and after days we eventually figured out that druntime didn’t know about the threads the FUSE library created. So when the garbage collector comes along and tries to stop the world (as in, stop every thread but the current one) to perform a collection, it doesn’t know about any thread but the current one and stops nothing. All the other threads happily write data while the garbage collector runs. BOOM! However, libfuse doesn’t tell you when and how many threads it will create, so how do you tell druntime about them? In the end, upon every call we simply check whether we know about the current thread, attach it if we don’t, and correctly detach it later. There were also some MacOS-specific problems around multithreading, as osxfuse tried to join certain threads from time to time which druntime considered attached. A long and tough deep dive into osxfuse and core.thread followed, until eventually, with some help from the D folks, we figured out that we have to use pthread_cleanup to detach the thread from druntime.
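The check we ended up running at the start of every callback looks roughly like this (a simplified sketch; the real code also has to arrange detaching via pthread_cleanup as described above):

```d
import core.thread : Thread, thread_attachThis;

// Run at the top of every FUSE callback: if libfuse created this
// thread behind druntime's back, Thread.getThis() returns null and
// we register the thread so the GC can suspend it during collections.
void ensureThreadAttached()
{
    if (Thread.getThis() is null)
        thread_attachThis();
}
```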

In the end, lessons were learned. More generally, embedding a runtime, in particular one with a garbage collector, into a project that gets called from C/C++ contexts leads to a lot of “interesting” and mostly undocumented issues.

A final note

In the end I am not sure whether I would choose D again. Modern C++14 is fairly nice to write, and the amount of manual memory management in our case isn’t that bad. However, now that we have solved all the runtime issues and have a fairly stable version, we are back to the fun and fast turnaround times that we like. So overall there is still a net win in choosing D.

Also, D is moving more and more towards a safe but GC-less memory model, which is something we definitely want to investigate once more of the ref-counting and allocator code lands in upcoming D versions. I am still happy with D in general, but a bit less optimistic and more critical when choosing it than before, particularly for writing a filesystem.

The D Language: A sweet-spot between Python and C

Python has been one of my favorite languages since I started contributing to the Mercurial project. In fact, Mercurial being written in Python instead of git’s C/Bash codebase was an incentive to start working on Mercurial. I admired its clean syntax, its functional patterns, including laziness through generators, and its ease of use. Python is very expressive, and thanks to its batteries-included standard library, it’s very powerful right from the beginning. I wanted to write everything in Python, but I reached limitations: CPU-intensive code couldn’t easily be parallelized, its general performance was limiting, and you had to either ship your own Python or rely on the system Python.

For deep system integration into libraries, performance and static binaries, I still relied on C. One might argue that I should have used C++ over C, but I really loved C’s simplicity. All its syntax could fit in my brain, and while it’s not typesafe (see void pointers), it’s incredibly fast and you can choose from the largest pool of libraries available.

So while C gave me power and performance, Python gave me memory safety, expressiveness, and freedom from worrying about types. I’ve always looked for a middle ground: something fast, something I can easily hook into C (before there was Cython), and something expressive. Last year I found D and was lucky enough to be allowed to implement a FUSE filesystem in D and an open source library around it: dfuse.

For once, it feels like I found a language that hits a sweet spot between expressiveness, closeness to the system, and performance. There are drawbacks, for sure; I’ll talk about those in another post.

What is D? D is a programming language developed by Walter Bright and Andrei Alexandrescu over the last 10 years. Its newest incarnation is D2, which has been in development since 2007 and available since 2010. Both have worked extensively with C++ in the past and incorporated the good parts while redesigning the bad parts. The current version is 2.066.1. Multiple compilers are available: the reference implementation DMD, a GCC frontend called GDC and an LLVM-based compiler named LDC. In addition, a runtime is distributed which provides thread handling, garbage collection, etc. A standard library called Phobos is available and usually distributed together with druntime (in fact, druntime is compiled into the standard library).

What makes D beautiful? First of all, D is statically typed, but in most cases types are inferred through `auto` (which C++ borrowed later on). Secondly, D has excellent support for interacting with C and C++ (just recently, namespace resolution for C++ symbols was added). But most importantly, D offers a variety of high-level data structures as part of the language. This allows for fast and efficient prototyping. Solutions in D tend to be a nice mixture of a “Python”-style approach using standard datatypes and a “C”-style approach of using specialized data structures when needed.

In addition, D offers a very elegant template syntax to provide powerful abstractions. For example, the following code generates a highly optimized version of the filter for every lambda you pass in (unlike C, where we would need a pointer dereference):
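The code sample seems to have gone missing here; a small example of the kind of thing meant, using Phobos’ std.algorithm.filter, might look like this:

```d
import std.algorithm : filter;
import std.array : array;
import std.stdio : writeln;

void main()
{
    auto xs = [1, 2, 3, 4, 5];
    // Each distinct lambda instantiates its own specialized version
    // of filter at compile time: no function-pointer indirection as
    // in C's qsort-style callbacks.
    auto evens = xs.filter!(a => a % 2 == 0).array;
    writeln(evens); // [2, 4]
}
```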

But my arguably most loved feature is uniform function call syntax (UFCS). Think about a structure that has a set of functions:

but you want to add a new method `drop`, and you can’t modify the original code. UFCS allows you to use the dot notation, treating the first argument as the receiver:
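The original snippets are missing; a hypothetical Stack example illustrates the idea:

```d
import std.stdio : writeln;

struct Stack
{
    int[] data;
    void push(int v) { data ~= v; }
    int pop() { auto v = data[$ - 1]; data.length--; return v; }
}

// A free function outside the struct. UFCS rewrites s.drop(n)
// into drop(s, n), so it reads like a method.
int[] drop(ref Stack s, size_t n)
{
    int[] dropped;
    foreach (_; 0 .. n)
        dropped ~= s.pop();
    return dropped;
}

void main()
{
    Stack s;
    s.push(1); s.push(2); s.push(3);
    writeln(s.drop(2)); // [3, 2]
}
```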

This gives you a powerful way to add abstractions on top of existing libraries without having to modify or fork their source code.

Get Started

If you are interested in giving D a try, a few useful resources:

Compiler: The DMD compiler is the reference compiler. It does not produce code as efficient as GDC’s, but it usually supports the most recent features. You can obtain it from

Introductions and Books: I really like ‘The D Programming Language‘ by Andrei Alexandrescu. Sadly the book must be purchased, but it’s worth it. There is also an excellent tutorial by Ali Çehreli.

Help: The D community is very approachable and helpful. Just write a post on the D forums or mailing list:

Hacking: Want to hack on D or its standard library Phobos? It’s all on GitHub:

This is just a very short overview of why I am starting to love D. I’ll go into more detail about the drawbacks, C interaction and using the standard library in the next few weeks. More D to come.

Mercurial vs. Git vs. Bazaar: The aftermath

Over the last years, the version control system community has fought what some people would call the “VCS war”. People argued on IRC, at conferences and on mailing lists; they wrote blog posts and upvoted HN articles about which was the best version control system out there. The term “VCS war” is borrowed from the “editor wars”, the constant fight over which of the major editors (Vim or Emacs, later TextMate or Sublime, then Vim and Emacs again) is the best. It is similar to discussions about programming languages, shell environments, window managers and so on and so forth. What they all have in common is that they are tools used daily by software engineers, and therefore a lot of people have an opinion on them.

When both Git and Mercurial were released in 2005, and Bazaar followed shortly after, the fight over which of the three systems was best began. While early Mercurial versions seemed to be much easier to use than Git, Git was already used by the Linux kernel and built up a strong following. Things were even until 2008, when GitHub launched, changed the open source world, and became what people would consider Git’s “killer app”. Mercurial’s equivalent, Bitbucket, never reached GitHub’s popularity. But people kept writing articles, arguing about merging and rebasing, arguing about performance and the ability to rewrite history, and writing long blog posts about branching strategies confusing enough that they had to write helper tools, about which they could then write articles again… and so on and so forth.

Recently, things have become quiet. But why is that? What happened to Git, Mercurial and Bazaar?


Bazaar

I haven’t followed Bazaar’s history much. Its most notable users were MySQL and Ubuntu. In its early development, Bazaar lacked performance and couldn’t keep up with Git and Mercurial. It tried to solve this by changing the on-disk format a few times, requiring users to upgrade their servers and clients. Development was mostly driven by Canonical, and they had a hard time attracting more active developers. In the end there isn’t much to say about Bazaar: its development slowly ceased, and it’s widely considered the big loser of the VCS wars. Bazaar is dead.


Mercurial

Mercurial started out for the very same reason Git was created and was developed at the same time Linus wrote Git. Both had fast-growing, active development groups and were about equally used in the first years. While Git was the “faster” decentralized version control system, Mercurial was widely considered the more user-friendly one. Nevertheless, with the rise of GitHub, Mercurial lost traction. Development continued, however, and while more and more people used Git and GitHub, the Mercurial community worked on some new ideas. Python picked it as its version control system in 2012, and Facebook moved to Mercurial in 2013. So what’s so interesting about Mercurial?

  1. Mercurial is extensible: It’s written mostly in Python and has a powerful extension API. Writing a proof of concept of a new backend, or adding additional data to be transferred on clone, is fairly easy. This is a big win for the Python and Mozilla communities, as it makes it easy for them to adapt Mercurial to their needs.
  2. Mercurial caught up on Git features and performance: Mercurial added “bookmarks”, “rebase” and various other commands to its core functionality and constantly improved performance.
  3. Mercurial has new ideas: Mercurial came up with three brilliant ideas in the last three years. First, it introduced a query language called “revsets” which lets you easily query the commit graph. Second, it introduced “phases”, a barrier that prevents users from accidentally changing or rebasing already-published changesets, a common mistake for Git users. And last but not least, changeset evolution, an experimental feature that helps you safely modify history and keep track of the history of a changing commit.

So while Mercurial is certainly not the winner, it found a niche with a healthy and enthusiastic community. It’s worth giving it a try if you are not 100% happy with Git.


Git

The big winner obviously is Git. The introduction of GitHub pushed Git usage. GitHub’s easy-to-approach fork&merge mechanism revolutionized open source development to the point where most younger projects don’t use mailing lists anymore but rather rely on pull requests and discussions on GitHub issues. GitHub’s features and community are attractive enough for people to learn Git. In addition, Git had a healthy and vocal community creating blog posts, introduction videos and detailed technical explanations. Nowadays, Git’s market share is big enough that companies move from Subversion to Git because a new hire will more likely know Git than any other version control system (except maybe SVN). As an open source developer, there is no way around Git anymore. Moreover, development is going on at a rapid pace, and the community constantly improves performance and is slowly reaching the v2.0 milestone. It remains to be seen whether they will port some of the ideas from Mercurial. A major challenge for Git, however, is still dealing with large repositories, something the Mercurial community has at least partly solved. If you haven’t learned Git, learn it; there isn’t going to be a way around it anyway.

A conclusion

The war is over, and we are all back to working on interesting features in our favorite version control system. Nobody needs to write blog posts anymore about which system is better, and you certainly won’t be able to avoid Git entirely.

Short update…

$ curl -I | grep X-Powered-By
X-Powered-By: HPHP


Play around: BGP and the DN42

As far as I am concerned, networking is one of the most fascinating aspects of computing. Connecting people and systems sounds like an easy problem to solve, but looking at the details and the scale of something like the internet shows that networking is far from easy. While most developers and administrators understand the basics of local IP routing and maybe even OSPF, not many understand how global-scale, carrier-focused networking works. To understand how the internet works, one has to understand routing. To understand routing at a global, internet scale, one has to understand the exterior Border Gateway Protocol (eBGP).

Now, with our local setups, BGP isn’t really something we use on a daily basis (unless you work at DE-CIX, AMS-IX or Level3). We need a bigger network to learn about the details. While we are obviously not able to learn about BGP on the real internet, we can build something similar to the internet to help us learn and hack on this stuff.

welcome to the dn42

The dn42 is a darknet built by members of the German Chaos Computer Club. It connects people at an ISP level and replicates common internet services like a registry, anycast DNS, whois, etc. The project aims to facilitate understanding of how the internet works and to build up a darknet at the same time.

The dn42 uses an address range out of IANA’s reserved, non-publicly-routed subnets. A participant in the dn42 usually allocates a /24 from this pool by entering the necessary information into the registry. The registry is built similarly to the RIPE registry. Once a subnet is allocated, a participant has to start peering and announcing the network. The participant has a role similar to, e.g., an ISP on the internet, which announces its allocated subnet to the internet. Just like ISPs and Tier 3 up to Tier 1 carriers, participants in the dn42 talk eBGP with each other over a secured line.

how the internet works (for dummies)

So how does your computer know how to reach a host somewhere on the internet? Your PC got a gateway address from your ISP, so your computer simply sends everything it doesn’t know how to route to the gateway. But how does the ISP know where the destination is? The internet folks thought about this for a very long time before they came up with BGP. In BGP, an internet router advertises to its neighbors which routes it can handle itself. For example, the German Telekom asks the European internet registry for a subnet. Once the subnet is allocated, their routers start to announce to their peering partners (e.g. the French Telecom, the Austrian Telecom, Google, etc.) that they are now capable of routing that subnet. The neighbors in turn announce this new route to their neighbors, and so on. Routes are handled by autonomous systems (AS). The German Telekom owns an autonomous system, and within it they do whatever routing is necessary to get to the right computer. For other autonomous systems like the French Telecom, all that matters is that subnet W.X.Y.Z/NN is handled by the German Telekom AS: they know to send a packet destined for that subnet to the German Telekom AS and don’t have to deal with it any further. So if the French Telecom sees a packet in their network that they know the German Telekom can handle, they just forward it and are done with it.

In the dn42, the network works just like that, only at a small scale. You are a big ISP: you have an AS number, a number of peering ASes, and you announce the subnet you can route. All autonomous systems will deliver packets for that subnet to you; it’s your responsibility to route them properly. In addition, if you can reach an AS faster (in fewer hops) than another, your peers will start sending you packets to forward to the appropriate AS. So as an ISP you handle traffic destined for you, but also forward packets to other ASes if necessary.
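The “fewer hops wins” part can be illustrated with a toy route-selection function. This is only a sketch of one BGP tie-breaker (shortest AS path); real BGP applies many more rules (local-pref, MED, etc.), and the AS numbers below are made-up private ASNs:

```python
def best_route(routes):
    """Pick the route with the shortest AS path.

    routes: list of (next_hop_as, as_path) tuples for the same prefix.
    """
    return min(routes, key=lambda route: len(route[1]))

# Two announcements for the same /24: the two-hop path beats the three-hop one.
routes = [
    ("AS64512", ["AS64512", "AS64513", "AS64514"]),
    ("AS64515", ["AS64515", "AS64514"]),
]
print(best_route(routes)[0])  # AS64515
```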

your first subnet

So you allocated your new /24 subnet. As the owner of the fresh subnet, you have to start announcing it within the dn42 by telling all other participants in the network that your router is responsible for routing the whole subnet, i.e. “announcing” the route to the network. As on the internet, this is done using the exterior Border Gateway Protocol (eBGP). Common software implementations for small-scale purposes like the dn42 are BIRD and Quagga. To establish a first connection to the network, a participant needs a trusted peer willing to peer with them. The connection to the peer is established using a VPN tunnel, usually OpenVPN or IPsec. Once the tunnel is established, the participant configures BIRD to start announcing the subnet:

# Configure logging
log syslog { debug, trace, info, remote, warning, error, auth, fatal, bug };

# Override router ID
router id;

# This pseudo-protocol performs synchronization between BIRD's routing
# tables and the kernel. If your kernel supports multiple routing tables
# (as Linux 2.2.x does), you can run multiple instances of the kernel
# protocol and synchronize different kernel tables with different BIRD tables.
protocol kernel {
	scan time 20;		# Scan kernel routing table every 20 seconds
	import all;
	export where source != RTS_STATIC;
}

# This pseudo-protocol watches all interface up/down events.
protocol device {
	scan time 10;		# Scan interfaces every 10 seconds
}
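What the snippet above doesn’t show is the actual BGP session. A minimal peering section in BIRD 1.x syntax might look like the following; the AS numbers and the policy are placeholders and will differ for every peering (the neighbor’s tunnel address has to be filled in as well):

```
# eBGP session to one dn42 peer over the VPN tunnel.
# 64512/64513 are placeholder private AS numbers.
protocol bgp peer1 {
	local as 64512;
	neighbor as 64513;
	import all;	# accept the routes the peer announces
	export all;	# announce our routes to the peer
}
```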

Moving again

It’s been a long time since we moved to a new server, but now it’s finally moving time again. This means we are no longer running Solaris; we are back on Linux with KVM virtualisation.

There isn’t much to say about it: we are running standard KVM hosts with virtio for both network and block device access. Disks are mapped to LVM partitions on the host, and we use Kernel Samepage Merging to overcommit memory. That being said, Solaris served us very well over the years, and we are going to miss some of its capabilities. Zones + ZFS + Crossbow really worked well for us, and it was fun to work with a system that creates boot environments and snapshots before upgrading.

It was easy to manage, easy to reallocate memory and storage, and easy to create complex virtual networks. In the end we moved away not because we think Linux is better, but because there is much more software available for Linux (particularly some experimental stuff I want to get my hands on) and because Solaris went closed source again. It was too hard to get updates and up-to-date software in the end.

Anyway, this is just a short heads-up that I am still blogging. More stuff coming up soon.

Probing PHP with Systemtap on Linux

DTrace is a dynamic tracing tool built by Sun Microsystems and is available for Solaris, MacOS and FreeBSD. It features a tracing language which can be used to probe certain “probing points” in kernel or userland. This can be very useful for gathering statistics, etc. Linux comes with a separate solution called SystemTap. It also features a tracing language and can probe both userland and kernel space. A few Linux distributions such as Fedora enable SystemTap in their default kernel.

PHP introduced DTrace support with PHP 5.3, enabling probing points in the PHP executable that simplify probing of PHP applications without having to know the PHP implementation details. We enabled probes on function calls, file compilation, exceptions and errors. But this has always been limited to the operating systems that support DTrace. With the popularity of DTrace, the SystemTap developers decided to add a DTrace compatibility layer that allows DTrace probes to be used as SystemTap probing points as well.

With my recent commit to the PHP 5.5 branch, DTrace probes can now be built on Linux, so people can use SystemTap to probe those userland probes.

To compile PHP with userland probes, you need to obtain PHP 5.5 from git:

$ git clone git:// php-src
$ cd php-src
$ git checkout PHP-5.5
Now build PHP with DTrace support. First we have to rebuild configure, as we build directly from the repository. Make sure your Linux distribution comes with SystemTap and uprobes support.

$ ./buildconf --force
$ ./configure --disable-all --enable-dtrace
$ make
After the build is done, we can check whether any probes were found:

$ stap -l 'process.provider("php").mark("*")' -c 'sapi/cli/php -i'
Let’s build a short SystemTap script that counts the calls of each function. We use the function-entry probe for that:

$ cat request.stp
global callcount;
probe process.provider("php").mark("function-entry") {
    callcount[user_string($arg1)] += 1;
}
probe end {
    printf("count : function\n");
    foreach (name in callcount) {
        printf("%5d : %s\n", callcount[name], name);
    }
}
$ sudo stap -c 'sapi/cli/php test.php' request.stp
count : function
  100 : foo
  101 : bar
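The test.php used above isn’t shown; a hypothetical version that would produce counts like those could look like this:

```php
<?php
function bar() { }
function foo() { bar(); }

bar();                          // one extra call: 101 x bar
for ($i = 0; $i < 100; $i++) {
    foo();                      // 100 x foo, each calling bar once
}
```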

So that’s all. You can now use SystemTap to probe your PHP. I hope you come up with some useful scripts. Share them!

Bookmarks Revisited Part II: Daily Bookmarking

It’s been a long time since I wrote part I of the bookmarks revisited series. In the last two years, bookmarks changed a lot. They became part of Mercurial’s core functionality, and a lot of tools became bookmark-aware.

The current state of bookmarks

As of Mercurial 1.8, bookmarks are part of Mercurial’s core; you don’t have to activate the extension anymore. Bookmarks are supported by every major Mercurial hosting platform. Commands like hg summary or hg id will display bookmark information. In addition, the push and pull mechanism changed. I will go into details about this in Part III of the series.

It’s safe to say that, due to this exposure, bookmarks have become much more mature over the years. It’s time to take a look at how to use them.

Bookmark semantics

Bookmarks are pointers to commits. Think of a bookmark as a name for a specific commit. Unlike branches in Mercurial, bookmarks are not recorded in the changeset. They don’t have a history: if you delete them, they are gone forever.

Bookmarks were initially designed for short-lived branches, and that’s how I use them. It’s indeed possible to use them in different contexts, but I don’t. Please be aware that although they were initially intended to be similar to git branches, they often aren’t. They are not branches; they are bookmarks, and they should be used like you would use a bookmark in a book: if you advance to the next page, you move the bookmark (or it gets moved).

A bookmark can be active. Only one bookmark can be active at any time, but it’s okay for no bookmark to be active. If you have an active bookmark and you commit a new changeset, the bookmark will move to that commit. To make a bookmark active, update to it with hg update <name>. To deactivate it, just update to the current revision with hg update ..
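A typical short-lived-branch workflow with an active bookmark looks like this (the bookmark name is just an example):

```
$ hg bookmark feature-x     # create a bookmark at the working copy parent
$ hg update feature-x       # update to it, making it the active bookmark
$ hg commit -m "more work"  # the active bookmark moves to the new commit
$ hg update .               # deactivate the bookmark, staying on the same revision
```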

A bookmark can have a divergence marker. Bookmarks that have diverged will carry an @NAME suffix, for example test@default. Diverged bookmarks are created during push and pull and will be described in Part III.


I should…

blog more. Open topics: DTrace Part II, Mercurial Bookmarks Part II.