I recall when people first proposed writing a read-only filesystem for an internal project at work. While I cannot say much about what we implemented, I can at least say that it made, and still makes, sense to solve our problem by implementing a filesystem.
Filesystems can be written in many ways, and their implementation specifics and the problems they try to solve range from easy to very hard. Filesystems such as Btrfs, ext4 or ZFS take years or even decades to write and stabilize. They are implemented in kernel land, are hard to debug, and use sophisticated data structures for maximum performance and reliability. However, with the addition of FUSE in the 2.6.14 Linux kernel and its subsequent porting to other POSIX operating systems such as FreeBSD, OpenBSD and macOS, it became much easier to write and prototype filesystems. FUSE filesystems are userland processes that talk to a simple kernel driver, which redirects filesystem calls to the userland process when necessary. In our particular case we didn't need maximum performance, and we could rely heavily on kernel-side caching, which made writing a FUSE filesystem a reasonable choice.
To get a proof of concept, I decided to write a first version in Python. fusepy is a well-known and stable implementation. Writing the initial prototype was a pleasure: most things worked out of the box, and we knew our idea worked. However, filesystems aren't your standard piece of software and come with a set of performance characteristics that made fusepy and Python a good choice for a prototype but a rather mediocre choice for a production-ready implementation.

Most programs obtain data, do some processing and write the data somewhere; most of their time is spent in userland running the program. Filesystems are slightly different. A program like ls will try to obtain information about a list of files by issuing multiple, often hundreds of stat calls to the kernel, each causing a context switch into the kernel. Once the kernel has figured out that we are a FUSE filesystem, it hands the task of returning the file information to the FUSE kernel module. The FUSE module then looks into a cache and, if necessary, issues a request via IPC to the FUSE userland daemon and subsequently to the userland program. At that point we have already doubled our context switches compared to a regular kernel filesystem. Now that we have a stat request in userland, we try to answer it with as little effort as possible (we still have 99 other requests to go) and return it to the kernel, which returns the result via the virtual filesystem layer back to the userland process that issued the request, effectively leading to 100% context-switch overhead compared to traditional kernel filesystems. To make things worse, Python will also take the global interpreter lock in the meantime, allowing only one request to be served at a time. There is also an overhead in calling a Python function compared to a C function, which, given the number of requests, can add up significantly. In addition, Python needs to do some data-structure juggling.
While this hardly gives you the full picture as to why Python is not necessarily a perfect choice for a production-ready implementation, it's enough to say it's not optimal.
Choosing a language
Now that we have our prototype, we want to get to a production-ready version fast, but we also want to minimize our overhead to get good performance. At this point we want to consider reimplementing the Python prototype in a language that is more suitable for high-performance systems programming. What are our choices? C, C++, Go, Rust, Haskell and D are common competitors in the systems programming world. So let's see…
Without a doubt, C/C++ is a reasonable choice. They have by far the best support for FUSE, as libfuse is written in C, and they are great for high-performance applications. However, you will need to do your memory management yourself and deal with classes of potential errors that might not even exist in other languages, such as a multitude of undefined behaviors and certain types of overflows. In the end, while we all agreed they are conservative, reasonable choices, we opted for languages that we believed would get us to a working version faster.
Go seemed interesting but fell through quite fast upon the realization that calling from C into Go can incur a 10x overhead due to stack rewriting. This is caused by Go not following C calling conventions and using a different stack layout.
Haskell is a great choice, but due to a lack of expertise, the ramp-up time and so on, we decided against it.
Rust would certainly be a great choice nowadays. But the project I am talking about is a bit older, and back then Rust was not stable at all and could very well change behavior. So, in addition to learning a new language, we didn't want to deal with constantly changing APIs and language features. These days I would certainly reconsider using it.
D and the good parts
As mentioned in earlier blog posts, D feels like Python in many ways but has C-like performance characteristics. In addition, it's fairly mature, it uses C calling conventions, and we had expertise in using it. It works well on macOS and Linux, which is all we cared about (in fact, D works pretty well on Windows too). So we went for it. We knew from the beginning that we had to write a small wrapper around libfuse. D is very similar to C and C++ and really cares about interoperability, so writing a wrapper seemed an easy task, and in fact within a few days we had a working version that grew as we needed. Once we started rewriting the Python prototype on top of the new library, we were quite impressed by how similar the languages are and how fast we got to a working version. D's string handling and its built-in hash maps and dynamic arrays made it a pleasure to work with.
The not-so-good parts
One thing we noticed fairly quickly is that keeping the D wrapper and the C header files in sync can be painful. If the headers change, you will not notice unless you know they changed; the declarations can diverge and miss some important calls. We also had to add some wrappers in the standard library to get what we needed. Worse, if you make a mistake in the data layout, your program will compile fine, but it might randomly crash (that's the good case) or just behave strangely in rare cases, because you might have written data to a wrongly aligned field, and the C library will happily read whatever you wrote and interpret it as something else entirely (I can tell you, it's really fun when you get random results from your stat() syscall).
When porting to macOS, it turned out that macOS encodes whether the operating system supports 64-bit inodes in its C header files and depends heavily on ifdefs around it, which we simply could not port to D, as it (luckily) doesn't have a preprocessor. So we had to start creating unpleasant workarounds. Another issue we came across is D's shared attribute. The basic idea is that you have a transitive notion of shared data that can safely be shared across multiple threads. This works great as long as you assume thread-local storage. The D compiler and the runtime support all this by default; however, it's based on the assumption that the D runtime controls the threading and the data passed around…
A key difference
At this point it became apparent that there is a key difference between your standard D project and what we were trying to do. In most projects, your program is the main program and it calls into other libraries. In the FUSE world, your program is wrapped in the libfuse context and only interacts with the world through callbacks you handed to libfuse. At that point you pass control completely to FUSE, and your code only gets called when FUSE decides so. And worse, libfuse also controls the threading.
So back to shared. Well, we initialize all our data structures before libfuse starts threading and then pass them around. At that point we could either make everything shared and hope it somehow works (that turned out to break half of the code due to incompatible qualifiers in the D libraries), or just go ahead and declare everything __gshared, basically circumventing the whole shared business. De facto, everything in our code is shared anyway. So whenever we have to use interfaces that require shared qualifiers, we just cast back and forth. That wasn't really pretty, and we started to realize that this was just the tip of the iceberg.
One more bug comes to mind, on a slightly unrelated note: we tried D's contracts, which caused the compiler to crash. I wasn't necessarily impressed with discovering a compiler bug in fairly straightforward code. But well, those things happen.
The thing with the runtime
At that point we had everything reimplemented and it worked like a charm. So we decided to enable multithreading in libfuse. Suddenly our program started to crash instantly. We rallied all our D expertise: people who had worked on D itself, its libraries, alternative compiler implementations and so on, and tried to debug what the heck was going on. We soon found out that disabling garbage collection made the problem go away. While that is "okay" for a short-lived program, it's a not-so-okay workaround for a filesystem that potentially runs as long as your computer does.

So, more digging into druntime, and eventually, after days, we figured out that druntime didn't know about the threads the FUSE library created. So when the garbage collector comes along and tries to stop the world (as in, stop every thread but the current one) to perform a collection, it doesn't know about any thread but the current one and stops nothing. All the other threads happily keep writing data while the garbage collector runs. BOOM! However, libfuse doesn't tell you when and how many threads it will create, so how can you tell druntime about them? In the end, upon every call we simply check whether we know about the current thread, attach it if not, and correctly detach it later. There were also some macOS-specific problems around multithreading, as osxfuse tried to join certain threads from time to time which druntime still considered attached. A long and tough deep dive into osxfuse and core.thread followed, until eventually, with some help from the D folks, we figured out that we have to detach the thread and use pthread_cleanup to detach from druntime.
In the end, lessons were learned. More generally, embedding a runtime, in particular one with a garbage collector, into a project that gets called from C/C++ contexts leads to a lot of "interesting" and mostly undocumented issues.
A final note
In the end, I am not sure whether I would choose D again. Modern C++14 is fairly nice to write, and the amount of manual memory management in our case isn't that bad. However, now that we have solved all the runtime issues and have a fairly stable version, we are back to the fun and fast turnaround times we like. So overall there is still a net win in choosing D.
D is also moving more and more towards a safe but GC-less memory model, which is something we definitely want to investigate once more of the ref-counting and allocator code lands in upcoming D versions. I am still happy with D in general, but a bit less optimistic and more critical when choosing it than before, particularly when writing a filesystem.