Tuesday, July 10, 2007

SSHFS performance test

A few days ago, I ran some performance tests on SSHFS.

Let's talk, for a moment, about SSHFS.

From the SSHFS home page:

This is a filesystem client based on the SSH File Transfer Protocol. Since most SSH servers already support this protocol it is very easy to set up: i.e. on the server side there's nothing to do. On the client side mounting the filesystem is as easy as logging into the server with ssh.

The idea of sshfs was taken from the SSHFS filesystem distributed with LUFS, which I found very useful. There were some limitations of that codebase, so I rewrote it. Features of this implementation are:

* Based on FUSE (the best userspace filesystem framework for linux ;-)
* Multithreading: more than one request can be on its way to the server
* Allowing large reads (max 64k)
* Caching directory contents


Basically, SSHFS is a FUSE (Filesystem in Userspace) module.

To install it on Gentoo GNU/Linux:

# emerge -av sys-fs/sshfs-fuse


Let's mount a remote directory:

# sshfs user1@192.168.177.172:/home/user1 temp_dir

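SSHFS also accepts a number of mount options. As a sketch (the host, port, and paths below are just examples, and the snippet only prints the command it would run, since the example host is not reachable), a more robust mount might look like this:

```shell
#!/bin/sh
# Example values only; adjust to your own server.
remote="user1@192.168.177.172:/home/user1"
mnt="temp_dir"
mkdir -p "$mnt"
# -p 2222       : connect to a non-default SSH port
# -o reconnect  : re-establish the SSH connection if it drops
# -o idmap=user : map the remote uid/gid to the local user
cmd="sshfs -p 2222 -o reconnect -o idmap=user $remote $mnt"
echo "$cmd"
```

In practice you would just run the `sshfs` line directly instead of echoing it.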

To unmount the remote directory, use the command:

fusermount -u temp_dir


Hmm, amazingly simple ;), but what about network performance?

The following diagrams show the network performance:




The tests were performed on a 100 Mbit/s network with very low traffic.
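If you want to get rough throughput numbers yourself, one simple approach is to time writing and reading a large file on the mounted directory with dd. A sketch (the directory and file size are arbitrary choices, not necessarily what was used for the diagrams above):

```shell
#!/bin/sh
# Rough throughput sketch: point DIR at the sshfs mount (e.g. temp_dir)
# to measure SSHFS; it defaults to the current directory so the script
# can run anywhere.
DIR="${1:-.}"
# Write 16 MB of zeros; dd reports the elapsed time and rate on stderr.
dd if=/dev/zero of="$DIR/sshfs_testfile" bs=1M count=16 2>&1 | tail -n 1
# Read the same file back.
dd if="$DIR/sshfs_testfile" of=/dev/null bs=1M 2>&1 | tail -n 1
rm -f "$DIR/sshfs_testfile"
```

Keep in mind that a cached mount may serve the read back from the local page cache, so unmount and remount (or use a fresh file) between runs for honest numbers.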