Linux has been multitasking from the very beginning: a number
of programs interact and run continuously. It is therefore important
to keep a file structure that everyone can agree on, so that the system
finds data where it expects to. Historically there have been so many
different standards that it was confusing, and compatibility was
maintained using symbolic links, which confused the issue even further
until the structure ended up looking like a maze.
In the case of Linux a standard was fortunately agreed on early,
called the File System Standard (FSSTND), which today is used
by all main Linux distributions.
Later it was decided to make a successor that should also support
operating systems other than Linux, called
the Filesystem Hierarchy Standard (FHS), currently at version 2.1.
This standard is under continuous development and will
soon be adopted by Linux distributions.
I recommend not trying to roll your own structure, as a lot of
thought has gone into the standards and many software packages
comply with them. Instead you can read more about this at the
FHS home page.
This HOWTO endeavours to comply with FSSTND
and will follow FHS when distributions become available.
The various parts of FSSTND have different requirements regarding
speed, reliability and size; for instance, losing root is a pain
but is easily recovered, whereas losing
/var/spool/mail is a
rather different issue. Here is a quick summary of some essential
parts and their properties and requirements. Note that this is
just a guide; there can be binaries in
lib directories, libraries in bin directories,
and so on.
Swap
Speed: Maximum! Though if you rely too much on swap you
should consider buying some more RAM. Note, however, that on
many old Pentium PC motherboards the cache will not work on RAM above 128 MB.
Size: Similar to RAM. Quick and dirty algorithm:
just as for tea, 16 MB for the machine and 2 MB for each user. The smallest
kernels run in 1 MB but that is tight; use 4 MB for general work and light
applications, 8 MB for X11 or GCC, or 16 MB to be comfortable.
(The author is known to brew a rather powerful cuppa tea...)
Some suggest that swap space should be 1-2 times the size of the
RAM, pointing out that the locality of the programs determines how
effective your added swap space is. Note that using the same
algorithm as for 4BSD is slightly incorrect as Linux does not
allocate space for pages in core.
A more thorough approach is to consider swap space plus RAM as
your total working set, so if you know how much space you will
need at most, you subtract the physical RAM you have and that
is the swap space you will need.
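The two rules of thumb above boil down to simple arithmetic, which can be sketched in a few lines of shell. All the figures below are hypothetical examples, not recommendations; plug in your own user count and your own estimate of the peak working set.

```shell
users=8
ram_mb=$((16 + 2 * users))        # "16 MB for the machine and 2 MB per user"
workset_mb=160                    # estimated peak working set (RAM + swap in use)
swap_mb=$((workset_mb - ram_mb))  # total working set minus physical RAM
echo "RAM estimate:  ${ram_mb} MB"
echo "Swap estimate: ${swap_mb} MB"
```

Treat the result as a starting point only; locality of the programs, memory leaks and locked pages (all discussed in this section) argue for rounding generously upwards.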
There is also another reason to be generous when dimensioning
your swap space: memory leaks. Ill-behaved programs that do not free
the memory they allocate for themselves are said to have a memory leak.
The allocation remains even after the offending program has stopped,
so this is a source of memory consumption. Once all physical RAM and
swap space are exhausted the only solution is to reboot and start over.
Thankfully such programs are not too common, but should you come across
one you will find that extra swap space will buy you extra time between
reboots.
Also remember to take into account the type of programs you use.
Some programs with large working sets, such as finite element
modelling (FEM), have huge data structures loaded in RAM rather than
working explicitly on disk files. Data- and computing-intensive
programs like this will cause excessive swapping if you have less
RAM than they require.
Other types of programs can lock their pages into RAM. This can be
for security reasons, preventing copies of data reaching a swap device
or for performance reasons such as in a real time module. Either way,
locking pages reduces the remaining amount of swappable memory and
can cause the system to swap earlier than otherwise expected.
In man 8 mkswap it is explained that each swap partition can
be a maximum of just under 128 MB in size for 32-bit machines
and just under 256 MB for 64-bit machines.
This, however, changed with kernel 2.2.0, after which the limit is 2 GB.
The man page has been updated to reflect this change.
Reliability: Medium. When it fails you know it pretty quickly and
failure will cost you some lost work. You save often, don't you?
- Note 1
Linux offers the possibility of interleaved swapping
across multiple devices, a feature that can gain you much. Check out
man 8 swapon for more details. However, software RAID
swap across multiple devices adds more overhead than you gain.
Your /etc/fstab file might look like this:

/dev/sda1 swap swap pri=1 0 0
/dev/sdc1 swap swap pri=1 0 0

Remember that the fstab file is very sensitive to the formatting
used; read the man page carefully and do not just cut and paste
the lines above.
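Once the interleaved swap areas are enabled you can verify their sizes and priorities at run time; the Priority column should show the equal pri= values set in fstab. This check needs no special privileges:

```shell
# List the active swap areas with size, usage and priority.
# The header line is printed even when no swap is configured.
cat /proc/swaps
```

The same information is also available via swapon (see man 8 swapon for your version's exact options).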
- Note 2
Some people use a RAM disk for swapping, or for some other
file systems. However, unless you have some very unusual requirements
or setups you are unlikely to gain much from this, as it cuts into
the memory available for caching and buffering.
- Note 2b
There is one exception: on a number of badly designed
motherboards the on-board cache memory is not able to cache all the
RAM that can be addressed. Many older motherboards could accept 128 MB
of RAM but only cache the lower 64 MB. In such cases it would improve
performance to use the upper (uncached) 64 MB of RAM for RAM disk
based swap or other temporary storage.
Temporary Storage (/tmp)
Speed: Very high. On a separate disk/partition this will
reduce fragmentation generally, though
ext2fs handles fragmentation rather well.
Size: Hard to tell. Small systems are easy to run with just
a few MB, but these are notorious hiding places for stashing files
away from prying eyes and quota enforcement, and can grow without
control on larger machines. Suggested: small home machine: 8 MB,
large home machine: 32 MB, small server: 128 MB, and large
machines up to 500 MB (the machine used by the author at work has 1100
users and a 300 MB
/tmp directory). Keep an eye on these directories,
not only for hidden files but also for old files. Also be prepared that
these partitions might be the first you have to resize.
Reliability: Low. Often programs will warn or fail gracefully when
these areas fail or are filled up. Random file errors will of course
be more serious, no matter what file area this is.
Files: Mostly short files, but there can be a huge number of
them. Normally programs delete their old
tmp files, but if an
interruption occurs they could survive. Many distributions have a policy
regarding cleaning out
tmp files at boot time; you might want to
check out what your setup does.
In FSSTND there is a note about putting
/tmp on a RAM disk. This, however, is not recommended for the same reasons
as stated for swap. Also, as noted earlier, do not use flash RAM
drives for these directories. One should also keep in mind that some
systems are set to clean
tmp areas automatically on rebooting.
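Such a boot-time cleanup is often little more than a find one-liner. A minimal sketch, demonstrated on a scratch directory created with mktemp so it is safe to experiment with; point it at your real tmp area only once you trust the effect, and note that the 7-day cutoff is an arbitrary example:

```shell
TMPAREA=$(mktemp -d)                         # scratch stand-in for /tmp
touch "$TMPAREA/fresh.tmp"
touch -d '10 days ago' "$TMPAREA/stale.tmp"  # GNU touch date syntax
# Delete regular files not modified for more than 7 days.
find "$TMPAREA" -type f -mtime +7 -exec rm -f {} \;
ls "$TMPAREA"                                # only fresh.tmp survives
```

The same find invocation, run by hand, is also a good way to spot the old and hidden files mentioned above before deciding to delete anything.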
Older systems had a
/usr/tmp, but this is no longer
recommended, and for historical reasons a symbolic link now makes it
point to one of the other tmp areas.
Spool Areas (/var/spool/mail, /var/spool/news)
Speed: High, especially on large news servers. News transfer
and expiring are disk intensive and will benefit from fast drives.
Print spools: low. Consider RAID0 for news.
Size: For news/mail servers: whatever you can afford. For
single user systems a few MB will be sufficient if you read news
continuously. Joining a list server and taking a holiday is, on the
other hand, not a good idea. (Again, the machine I use at work
has 100 MB reserved for the entire spool area.)
Reliability: Mail: very high, news: medium, print spool: low. If
your mail is very important (isn't it always?) consider RAID for
your mail area.
Files: Usually a huge number of files around a few
KB in size. Files in the print spool can, on the other hand, be
few but quite sizable.
Some of the news documentation suggests putting all the
.overview files on a drive separate from the news
files; check out all the news FAQs for more information.
Typical size is about 3-10 percent of the total news spool size.
Home Directories (/home)
Speed: Medium. Although many programs use
/tmp for temporary
storage, others such as some news readers frequently update files in the
home directory which can be noticeable on large multiuser systems. For
small systems this is not a critical issue.
Size: Tricky! On some systems people pay for storage, so this
is usually a question of finance. Some large systems (such as free
Internet services offering mail, news and WWW accounts)
run successfully with a suggested limit of 100 KB per user and 300 KB as
the enforced maximum. Commercial ISPs typically offer about 5 MB in their
standard subscription packages.
If, however, you are writing books or doing design work, the
requirements balloon quickly.
Reliability: Losing /home on a single user machine is
annoying, but when 2000 users call to tell you their home
directories are gone it is more than just annoying. For some, their
livelihood relies on what is here. You do regular backups, of course?
Files: Equally tricky. The minimum setup for a single user
tends to be a dozen files, 0.5 - 5 KB in size. Project related files
can be huge, though.
You might consider RAID for either speed or
reliability. If you want extremely high speed and reliability you
might be looking at other operating systems and hardware platforms anyway.
(Fault tolerance etc.)
Web browsers often use a local cache to speed up browsing, and
this cache can take up a substantial amount of space and cause much disk
activity. There are many ways of avoiding this kind of performance hit;
for more information see the relevant sections.
Users often tend to use up all available space on the
/home partition. The Linux quota subsystem is capable of
limiting the number of blocks and the number of inodes a single user
ID can allocate, on a per-filesystem basis. See the
Linux Quota mini-HOWTO by
Albert M.C. Tam
(bertie (at) scn.org)
for details on setup.
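As a quick sketch of what that setup involves, quotas are typically switched on per filesystem via a mount option in /etc/fstab; the device name below is an example only, and your filesystem type may differ:

```
# /etc/fstab entry enabling per-user quotas on /home
/dev/sdb1   /home   ext2   defaults,usrquota   1 2
```

After remounting, the quota files are built with quotacheck, enforcement is started with quotaon, and per-user limits are edited with edquota; the mini-HOWTO above covers the details.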
Main Binaries
Speed: Low. Often data is bigger than the programs, which are
demand loaded anyway, so this is not speed critical. Witness the
success of live file systems on CD ROM.
Size: The sky is the limit, but 200 MB should give you most of
what you want for a comprehensive system. A big system, for software
development or a multi-purpose server, should perhaps reserve 500 MB
both for installation and for growth.
Reliability: Low. This is usually mounted under root, where all
the essentials are collected. Nevertheless, losing all the binaries
is a pain.
Files: Variable, but usually of the order of 10 - 100 KB.
Libraries
Speed: Medium. These are large chunks of data loaded often,
ranging from object files to fonts, all susceptible to bloating. Often
these are also loaded in their entirety, so speed is of some use here.
Size: Variable. This is for instance where word processors
store their immense font files. The few that have given me feedback on
this report about 70 MB in their various lib directories.
A rather complete Debian 1.2 installation can take as much as
250 MB, which can be taken as a realistic upper limit.
The following are some of the largest disk space consumers:
GCC, Emacs, TeX/LaTeX, X11 and Perl.
Reliability: Low. See the point above on main binaries.
Files: Usually large, with many of the order of 1 MB in size.
For historical reasons some programs keep executables in
the lib areas. One example is GCC, which keeps some huge binaries in
its lib hierarchy.
Boot
Speed: Quite low: after all, booting doesn't happen that often
and loading the kernel is just a tiny fraction of the time it takes
to get the system up and running.
Size: Quite small; a complete image with some extras fits
on a single floppy, so 5 MB should be plenty.
Reliability: High. See the section below on Root.
- Note 1
The most important part about the Boot partition is that
on many systems it must reside below cylinder 1023. This is a
BIOS limitation that Linux cannot get around.
- Note 1a
The above is not necessarily true for recent IDE systems,
nor for any SCSI disks. For more information check the latest
Large Disk HOWTO.
- Note 2
Recently a new boot loader has been written that overcomes
the 1023 cylinder limit. For more information check out its home page.
Root
Speed: Quite low: only the bare minimum is here, much of
which is only run at startup time.
Size: Relatively small. However, it is a good idea to keep
some essential rescue files and utilities on the root partition, and
some keep several kernel versions. Feedback suggests about 20 MB would
be sufficient.
Reliability: High. A failure here will possibly cause a fair bit
of grief and you might end up spending some time rescuing your boot
partition. With some practice you can of course do this in an hour or
so, but I would think that if you have practice doing this, you are
also doing something wrong.
Naturally you do have a rescue disk? And of course it has been updated
since you did your initial installation? There are many ready-made rescue
disks as well as rescue disk creation tools you might find valuable.
Presumably investing some time in this saves you from becoming a
root rescue expert.
- Note 1
If you have plenty of drives you might consider putting
a spare emergency boot partition on a separate physical drive. It will
cost you a little bit of space but if your setup is huge the time saved,
should something fail, will be well worth the extra space.
- Note 2
For simplicity, and also in case of emergencies,
it is not advisable to put the root partition on a RAID level 0 system.
Also, if you use RAID for your boot partition you have to remember to
have the md option turned on for your emergency kernel.
- Note 3
For simplicity it is quite common to keep Boot and Root
on the same partition. If you do that, then
in order to boot from LILO it is important that the
essential boot files reside wholly within cylinder 1023. This includes
the kernel as well as the files in the boot directory.
DOS etc.
At the danger of sounding heretical, I have included this little section
about something many reading this document have strong feelings about.
Unfortunately many hardware items come with setup and maintenance tools
based around those systems, so here goes.
Speed: Very low. The systems in question are not famed for speed,
so there is little point in using prime quality drives. Multitasking or
multi-threading are not available so the command queueing facility found
in SCSI drives will not be taken advantage of. If you have an old IDE
drive it should be good enough. The exception is to some degree Win95
and more notably NT which have multi-threading support which should
theoretically be able to take advantage of the more advanced features
offered by SCSI devices.
Size: The company behind these operating systems
is not famed for writing tight
code, so you have to be prepared to spend a few tens of MB depending on
which version of DOS or Windows you install. With an old version of
DOS or Windows you might fit it all in 50 MB.
Reliability: Ha-ha. As the chain is no stronger than the weakest link,
you can use any old drive. Since the OS is more likely to scramble itself
than the drive is likely to self destruct you will soon learn the
importance of keeping backups here.
Put another way: "Your mission, should you choose to accept it,
is to keep this partition working. The warranty will self destruct
in 10 seconds..."
Recently I was asked to justify my claims here. First of all I am not
calling DOS and Windows sorry excuses for operating systems. Secondly
there are various legal issues to be taken into account. Saying there
is a connection between the last two sentences is merely the raving of the
paranoid. Surely. Instead I shall offer the esteemed reader a few
key words: DOS 4.0, DOS 6.x and various drive compression tools that
shall remain nameless.
Naturally the faster the better but often the happy installer of Linux
has several disks of varying speed and reliability so even though this
document describes performance as 'fast' and 'slow' it is just a rough
guide since no finer granularity is feasible. Even so there are a few
details that should be kept in mind:
This is really a rather woolly mix of several terms: CPU load,
transfer setup overhead, disk seek time and transfer rate. It is in
the very nature of tuning that there is no fixed optimum, and in most
cases price is the dictating factor. CPU load is only significant for
IDE systems, where the CPU does the transfer itself,
but is generally low for SCSI; see the SCSI documentation
for actual numbers. Disk seek time is also small, usually in the
millisecond range. This is not a problem if you use command
queueing on SCSI, where you overlap commands to keep the bus busy
all the time. News spools are a special case, consisting of a huge
number of normally small files, so seek time can become significant.
There are two main parameters that are of interest here:

Seek time is usually specified as the average time taken for the
read/write head to seek from one track to another. This parameter
is important when dealing with a large number of small files, such
as found in spool areas.
There is also the extra delay before the desired sector rotates
into position under the head. This rotational latency depends on the
angular velocity of the drive, which is why this parameter is quite
often quoted for a drive. Common values are 4500, 5400 and 7200 RPM
(rotations per minute). Higher RPM reduces this delay, but at a
substantial cost. Also, drives working at 7200 RPM have been known to
be noisy and to generate a lot of heat, a factor that should be kept
in mind if you are building a large array or "disk farm". Very recently,
drives working at 10000 RPM have entered the market, and here the
cooling requirements are even stricter and minimum figures for air
flow are given.
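The average rotational delay is simply the time for half a revolution, so it follows directly from the RPM figure. A quick check of the speeds quoted above (awk is used here only for the floating point arithmetic):

```shell
# Average rotational latency: latency_ms = 0.5 * 60000 / RPM
# (half a revolution, expressed in milliseconds)
for rpm in 4500 5400 7200 10000; do
    awk -v rpm="$rpm" 'BEGIN { printf "%5d RPM: %.1f ms\n", rpm, 0.5 * 60000 / rpm }'
done
```

So going from 5400 to 7200 RPM buys you a bit over a millisecond per access on average, which matters mostly for the many-small-files workloads described above.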
Transfer speed is usually specified in megabytes per second.
This parameter is important when handling large files that
have to be transferred. Library files, dictionaries and image files
are examples of this. Drives featuring a high rotation speed also
normally have fast transfers, as transfer speed is proportional to
angular velocity for the same sector density.
It is therefore important to read the specifications for the drives
very carefully, and note that the maximum transfer speed quite often
is quoted for transfers out of the on-board cache (burst speed)
and not directly from the platter (sustained speed).
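For the large-file case the arithmetic is equally simple: transfer time is file size divided by sustained rate. A sketch with a hypothetical 70 MB set of library files at a few illustrative rates (these numbers are examples, not measurements of any particular drive):

```shell
# t = size / rate; all figures below are illustrative only.
awk 'BEGIN {
    size_mb = 70
    for (rate = 5; rate <= 20; rate *= 2)
        printf "%2d MB/s: %4.1f s\n", rate, size_mb / rate
}'
```

Run the same calculation with the sustained (not burst) figure from your drive's data sheet to see what a given upgrade would actually buy you.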
See also the section on
Power and Heating.
Naturally no-one would want low reliability disks but one might be
better off regarding old disks as unreliable. Also for RAID purposes
(See the relevant information) it is suggested to use a mixed set of disks
so that simultaneous disk crashes become less likely.
So far I have had only one report of total file system failure but
here unstable hardware seemed to be the cause of the problems.
Disks are cheap these days, yet people still underestimate the
value of the contents of the drives. If you need higher reliability,
make sure you replace old drives and keep spares. It is not unusual
for drives to work more or less continuously for years and years, but
what often kills a drive in the end is power cycling.
The average file size is important in order to decide the most
suitable drive parameters. A large number of small files makes the
average seek time important whereas for big files the transfer speed
is more important. The command queueing in SCSI devices is very
handy for handling large numbers of small files, while for raw transfer
EIDE is not too far behind SCSI and is normally much cheaper.