The following are miscellaneous security guidelines that I couldn't seem to fit anywhere else:
Have your program check at least some of its assumptions before it uses them (e.g., at the beginning of the program). For example, if you depend on the ``sticky'' bit being set on a given directory, test it; such tests take little time and could prevent a serious problem. If you're worried about the execution time of performing some test on every call, at least perform the test at installation time, or better still, at application start-up.
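For instance, a program that depends on a sticky world-writable spool directory could verify that assumption at start-up, along these lines (a minimal sketch; the directory path is just a hypothetical example):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>

    int main(void) {
        const char *dir = "/var/spool/myapp";   /* hypothetical directory */
        struct stat st;

        /* Check the assumption before relying on it. */
        if (stat(dir, &st) != 0) {
            perror("stat");
            exit(EXIT_FAILURE);
        }
        if (!S_ISDIR(st.st_mode) || !(st.st_mode & S_ISVTX)) {
            fprintf(stderr, "%s is not a sticky directory; refusing to run\n", dir);
            exit(EXIT_FAILURE);
        }
        /* ... rest of the program ... */
        return 0;
    }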
If you have a built-in scripting language, it may be possible for the language to set an environment variable which adversely affects the program invoking the script. Defend against this.
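One possible defense, sketched below under the assumption that the scripting engine runs in-process (run_script() is only a hypothetical stand-in for it), is to snapshot the environment before the script runs and restore it afterwards, so the script cannot leave behind variables that alter the host program's later behaviour:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    extern char **environ;

    static void run_script(void) {
        /* hypothetical: the embedded script maliciously sets a variable */
        setenv("LD_PRELOAD", "/tmp/evil.so", 1);
    }

    int main(void) {
        /* Take a deep copy of the current environment. */
        size_t n = 0;
        while (environ[n]) n++;
        char **saved = malloc((n + 1) * sizeof *saved);
        for (size_t i = 0; i < n; i++) saved[i] = strdup(environ[i]);
        saved[n] = NULL;

        run_script();

        /* Discard whatever the script did to the environment. */
        clearenv();
        for (size_t i = 0; i < n; i++) putenv(saved[i]);

        printf("LD_PRELOAD after restore: %s\n",
               getenv("LD_PRELOAD") ? getenv("LD_PRELOAD") : "(unset)");
        return 0;
    }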
If you need a complex configuration language, make sure the language has a comment character and include a number of commented-out secure examples. Often '#' is used for commenting, meaning ``the rest of this line is a comment''.
If possible, don't create setuid or setgid root programs; make the user log in as root instead.
Sign your code. That way, others can check to see if what's available was what was sent.
Consider statically linking secure programs. This counters attacks on the dynamic link library mechanism by making sure that the secure programs don't use it. There are several downsides to this, however. It is likely to increase disk and memory use (from multiple copies of the same routines). Even worse, it makes updating libraries (e.g., for security vulnerabilities) more difficult: on most systems statically linked copies won't be updated automatically and have to be tracked and fixed separately.
When reading over code, consider all the cases where a match is not made. For example, if there is a switch statement, what happens when none of the cases match? If there is an ``if'' statement, what happens when the condition is false?
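As a small illustration, a switch statement can always carry a default case (and an ``if''/``else if'' chain a final ``else'') so that an unmatched value fails safely instead of silently doing nothing; handle_request() here is just a hypothetical example:

    #include <stdio.h>
    #include <stdlib.h>

    enum request { REQ_READ, REQ_WRITE };

    static void handle_request(enum request r) {
        switch (r) {
        case REQ_READ:
            puts("read");
            break;
        case REQ_WRITE:
            puts("write");
            break;
        default:
            /* The ``no match'' case: refuse rather than guess. */
            fprintf(stderr, "unexpected request %d; aborting\n", (int)r);
            abort();
        }
    }

    int main(void) {
        handle_request(REQ_READ);
        handle_request((enum request)42);   /* deliberately out of range */
        return 0;
    }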
Merely ``removing'' a file doesn't eliminate the file's data from a disk; on most systems this simply marks the content as ``deleted'' and makes it eligible for later reuse, and often data is at least temporarily stored in other places (such as memory, swap files, and temporary files). Indeed, against a determined attacker, writing over the data isn't enough. A classic paper on the problems of erasing magnetic media is Peter Gutmann's paper ``Secure Deletion of Data from Magnetic and Solid-State Memory''. A determined adversary can use other means, too, such as monitoring electromagnetic emissions from computers (military systems have to obey TEMPEST rules to overcome this) and/or surreptitious attacks (such as monitors hidden in keyboards).
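As an illustration only (and, as just noted, not sufficient against a determined attacker), a program can at least overwrite a file's contents before unlinking it, which raises the bar for casual recovery of ``deleted'' data; scrub_and_unlink() is a hypothetical helper name:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static int scrub_and_unlink(const char *path) {
        struct stat st;
        int fd = open(path, O_WRONLY);
        if (fd < 0)
            return -1;
        if (fstat(fd, &st) != 0) { close(fd); return -1; }

        /* Overwrite the existing contents with zeros. */
        char zeros[4096];
        memset(zeros, 0, sizeof zeros);
        off_t left = st.st_size;
        while (left > 0) {
            ssize_t n = write(fd, zeros,
                              left > (off_t)sizeof zeros ? sizeof zeros : (size_t)left);
            if (n <= 0) { close(fd); return -1; }
            left -= n;
        }
        fsync(fd);          /* push the overwrite to the device */
        close(fd);
        return unlink(path);
    }

    int main(int argc, char **argv) {
        if (argc == 2 && scrub_and_unlink(argv[1]) != 0)
            perror("scrub_and_unlink");
        return 0;
    }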
When fixing a security vulnerability, consider adding a ``warning'' to detect and log an attempt to exploit the (now fixed) vulnerability. This will reduce the likelihood of a successful attack, especially if there's no way for an attacker to predetermine whether the attack will work, since it exposes an attack in progress. This also suggests that exposing the version of a server program before authentication is usually a bad idea for security, since doing so makes it easy for an attacker to use only attacks that would work. Some programs make it possible for users to intentionally ``lie'' about their version, so that attackers will use the ``wrong attacks'' and be detected.
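As a sketch, assuming a server that once had a length-handling vulnerability (MAX_NAME_LEN and check_name() are hypothetical names), the fixed code could reject the old exploit pattern and log a warning through syslog so that an attack in progress becomes visible:

    #include <string.h>
    #include <syslog.h>

    #define MAX_NAME_LEN 64   /* the fixed code now enforces this limit */

    static int check_name(const char *name, const char *peer) {
        if (strlen(name) >= MAX_NAME_LEN) {
            /* Over-long names once triggered a buffer overflow here. */
            syslog(LOG_WARNING,
                   "possible exploit attempt from %s: name length %zu",
                   peer, strlen(name));
            return -1;   /* reject the request */
        }
        return 0;
    }

    int main(void) {
        openlog("exampled", LOG_PID, LOG_DAEMON);
        check_name("a-perfectly-ordinary-name", "192.0.2.1");
        closelog();
        return 0;
    }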