Linux Security Best Practices

Introduction

Security is crucial to any environment, whether the systems are running in a local office or in a remote data center. It is also important to note that deploying to a cloud service does not eliminate the need to be concerned about security. An ever-increasing number of systems are compromised every day. Adhering to good security practices is a first step in protecting your servers and data.

Enable Firewall

Some Linux distributions, such as Ubuntu, do not enable the local firewall by default. This can pose a significant security risk if the server is directly connected to the public Internet. It is recommended that the local Linux firewall, iptables, always be enabled.

The following command will show if any rules are currently loaded.

iptables -L -n

To enable the local firewall on Ubuntu, first add a rule to allow remote SSH connectivity.

ufw allow 22/tcp

The firewall can then be enabled.

ufw enable

CentOS and Red Hat Enterprise Linux (RHEL) enable the iptables firewall by default and include a rule allowing SSH connectivity.

Even with local Linux firewall rules in place, it is still advisable to route all public network traffic through a centralized hardware (or software) firewall. Local operating system security is never a suitable replacement for solid network-level security.

If possible, always restrict firewall rules to only hosts and network segments requiring access.
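
For example, on Ubuntu the SSH rule shown earlier can be narrowed to a trusted management subnet instead of the entire Internet (the 203.0.113.0/24 network below is only a placeholder):

ufw allow from 203.0.113.0/24 to any port 22 proto tcp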

Restrict Root SSH Access

It is recommended that remote root access over SSH be disabled. This prevents a brute-force attack from compromising the root account and encourages users to log into the server with their own individual user accounts. User accountability can be invaluable.

To restrict remote root access, first open the SSH server configuration file with your preferred text editor.

vi /etc/ssh/sshd_config

Uncomment the line with PermitRootLogin and set the value to no.

PermitRootLogin no

Security can be further improved by explicitly restricting which users or groups can log into the server over SSH. This can be accomplished by adding the AllowUsers or AllowGroups keywords to the sshd_config file followed by a list of user accounts separated by spaces.

AllowUsers user1 user2

Remember to restart the SSH service for the changes to take effect.

service sshd restart
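
On distributions that use systemd, the equivalent command is shown below; note that the unit is named sshd on CentOS/RHEL and ssh on Debian and Ubuntu.

systemctl restart sshd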

SSH Key Authentication

Most users rely on username and password credentials to authenticate when connecting to a host. A strict password policy can help ensure relatively strong passwords are used, but SSH keys can be used to improve security further.

First generate a public and private SSH keypair on the client computer.

ssh-keygen -t rsa -b 4096

While setting a passphrase on the private key is optional, it is highly advised. The public SSH key can now be copied to the server.

ssh-copy-id user@hostname

This will append the public key to the ~/.ssh/authorized_keys file on the SSH server.

Note: It is important that the ~/.ssh directory and related files have strict file permissions and proper ownership.
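
For example, the following commands apply commonly recommended permissions (run as the account that owns the files):

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys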

Enable SELinux

Red Hat-based distributions such as RHEL, CentOS, and Fedora have Security-Enhanced Linux (SELinux) enabled by default. This can be confirmed with the following command.

sestatus

Service management can be more complicated with SELinux enabled; however, SELinux should not be disabled simply because it is inconvenient. There are many online resources explaining how to manage services while SELinux is enabled.
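
As a small example, a web application that needs to make outbound network connections can often be accommodated by enabling an SELinux boolean rather than disabling SELinux entirely. The current mode can be checked with getenforce; the boolean below is one common case, not a universal fix.

setsebool -P httpd_can_network_connect on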

Apply Software Updates

It is always important to keep your software updated. This reduces the likelihood that a program or service will be exploited due to an unforeseen bug.

YUM is used to update the packages on a CentOS or RHEL distribution.

yum -y update

The APT package management utility is used to update a Debian or Ubuntu based distribution.

apt-get update && apt-get upgrade

Any running services that are updated should be restarted. A system reboot guarantees that all necessary services are restarted and that any kernel updates are applied.
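
To help identify which processes are still using old libraries after an update, the needs-restarting utility from the yum-utils package can be run on CentOS/RHEL. On Debian and Ubuntu, the presence of the file /var/run/reboot-required indicates that a reboot is needed.

needs-restarting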

You should also regularly review any software running on the server that was not installed using the distribution package manager. A good example might include the Oracle Java Runtime Environment or Development Kit. This also includes applications that use third-party plugins such as WordPress, Drupal, Joomla, Elasticsearch, etc. Poorly maintained or out-of-date plugins are frequently responsible for compromised servers.

Disable Unnecessary Services

Any services not being actively used should be disabled. It is even better to entirely remove or uninstall the software when possible. When installing a new distribution of Linux, it is usually a good idea to start with the minimal install option and add only the software packages needed.

Currently running processes can be viewed with the ps command.

ps auxww

The netstat command will show which processes are bound to which local IP addresses and ports.

netstat -tulpn
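
On newer distributions where the net-tools package is not installed by default, the ss utility from iproute2 provides the same information.

ss -tulpn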

The chkconfig command will show which services are enabled at boot on CentOS 6 and Red Hat Enterprise Linux 6.

chkconfig --list | grep on
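
On systemd-based releases such as CentOS 7 and RHEL 7, the enabled services can be listed with systemctl instead.

systemctl list-unit-files --type=service --state=enabled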

Unnecessary services can be stopped once identified.

service nginx stop

The service should be prevented from starting the next time the system boots.

CentOS 6 or Red Hat Enterprise Linux 6:

chkconfig nginx off

CentOS 7 or Red Hat Enterprise Linux 7:

systemctl disable nginx

Ubuntu or Debian:

update-rc.d nginx disable

Create Users for People and Services

A user should never log into a server directly as the root user. Each user logging into the server should be required to use their own individual user account.

This will help prevent a user or application from unintentionally performing a dangerous task that can affect the entire system. It is recommended that sudo or other tools be used to temporarily elevate a user’s permissions when performing administrative tasks.

In addition to reducing unintentional actions, requiring users to log into a server with their own accounts provides a level of accountability through an audit trail. It becomes much easier to identify which users are currently logged into the system or have recently logged into it.

  • who and w will show which users are currently logged into the system.
  • last will show users previously logged into the system along with their login time and duration.

User connection history and use of sudo are also logged to /var/log/secure on CentOS/RHEL and to /var/log/auth.log on Debian and Ubuntu.

It is also recommended that services run as their own non-privileged system user. This will reduce the impact the service can have on the system in case of a bug or exploit. The following useradd command option will create a system account that is locked by default.

-r, --system                  create a system account

The system user's shell can also be set to /sbin/nologin to prevent interactive logins.

-s, --shell SHELL             login shell of the new account
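
For example, the following command creates a locked system account with no login shell for a hypothetical service (the account name myservice is only a placeholder):

useradd -r -s /sbin/nologin myservice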

Process Isolation

Isolating processes can reduce the likelihood of a compromised process affecting the entire environment. Process isolation can be achieved by spreading services across several physical servers, virtual machines, or within a single environment using chroot or Linux Containers (LXC).

LXC allows for operating system-level virtualization. The operating system kernel manages each process, or group of processes, within an isolated user space.

LXC has been gaining popularity through projects such as Docker and CoreOS's Rocket (with its App Container specification), which offer automation and management frameworks built on Linux container technology.
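
As a minimal illustration, assuming Docker is installed and using the public nginx image, a service can be started inside its own isolated container:

docker run -d --name web nginx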

It is also worth noting that further isolation can be achieved by separating hosts across network subnets. Web servers that need public Internet access likely do not need to share the same exposed subnet as backend database servers. The backend servers should reside on a private subnet with extra restrictions in place.

Encrypt Network Traffic

Any sensitive data sent over the public Internet should be encrypted. SSH is the most common method of remotely managing a Linux server, but a VPN can be used to further secure an environment. Linux supports a variety of VPN options, including L2TP/IPsec and SSL/TLS VPNs.

When services (HTTP, FTP, and SMTP, for example) are exposed to the public Internet, the service traffic should always be encrypted by using SSL/TLS to protect sensitive data.

Note: SSL (Secure Sockets Layer) is the predecessor of TLS (Transport Layer Security). Historically, SSL versions progressed to version 3.0. TLS version 1.0 was then released as an upgrade to SSL 3.0. Modern references to SSL or SSL/TLS typically refer to the latest version of TLS.

SSL/TLS utilizes a public key infrastructure, or PKI, to encrypt data. Nearly all modern services provide basic support for SSL/TLS. In the rare circumstance where native SSL/TLS is not provided, an intermediate service can be used to handle encryption and proxy requests to a protected node that has no direct public access. Here is a list of various services that can act as a secure proxy.

  • Stunnel is a popular SSL/TLS tunneling service.
  • Both the Nginx and Apache web servers support TLS and can act as proxies.
  • Stunnel or Stud can be used with the popular open source load balancer, HAProxy.
  • While not ideal for persistent services, OpenSSH can also be used as a tunnel and proxy (see the example below).
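
For example, OpenSSH can forward a local port through a bastion host to a backend service that has no direct public access (the hostnames and ports below are placeholders):

ssh -L 8443:backend.internal:443 admin@bastion.example.com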

SSL keys that are 2048 bits or longer are considered secure. Key lengths of 1024 bits or less should always be avoided when generating an SSL key. Larger keys can impact performance and response time because they take longer to process, but modern processors are typically up to the task.

Here is how to generate a 2048-bit private key using OpenSSL.

openssl genrsa -aes128 -out server.key 2048

A Certificate Signing Request (CSR) can be generated using the above private key. The CSR is then usually sent to a Certificate Authority (CA), which signs it and returns a public SSL certificate.

openssl req -new -key server.key -out server.csr

A self-signed public SSL certificate can also be created from the private key.

openssl req -new -key server.key -x509 -days 365 -out server.crt
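
The contents of the resulting certificate, including its subject and validity dates, can be reviewed with the following command.

openssl x509 -in server.crt -noout -text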

Monitor Server Logs

Log files are crucial when it comes to security. Not only are they useful when troubleshooting service issues, they are also necessary to identify whether a system is being attacked and to determine when and how a system was compromised. Most Linux logs are located under the /var/log directory, but some services may store their logs in their own data directories.

The logs should be monitored regularly and also stored off the server to preserve their integrity. Luckily, there are a number of software solutions available to assist with these tasks.

  • Rsyslog – A modern and versatile logging daemon included with most Linux distributions that can send logs to another rsyslog host or database.
  • Logwatch – A log analysis tool provided with most Linux distributions that will email a daily log summary.
  • Logstash – An open source tool for managing events and logs and part of the Elastic (Elasticsearch) project.
  • Graylog – Another popular open source solution for managing log files.

Log files are only reliable if the logged data is safe. If the logs have been purged or altered, a compromised server can easily go unnoticed, and it can be very difficult to identify the source of an exploit. Therefore, it is recommended that logged data be stored off the server that generates it.

Please review this tutorial on how to configure remote logging with rsyslog.
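
As a minimal sketch, all log messages can be forwarded to a central log host over TCP with a single directive in /etc/rsyslog.conf (the hostname below is a placeholder); restart the rsyslog service after making the change.

*.* @@loghost.example.com:514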
