HowTo: Rotate Logs to S3

This article covers how to use logrotate to rotate your logs to S3. Here we are specifically using Gentoo Linux, and EC2 AMIs can be found on the Gentoo in the Cloud page. We will be using s3cmd to actually move the files to S3.
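
The postrotate scripts below assume s3cmd can already reach the bucket. A minimal setup sketch (logging-bucket is the placeholder bucket name used throughout this article):

Console – user@hostname ~ $

# interactive setup of the access key, secret key, and defaults;
# -H keeps the config in root's home, since logrotate runs as root
sudo -H s3cmd --configure

# sanity check: list the target bucket
sudo -H s3cmd ls s3://logging-bucket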

Configuration

/etc/logrotate.conf

This is the original /etc/logrotate.conf file we are going to edit:

/etc/logrotate.conf (Original)

# $Header: /var/cvsroot/gentoo-x86/app-admin/logrotate/files/logrotate.conf,v 1.5 2013/05/18 09:41:04 nimiux Exp $
#
# Logrotate default configuration file for Gentoo Linux
#
# See "man logrotate" for details

# rotate log files weekly
weekly
#daily

# keep 4 weeks worth of backlogs
rotate 4

# create new (empty) log files after rotating old ones
create

# use date as a suffix of the rotated file
dateext

# uncomment this if you want your log files compressed
compress

notifempty
nomail
noolddir

# packages can drop log rotation information into this directory
include /etc/logrotate.d

# no packages own wtmp and btmp -- we'll rotate them here
/var/log/wtmp {
    monthly
    create 0664 root utmp
    minsize 1M
    rotate 1
}

/var/log/btmp {
    missingok
    monthly
    create 0600 root utmp
    rotate 1
}

# system-specific logs may be also be configured here.

We want the log files to rotate hourly, but that is not an option in the configuration file, so we will remove:

# rotate log files weekly
weekly
#daily

and set up a cron job to take care of the rotation.

We just want the latest hour's worth of backlogs, because they will be uploaded to S3. So we will change

# keep 4 weeks worth of backlogs
rotate 4

to

# keep 1 hours worth of backlogs
rotate 1

The default date extension is not fine-grained enough for hourly rotations, so after

# use date as a suffix of the rotated file
dateext

we will add

dateformat -%Y-%m-%d-%s
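
With that format, a rotation of /var/log/messages produces a file name like the following (hypothetical timestamp); the %s epoch seconds keep successive hourly rotations distinct even within the same day:

/var/log/messages-2014-05-18-1400400000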

Compression happens after the postrotate script runs, which is when the upload occurs, and since we are not keeping a significant backlog of logs, we simply use nocompress. We will replace:

# uncomment this if you want your log files compressed
compress

with

# because compression would happen after upload
nocompress

In the final file, the weekly block we removed is replaced with size 1, so any non-empty log is rotated every time logrotate runs from the hourly cron job. This leads to our final /etc/logrotate.conf:

/etc/logrotate.conf

# $Header: /var/cvsroot/gentoo-x86/app-admin/logrotate/files/logrotate.conf,v 1.5 2013/05/18 09:41:04 nimiux Exp $
#
# Logrotate default configuration file for Gentoo Linux
#
# See "man logrotate" for details

size 1

# keep 1 hours worth of backlogs
rotate 1

# create new (empty) log files after rotating old ones
create

# use date as a suffix of the rotated file
dateext
dateformat -%Y-%m-%d-%s

# because compression would happen after upload
nocompress

# packages can drop log rotation information into this directory
include /etc/logrotate.d

notifempty
nomail
noolddir

# no packages own lastlog or wtmp -- we'll rotate them here
/var/log/wtmp {
    monthly
    create 0664 root utmp
    rotate 1
}

/var/log/btmp {
    missingok
    monthly
    create 0600 root utmp
    rotate 1
}

# system-specific logs may be also be configured here.

/etc/logrotate.d/syslog-ng

The syslog-ng logrotate config file doesn't change except for the s3cmd commands added to the postrotate script:

/etc/logrotate.d/syslog-ng

# $Header: /var/cvsroot/gentoo-x86/app-admin/syslog-ng/files/syslog-ng.logrotate,v 1.3 2008/10/15 20:46:12 mr_bones_ Exp $
#
# Syslog-ng logrotate snippet for Gentoo Linux
# contributed by Michael Sterrett
#
/var/log/messages {
    missingok
    sharedscripts
    postrotate
        /etc/init.d/syslog-ng reload > /dev/null 2>&1 || true
        BUCKET=logging-bucket
        INSTANCE_ID=`curl --silent http://169.254.169.254/latest/meta-data/instance-id | sed -e "s/i-//"`
        /usr/bin/s3cmd -m text/plain sync /var/log/messages-* s3://${BUCKET}/${INSTANCE_ID}/var/log/
    endscript
}

The example here uses the bucket logging-bucket; change it to a bucket that you control.

For INSTANCE_ID we get the instance id of the instance and drop the i- off the front (e.g. i-12345678 becomes 12345678). The reason for this is to spread the logs of different instances across S3 partitions. Amazon S3 Performance Tips & Tricks gives a good writeup on the issue.
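
We can check the destination prefix the script will use by running the same snippet by hand on the instance (a sketch; logging-bucket is still the placeholder):

Console – user@hostname ~ $

# the same variables the postrotate script sets, minus the upload
BUCKET=logging-bucket
INSTANCE_ID=`curl --silent http://169.254.169.254/latest/meta-data/instance-id | sed -e "s/i-//"`
echo "s3://${BUCKET}/${INSTANCE_ID}/var/log/"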

/etc/logrotate.d/apache2

The Apache logrotate config file doesn't change except for adding the s3cmd commands to the postrotate script:

/etc/logrotate.d/apache2

# Apache2 logrotate snipet for Gentoo Linux
# Contributes by Chuck Short
#
/var/log/apache2/*log {
    missingok
    notifempty
    sharedscripts
    postrotate
        /etc/init.d/apache2 reload > /dev/null 2>&1 || true
        BUCKET=logging-bucket
        SITE=www.example.com
        INSTANCE_ID=`curl --silent http://169.254.169.254/latest/meta-data/instance-id`
        /usr/bin/s3cmd -m text/plain sync /var/log/apache2/access_log-* s3://${BUCKET}/apache/access_log/site=${SITE}/instance=${INSTANCE_ID}/
        /usr/bin/s3cmd -m text/plain sync /var/log/apache2/error_log-* s3://${BUCKET}/apache/error_log/site=${SITE}/instance=${INSTANCE_ID}/
        /usr/bin/s3cmd -m text/plain sync /var/log/apache2/ssl_access_log-* s3://${BUCKET}/apache/ssl_access_log/site=${SITE}/instance=${INSTANCE_ID}/
        /usr/bin/s3cmd -m text/plain sync /var/log/apache2/ssl_error_log-* s3://${BUCKET}/apache/ssl_error_log/site=${SITE}/instance=${INSTANCE_ID}/
        /usr/bin/s3cmd -m text/plain sync /var/log/apache2/ssl_request_log-* s3://${BUCKET}/apache/ssl_request_log/site=${SITE}/instance=${INSTANCE_ID}/
    endscript
}

This example uploads into the bucket logging-bucket and is for the site www.example.com. The bucket needs to be changed to a bucket you control, and the site should be changed to the site being logged.

The reason for INSTANCE_ID is so that if we have multiple web servers for the same site, they will not step on each other's toes.

The purpose of having site= before the site and instance= before the instance is to make it easy to run a Hive script against the logs.
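
After a rotation, the uploaded keys end up laid out something like this (the instance id, size, and file name here are hypothetical):

Console – user@hostname ~ $

s3cmd ls s3://logging-bucket/apache/access_log/site=www.example.com/instance=i-12345678/
2014-05-18 08:05    123456   s3://logging-bucket/apache/access_log/site=www.example.com/instance=i-12345678/access_log-2014-05-18-1400400000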

Rotating Logs

Hourly

/etc/cron.hourly/logrotate.cron

The easy way to do this is to move logrotate.cron from /etc/cron.daily to /etc/cron.hourly:

Console – user@hostname ~ $

sudo mv /etc/cron.daily/logrotate.cron /etc/cron.hourly/logrotate.cron

And here we’ll look at what it actually does:

/etc/cron.hourly/logrotate.cron

#! /bin/sh

/usr/sbin/logrotate /etc/logrotate.conf

/etc/crontab

If we are trying to set this up on a system that doesn't have /etc/cron.hourly, or if we want to rotate on a different increment, we could instead edit the /etc/crontab file. This example still rotates hourly:

/etc/crontab

# Logrotate
0 * * * * root  /usr/sbin/logrotate /etc/logrotate.conf
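
If a different increment is wanted, only the schedule fields change; for example, a hypothetical entry that rotates every six hours instead:

# Logrotate every 6 hours
0 */6 * * * root  /usr/sbin/logrotate /etc/logrotate.conf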

On Shutdown

When the system is shutting down, we want the current logs to be rotated to S3. On Gentoo, we can make an /etc/local.d script for this; other Linux distributions may need a different method.

/etc/local.d/logrotate.stop

# /etc/local.d/logrotate.stop

/usr/sbin/logrotate -f /etc/logrotate.conf
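
Scripts in /etc/local.d are only run if they are executable, so we need one more step (using the file name above):

Console – user@hostname ~ $

sudo chmod +x /etc/local.d/logrotate.stop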

Dry Run

To make sure that our configuration files are set up correctly and will do what we expect, we can do a dry run like so:

Console – user@hostname ~ $

sudo logrotate -dv /etc/logrotate.conf

Run Manually

We can run the rotation manually; if the conditions for rotating the logs are met, they will be rotated.

Console – user@hostname ~ $

sudo logrotate /etc/logrotate.conf

Force Rotate

If we want the logs to be rotated no matter what, we can force a rotation:

Console – user@hostname ~ $

sudo logrotate -f /etc/logrotate.conf

IAM Policies

We need to make sure our IAM role or user has the necessary S3 permissions.
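
Either of the policies below can be attached to the instance's IAM role with the AWS CLI once it is saved to a file, for example (a sketch; the role name, policy name, and file name here are hypothetical):

Console – user@hostname ~ $

aws iam put-role-policy \
    --role-name logging-instance-role \
    --policy-name s3-log-upload \
    --policy-document file://logging-bucket-policy.json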

If we have a dedicated bucket

Dedicated Bucket Policy

{
  "Statement":[{
      "Effect":"Allow",
      "Action":["s3:*"],
      "Resource":["arn:aws:s3:::logging-bucket",
                  "arn:aws:s3:::logging-bucket/*"]
    },
    {
      "Effect":"Allow",
      "Action":["s3:ListAllMyBuckets"],
      "Resource":["arn:aws:s3:::*"]
    }
  ]
}

If we have to share a bucket

Shared Bucket Policy

{
  "Statement":[{
      "Effect":"Allow",
      "Action":["s3:PutObject","s3:GetObject","s3:GetObjectVersion","s3:DeleteObject","s3:DeleteObjectVersion"],
      "Resource":"arn:aws:s3:::shared-bucket/logging/*"
    },
    {
      "Effect":"Allow",
      "Action":"s3:ListBucket",
      "Resource":"arn:aws:s3:::shared-bucket",
      "Condition":{
        "StringLike":{
          "s3:prefix":"logging/*"
        }
      }
    }
  ]
}
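
Note that with the shared-bucket policy, the sync destinations in the postrotate scripts above have to sit under the logging/ prefix that the policy grants; for example, the syslog-ng upload line would become something like:

/usr/bin/s3cmd -m text/plain sync /var/log/messages-* s3://shared-bucket/logging/${INSTANCE_ID}/var/log/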
