Apache has an easy way to password-protect a folder, including the root, thereby protecting the whole site.

When a protected page is requested, this is what gets displayed in the browser: its native login prompt.

Apache needs to be configured so that when it receives a request to a protected directory it displays a login form. On submission it checks the details match those in a file. The file contains a user name and encrypted password.
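At the protocol level the exchange looks something like this (the hostname and path are placeholders; `dXNlcm5hbWU6cGFzc3dvcmQ=` is `username:password` base64-encoded, which is all Basic auth does to credentials):

```
GET /folder-to-secure/ HTTP/1.1
Host: www.website.com

HTTP/1.1 401 Unauthorized
WWW-Authenticate: Basic realm="Protected area"

GET /folder-to-secure/ HTTP/1.1
Host: www.website.com
Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=
```

Because the credentials are only encoded, not encrypted, Basic auth really wants to be used over HTTPS.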


Pros:

  • Quick
  • Fairly easy


Cons:

  • The form can't be styled - it takes on the OS/browser appearance
  • Requires root access to the server, or .htaccess enabled for vhosts


There are two steps:

  1. Configure Apache
  2. Create the password file

1. Configure Apache

It goes without saying that the mod_authn_file module needs to be enabled in Apache for this to work, but it should be by default.

There are a couple of ways to set up Apache authentication, but both require step 1, configuring Apache. Add the following to the Apache config, or to a .htaccess file at the location to be protected (in a .htaccess file, omit the <Directory> wrapper).

# Protect directory
<Directory /var/www/website/folder-to-secure>
    <IfModule mod_authn_file.c>
        AuthType Basic
        AuthName "Protected area"
        AuthUserFile /var/www/.htpasswd
        Require valid-user
    </IfModule>
</Directory>

Make sure the paths to the protected folder and to the .htpasswd file are correct. It's best to keep the password file above the site root and/or start the filename with a dot (period) so it's a hidden file.

2. Create the password file

Option 1

Although it's easy to generate the file that contains the password, there's also an online service to do it: Htpasswd Generator.

Simply add the details, download the file and place it in the location specified in step 1.

This is handy on Windows, where the command-line tool might not be available.

Option 2

If you use a Mac (Unix) or Linux, this file can be generated using the htpasswd program.

$ htpasswd -cb /full/path/to/file/.htpasswd username password

Where username is the username… wait for it… and password is the password.

Alternatively be prompted for a password:

$ htpasswd -c /full/path/to/file/.htpasswd username


-b Use batch mode; i.e., get the password from the command line rather than prompting for it.

-c Create a new file and store a record in it for user username. Careful: if the file already exists, -c overwrites it, so only use it the first time.
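If the htpasswd binary isn't available (as on most Windows machines), a compatible entry can also be produced with a few lines of Python. This is a sketch, not part of the original post: it uses Apache's {SHA} scheme, which is the same format `htpasswd -s` generates.

```python
import base64
import hashlib

def htpasswd_sha_line(username: str, password: str) -> str:
    """Build an htpasswd entry in Apache's {SHA} format (same as htpasswd -s)."""
    digest = base64.b64encode(hashlib.sha1(password.encode("utf-8")).digest()).decode("ascii")
    return f"{username}:{{SHA}}{digest}"

# Append the resulting line to the password file, e.g. /var/www/.htpasswd
print(htpasswd_sha_line("username", "password"))
# → username:{SHA}W6ph5Mm5Pz8GgiULbPgzG37mj9g=
```

SHA-1 is weak by modern standards; use the bcrypt option of htpasswd itself where you can.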

Full details are on the htpasswd page of the Apache documentation.

Or consult the manual in the Terminal/shell:

$ man htpasswd

htaccess to password protect a specific server

If you use several environments for a site - local, development, staging, production - here's a great gist from Jason Siffring: 'htaccess to password protect a specific server'.
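I can't reproduce the gist here, but the approach looks roughly like this sketch (the hostname and paths are placeholders): an environment variable is set only when the request's Host header matches the server you want locked down, and auth is required only in that case.

```
# .htaccess - only ask for a password on the staging hostname (Apache 2.2-style)
AuthType Basic
AuthName "Protected area"
AuthUserFile /var/www/.htpasswd
Require valid-user

SetEnvIfNoCase Host staging\.website\.com$ require_auth=true
Order Deny,Allow
Deny from all
Satisfy any
Allow from env=!require_auth
```

The same file can then be deployed to every environment; only staging prompts for a login.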

Where's the JS coming from?

Sometimes you work on a site where you didn't build any of it. The person who did isn't available, but you need to make some updates. I had a recent project where jQuery had been used and I needed to change the JavaScript, but I had no idea where it was coming from. With concatenated and minified files this can be even more of a headache.

Chrome's Dev Tools are excellent, and list the Event Listeners bound to a DOM element.

Chrome Dev Tools - Event Listeners

In this example I could see a click event had been bound to a link (OK, not that hard to know, as I wanted to change its click behaviour). The problem is jQuery is handling this click event, so Chrome just lists jQuery as the source.

So jQuery is handling the click event? Gee, thanks!

Right clicking on the handler 'function' to 'Show Function Definition' doesn't get us anywhere either. It should go directly to the code, which it does; it shows a line of code in jQuery… not too helpful.

Show Function Definition, not that one!

At least give me a clue

With a bit of Googling I found the answer but it's tricky to know what to search on (I tried naming this post multiple times). Unsurprisingly it's in a Google forum, and Paul Irish is involved.

The full thread is here: https://groups.google.com/d/msg/google-chrome-developer-tools/NTcIS15uigA/BU0nB78hK9AJ

So Dev Tools can't help us directly as it's not keeping track of what jQuery is doing. But jQuery can, with _data(). It's a function for internal use only, but we can still access it, and it expects a vanilla JavaScript element:

jQuery._data(element)
A feature of the Dev Tools Console is $0. It's a reference to the last element clicked on in the Elements DOM view. $1 is the second-to-last element clicked on, etc. - a history of clicks. With three bits of the puzzle (element, _data() and 'Show Function Definition') we can view the source:

jQuery._data($0)
Output this to the console, fold open 'events', then 'click' and the raw code can be seen against 'handler'. It's short and simple in this case, but where's the code loaded from? Right-click and 'Show Function Definition'.

Show me the _data()

There it is, in the file 'index.php', line 219.

Found it!

This code might have been pieced together via template partials in a CMS but it's a simple site-wide search to find it.

Timed Event

There are lots of times when building a website that you need something to happen, but not from user interaction. As the web is stateless, unless someone is on your site and clicking around, your server-side scripts won't run.

The usual reasons I need this to happen:

  • Send out a daily/weekly email report
  • At a set time change the status of content:
    • Set posts/entries to Draft
    • Delete posts/entries
  • Pull in data from somewhere else, like another DB or CMS

It is possible to fake a timed event. On every page load, run a script that checks if a task needs to be done and, if so, runs it. This is how WordPress's wp-cron works. A similar approach is used by Automat:ee for ExpressionEngine. If you get a lot of visits to your site then this could be OK.

The drawback is, if you need the task to run at an exact time, it's dependent on someone requesting a page at that exact time! Another drawback is that the timer works through a server-side language, e.g. with WordPress it's PHP (same for Drupal, ExpressionEngine, Craft, Perch etc.). Every page load will use a little bit of processing power, and if you happen to be the one that triggers the task, and it's complex, you take the hit on the page load.

There is a (better) way to do this via Cron.

Unix Cron

If your website is running on a Unix variant (it most probably is), there's a built-in process to handle any timed tasks: Cron.

Cron is a system daemon used to execute desired tasks (in the background) at designated times.

Although this is handled on the command line, it's very easy to set up. Cron is always running and works through any tasks scheduled for set times. These tasks are configured in a list, so Cron isn't controlled directly. The list Cron reads is the crontab.

A crontab is a simple text file with a list of commands meant to be run at specified times.

The timings are extensive: every hour, every day, every Monday, every third Friday of the month can all be set up. Ubuntu has a great write-up on adding tasks to Cron.


Trigger a PHP script to run at midnight every day. The PHP file runs by requesting the 'page' over HTTP, like any normal webpage. It makes a call to the database to dump out a backup.
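The post doesn't include the script itself, but for a MySQL database it might look something like this minimal sketch (the filename backup-script.php, credentials and paths are all hypothetical):

```php
<?php
// backup-script.php - hypothetical sketch: dump the database to a dated file.
// Assumes mysqldump is installed and the credentials below are filled in.
$file = '/var/backups/db-' . date('Y-m-d') . '.sql';
$cmd = sprintf(
    'mysqldump --user=%s --password=%s database_name > %s',
    escapeshellarg('db_user'),
    escapeshellarg('db_password'),
    escapeshellarg($file)
);
exec($cmd, $output, $status);

echo $status === 0 ? "Backup written to $file" : 'Backup failed';
```

In practice you'd also want to stop outsiders triggering it, e.g. by checking a secret query-string token before running the dump.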


The first task is to get on to the server. You'll need shell access (not necessarily root) and, hopefully, you'll connect over a secure connection.

$ ssh username@website.com

Enter password.


Open up crontab to edit:

$ crontab -e

This will put you in the default editor for your shell; on a Mac it's Vim. This will be your user's crontab. There are other things you could worry about, like the global crontab (possibly $ vim /etc/crontab), but the default is probably fine.

Crontab has one task per line, in the format:

* * * * * task

Where the asterisks * configure when Cron will run the task.
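The five time fields are, from left to right:

```
# ┌───────── minute (0-59)
# │ ┌─────── hour (0-23)
# │ │ ┌───── day of month (1-31)
# │ │ │ ┌─── month (1-12)
# │ │ │ │ ┌─ day of week (0-6, Sunday = 0)
# │ │ │ │ │
  * * * * * task
```

An asterisk in any position means "every value", so `* * * * *` runs the task every minute.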

All the timing events are in Ubuntu's guide.

In our example we want to 'visit' a link, but first let's set the task to run at midnight every day.

0 0 * * * task

minute = 0, hour = 0 (midnight), and everything else is a wildcard so always matches. In effect: run every day at midnight.


Wget is a terminal command to retrieve files using HTTP, HTTPS and FTP. To do this, wget will visit a link and download the response. We want the first bit but, in this example, not the download.

0 0 * * * wget http://www.website.com/backup-script.php

Wget expects to retrieve a file, so we need to tell it to discard whatever it gets back:

0 0 * * * wget -qO- http://www.website.com/backup-script.php &> /dev/null

What do the flags q and O for wget do and what's /dev/null?

Courtesy of Stack Overflow:

Use the q flag for quiet mode, and tell wget to output to stdout with O- (uppercase o), then redirect to /dev/null to discard the output: wget -qO- $url &> /dev/null. > redirects application output (to a file). If > is preceded by an ampersand, the shell redirects all output (error and normal) to the file right of >. If you don't specify the ampersand, then only normal output is redirected.
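If wget isn't on the server, curl can do the same job; -s silences its progress output, and the redirect discards the page (same hypothetical URL as above):

```
0 0 * * * curl -s http://www.website.com/backup-script.php > /dev/null 2>&1
```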

And that's it – set a timed script to run using Cron.

Update - mystery emails

After a system update I had a web server mysteriously start sending me emails that Cron had successfully run every day!

Honestly, I have no idea why this happened or how to configure Cron/OS to not email, or only email on error.

After a support call to my host, email notifications can be disabled per Cron task with:

> /dev/null 2>&1
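Appended to the backup task, the full crontab line becomes:

```
0 0 * * * wget -qO- http://www.website.com/backup-script.php > /dev/null 2>&1
```

Incidentally, this may be the answer to the mystery emails: Cron typically runs jobs with /bin/sh, where bash's &> isn't a redirection operator (cmd &> file parses as cmd & > file), so output can still reach Cron's mailer. The portable > /dev/null 2>&1 form works in any shell.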

Full explanation on Stack Overflow: http://stackoverflow.com/questions/10508843/what-is-dev-null-21

2>&1 redirects standard error (2) to standard output (1), which is then discarded as well, since standard output has already been redirected.