Users and passwords – what can an Admin do to help?

User passwords vs. administrator passwords

A general user account can get away with a less complex password than an administrator's; an admin password should be longer and more random. Since administrator accounts are more powerful, they are a bigger target for some attacks. That is not to say that general users are not targets. They always are, as they tend to have weaker passwords.

Always make any regular password at least 8 to 10 characters long, and include upper and lower case letters as well as numbers and punctuation. Administrative passwords should be longer, more complex, and changed often.
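That policy is easy to check mechanically. Here is a minimal Python sketch; the function name and the 10-character minimum are illustrative choices, not part of any particular system:

```python
import string

def meets_policy(password, min_len=10):
    """Check a password against the basic policy above: minimum length,
    plus at least one lower case letter, upper case letter, digit,
    and punctuation character."""
    return (
        len(password) >= min_len
        and any(c.islower() for c in password)
        and any(c.isupper() for c in password)
        and any(c.isdigit() for c in password)
        and any(c in string.punctuation for c in password)
    )

print(meets_policy("tammy1"))        # False: too short, no upper case or punctuation
print(meets_policy("T0my2Tone!94"))  # True: satisfies length and all four classes
```

A real system would also check against dictionaries and breach lists, but the character-class test above is the baseline.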

Password aging

Different companies have different business rules for how long a password can ‘live’. This should be different for different types of accounts. Getting general users to change their passwords monthly is only going to encourage weak passwords (tammy1, tammy2, tammy3) as they try to circumvent the extra work of creating and remembering new passwords all the time.

Administrator accounts, however, carry more responsibility, and their passwords should therefore be changed more often.

When an account password is about to expire, it is always a good idea to have a grace period. If you log in during a meeting and your system says it's time to change your password (NOW!), you are not going to have the time or frame of mind to come up with a decent, memorable password. You might type in some gibberish, write it down, and swear under your breath that this always happens at a bad time, as you have a meeting to run.

A grace period of three to five logins will fix that. The user knows that it's time to start thinking of a new password, but is given some time to think about it.

Do not reuse passwords

Your system should track user passwords and never let a password be reused once it expires. In addition, it should check for variations that are too similar. For example, if an account's expired password was T0my2Tone, the user would not be allowed to use T0my2Tone2000 later, as it is too similar.
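One simple way to sketch that similarity check is with Python's standard `difflib`; the 0.6 threshold here is an arbitrary illustrative cut-off, and a production system would compare against salted hashes rather than stored plaintext:

```python
from difflib import SequenceMatcher

def too_similar(old, new, threshold=0.6):
    """Reject a new password that is too close to the expired one.
    Compare case-insensitively so T0my2Tone vs t0my2tone2000 still matches."""
    ratio = SequenceMatcher(None, old.lower(), new.lower()).ratio()
    return ratio >= threshold

print(too_similar("T0my2Tone", "T0my2Tone2000"))   # True: shares the whole old password
print(too_similar("T0my2Tone", "Blue!Kettle#77"))  # False: almost nothing in common
```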

No identifiable information in a password

Passwords should never be based on anything related to the user. They should not include things like the account or server name, the username, or personal information.

Longer passphrases

Another approach that some people take is a passphrase. You might choose a sentence like Happy Birthday, Mr. President and then mess it up a little bit with a date or something like H@pyB1rthDAY,_mr.PREsident!-19May1962.
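The mangling step can be as simple as a character-substitution table. A toy Python sketch (the table and helper name are made up for illustration; pick your own substitutions, since attackers know the common ones):

```python
def mangle(phrase, subs=None):
    """Apply simple character substitutions to a memorable phrase."""
    subs = subs or {"a": "@", "i": "1", "o": "0", " ": "_"}
    return "".join(subs.get(c, c) for c in phrase)

# Append a memorable date, as in the example above.
print(mangle("Happy Birthday, Mr. President") + "-19May1962")
# H@ppy_B1rthd@y,_Mr._Pres1dent-19May1962
```

Length is what makes a passphrase strong; the substitutions just add a little extra variety.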


RSS: What it is, and how to use it

RSS (Really Simple Syndication) is a system of sending and receiving updates and other information from a central source to many users.

Very often, computer users find themselves regularly visiting the same sites. These might be news sites, blogs, forums, web mail, or something else. Usually, this requires directing a browser to each site, then browsing the contents of that site.

The fundamental idea of RSS is to simplify this process by making the user’s computer collect all the updates from the user’s favourite sites in one place. That ‘place’ is a program on the user’s computer, called an RSS feed aggregator or feed reader.
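Under the hood, a feed is just an XML document the reader downloads and picks apart. A minimal Python sketch using the standard library (the feed content here is invented for illustration; example.com is a reserved placeholder domain):

```python
import xml.etree.ElementTree as ET

# A trimmed RSS 2.0 document, the kind of XML a feed reader downloads.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item><title>First post</title><link>http://example.com/1</link></item>
    <item><title>Second post</title><link>http://example.com/2</link></item>
  </channel>
</rss>"""

def headlines(xml_text):
    """Return (title, link) pairs for each item, as an aggregator would."""
    root = ET.fromstring(xml_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in headlines(FEED):
    print(title, "->", link)
```

A real aggregator repeats this for every subscribed feed on a schedule and shows only the items it has not seen before.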

Confusion about rel="nofollow" links, robots.txt files, and robots meta tags

It seems that some people are getting mixed signals about the difference between the rel="nofollow" attribute/value pair on anchor links, Disallow rules in robots.txt files, and robots meta tags.

I’ll try to give an explanation with some examples to help clear the difference up.

Meta Tags

Those webmasters who have been using a robots meta tag know that telling a compliant (considerate?) spider or robot to 'nofollow' means it should not follow any links that you have on your page. The meta tag goes in the head of your web page and might look something like this:

<meta name="robots" content="nofollow" />

You can take it a step further and ask the spider to not even index your page at all:

<meta name="robots" content="noindex, nofollow" />

You can indicate that you would like to be indexed or have your links followed, or not, or any combination. For example, these are all valid:

<meta name="robots" content="index, follow" />

<meta name="robots" content="noindex, follow" />

<meta name="robots" content="index, nofollow" />

<meta name="robots" content="noindex, nofollow" />

This is done on a page-by-page basis. In other words, each Web page would have a meta tag in the head of the document that might look something like this:

	<title>Some page on the Web</title>
	<meta name="robots" content="noindex, nofollow" />

Note that you are indicating your wishes here, and that robot spiders may or may not listen to your request.

There are other attribute values you can use. See the links for more reading.
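To see what a compliant spider does with those tags, here is a small Python sketch using the standard library's HTML parser; the class name is my own, and real crawlers are far more thorough:

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collect the directives from any <meta name="robots"> tags,
    roughly the check a compliant spider makes before indexing."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            content = attrs.get("content", "")
            self.directives += [d.strip().lower() for d in content.split(",")]

page = ('<html><head><title>Some page on the Web</title>'
        '<meta name="robots" content="noindex, nofollow" /></head></html>')
parser = RobotsMetaParser()
parser.feed(page)
print("may index:", "noindex" not in parser.directives)    # may index: False
print("may follow:", "nofollow" not in parser.directives)  # may follow: False
```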


robots.txt

You can control how search spiders and robots index your site (or parts of it) by using an ASCII-encoded text (not HTML) file called robots.txt (case sensitive) in the root directory of your Web server.

This plain text file can define some simple guidelines for robots to use. For example, if you ask all robots (identified by a wildcard character of *) to not index your site at all (everything from the root of your server: /), your text file would look like this:

User-agent: *
Disallow: /

If you wanted all robots to index everything, you might try this (note that Allow is an extension to the original robots.txt convention, though the major search engines support it; an empty Disallow: line is the more traditional way to say the same thing):

User-agent: *
Allow: /

You could single out a single robot and ask it to do something like this:

User-agent: Googlebot
Disallow: /admin/

You can have several different rules for different robots. Again, not all robots will follow your requests.
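Python ships a robots.txt parser in the standard library, so you can test rules like these locally. A sketch (the rules and bot names are the examples from above; SomeOtherBot is invented):

```python
from urllib import robotparser

# Parse robots.txt rules locally, with no network fetch involved.
rp = robotparser.RobotFileParser()
rp.parse("""
User-agent: Googlebot
Disallow: /admin/

User-agent: *
Disallow:
""".splitlines())

print(rp.can_fetch("Googlebot", "/admin/users.html"))      # False: matches /admin/
print(rp.can_fetch("SomeOtherBot", "/admin/users.html"))   # True: * allows everything
print(rp.can_fetch("Googlebot", "/index.html"))            # True: not under /admin/
```

Of course, this only tells you what a well-behaved robot would do; as noted above, nothing forces a crawler to obey.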


rel="nofollow"

Here is where some of the confusion starts. Some people think that when you have a link on a page to another page, and you use the rel="nofollow" attribute/value pair, search engine spiders will not follow that link.

Considering the name of the value (nofollow), plus the behaviour of the robots meta tag with nofollow, this seems like a logical assumption. However, it is false. Here’s why…

Back in 2005, several large search engines agreed that comment spam (comments in blogs, forums, etc. with links to Web sites that existed only to drive traffic and were not really there as legitimate comments or links) was a serious problem. They came up with a plan to add an attribute value to the (X)HTML anchor tag to mark links that the site owner could not verify as being approved.

So, a normal link might look like this:

<a href=""></a>

but if it was put there by a user in a comment block, the software could alter it to look like this:

<a href="" rel="nofollow"></a>

As links are often counted as part of search engines' ranking of Web sites, the more links that spammers' scripts could automatically drop into comment blocks, the more popular their sites would become in the search engine result pages (SERPs). The idea is that if a search engine spider sees a nofollow link, it will not use it in its ranking algorithms. This does not mean that the spider will not follow the link and index the destination page; it just means the link won't help that page's rank.
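The software side of this is a small rewrite pass over user-submitted HTML before it is published. A toy Python sketch of the idea (a real blog or forum engine should use a proper HTML parser and sanitizer, not a regex):

```python
import re

def nofollow_links(comment_html):
    """Add rel="nofollow" to every anchor in user-submitted HTML,
    the way blog and forum software started doing in 2005."""
    return re.sub(r'<a\s', '<a rel="nofollow" ', comment_html)

print(nofollow_links('Nice post! <a href="http://example.com/">visit</a>'))
# Nice post! <a rel="nofollow" href="http://example.com/">visit</a>
```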

So that’s the theory. What happens in real life? That depends on the players in the game.

Yahoo, Microsoft, and Google all initially agreed in 2005 to respect this attribute with their spiders. Ask and several other search sites seem to be aware of it, too. The trick is that they are not all doing the same thing with it.

Some spiders do not follow the link or index the destination page at all. Others follow the link and index the page but do not count it towards rankings, while still others seem blissfully unaware that the attribute exists and ignore it entirely.

The end result is that, with all three of these tools, you are only stating your wishes; you have no guarantee that they will be followed.

Personally, the comment spam was so bad on this blog that I had to disable comments entirely.