Friday, December 3, 2010

Script Injection, XSS and other kind of fun...


It truly surprises me how, even after seeing loads of security holes in most of the famous web sites (I am looking at you, Twitter), we still neglect the importance of testing for these bugs.

It's simple, folks. Very simple. It's not rocket science.

Got a text field in your web site? Simply try something like '<body onload="alert('hi')">' and submit.

Did you get a nice pop-up? You've got yourself a bug to fix.
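If you'd rather automate that quick check, here's a minimal sketch in Python using the requests library. The URL and the 'comment' field name are made up for illustration; point it at whatever form your application actually has.

import requests

URL = "http://test.example.com/comments"  # hypothetical form endpoint
PAYLOAD = '<body onload="alert(\'hi\')">'

# Submit the payload, then fetch the page that renders the submitted text.
requests.post(URL, data={"comment": PAYLOAD})
page = requests.get(URL).text

if PAYLOAD in page:
    print("Payload came back unescaped - you've got yourself a bug to fix.")
elif "&lt;body onload=" in page:
    print("Payload was HTML-escaped - this field looks OK.")
else:
    print("Payload not found - check the other pages that render this field.")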

Nothing frustrates me more than seeing a testing team spend loads of time writing test cases, documenting them and writing meaningless automated scripts, when they could spend a few minutes finding these kinds of important bugs.

Look at the list of well-known sites that have XSS holes: http://www.reddit.com/r/xss/top/?t=year


Sunday, July 18, 2010

SQL Injection - Beginner's guide

So much has been said, discussed and written about SQL Injection. But still, a variety of applications are vulnerable to such attacks.

If you are a tester and have no clue how to go about testing for SQL Injection vulnerabilities, here are a few tips.

Do you have access to the application's code?

If yes, run a full project search for SQL keywords like 'select', 'insert', 'update' and 'delete'.
If the search returns any hits, study that part of the code.

Are those SQL queries part of production code, or just test code?
Are those SQL queries parameterized? (See the sketch after this list.)
Do those SQL queries take user input directly? If yes, is the user input sanitized?
Can you trace that query back to a field in the application? Either in the UI or in services?
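If you're not sure what 'parameterized' looks like in practice, here's a minimal sketch using Python's built-in sqlite3 module. The table and column names are invented for illustration; the same idea applies to any DB driver.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('john', 'john@example.com')")

user_input = "john"

# BAD: user input is concatenated straight into the query string.
# This is exactly the pattern your project-wide search should flag.
query = "SELECT email FROM users WHERE username = '" + user_input + "'"
print(conn.execute(query).fetchall())

# GOOD: a parameterized query. The driver treats the input as data, not SQL.
print(conn.execute("SELECT email FROM users WHERE username = ?", (user_input,)).fetchall())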

So, if you do find direct SQL queries that are part of production code, taking user input without parameterization or sanitization, boy oh boy, you are in for a treat!

Now, fire up your favorite SQL profiler and observe the queries that get executed in the DB.

For simplicity's sake, let's say an input field like username is vulnerable to SQL Injection.

Assuming the application works fine for a normal input, i.e.,

john

Try the following input for the same field:

john'; DELETE from users;--

In Microsoft SQL Server, single quotes delimit text input, a semicolon marks the end of a SQL statement, and two hyphens start a comment.

So, with the above input, we are trying to 'inject' a SQL query of our own.
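To see why, here's a tiny Python sketch of what the server ends up executing when it builds the query by string concatenation (assuming the vulnerable pattern from the checklist above):

# Our malicious input, fed into a naive concatenation:
user_input = "john'; DELETE from users;--"
query = "SELECT * FROM users WHERE username = '" + user_input + "'"
print(query)
# Output: SELECT * FROM users WHERE username = 'john'; DELETE from users;--'
# The quote closes the string, the semicolon ends the SELECT,
# DELETE runs as a second statement, and -- comments out the leftover quote.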

This may or may not work; the data in the users table may or may not get deleted.

There are quite a few reasons why it might not work:

Our input might go through field-length validation and get truncated
The users table might have foreign key constraints
The logged-in SQL DB account may not have privileges to delete rows from the table
And so on ...

So, you would have to study the relevant SQL query that got run in the DB (the DB profiler is your friend) and use variations of the above input to get it working. I'll leave that to your imagination. :-)

If you have no access to the application code and the DB, then blind SQL Injection is your friend. Conceptually, it's a similar idea: you inject a condition and infer from the application's behavior whether it was executed.
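Here's a sketch of the boolean-based flavor in Python (the URL and parameter name are hypothetical; substitute your application's own). Send the same request with an always-true and an always-false condition injected, and compare the responses:

import requests

URL = "http://test.example.com/search"  # hypothetical endpoint

def response_for(value):
    return requests.get(URL, params={"username": value}).text

baseline = response_for("john")
true_case = response_for("john' AND '1'='1")   # should behave like the baseline
false_case = response_for("john' AND '1'='2")  # should behave differently

if true_case == baseline and false_case != baseline:
    print("App responds differently to injected TRUE vs FALSE - likely injectable.")
else:
    print("No obvious difference - try other fields or payload variations.")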

Before you try SQL Injection, a few words of caution:

Make sure you are on a test system where you can afford to screw up the data.
Make sure you are not affecting your fellow testers' hard work.
Make sure you have a recent DB backup saved somewhere else.

Happy testing!



Monday, March 22, 2010

When Should I start testing a story?

'When do you think a tester should stop testing a requirement?'

A favorite question in job interviews for many folks. But I've hardly heard anyone ask, 'When do you think a tester should start testing a requirement?'

Well, isn't that a silly question, you may say. I don't think so; I think it's very important to know when to pick up a story to test.

Do I wait till a story is 'Dev Complete'?
Do I wait till the story has gone through 'BA Verified'?
Or do I wait till the last responsible minute?


I'd say, pick up a build as soon as a small part of the story's functionality is checked in.

Why?

Simple answer: the early bird gets all the bugs, pardon the pun :-)

On a serious note, the earlier I start testing a story, the more comfortable I am signing off on that story in the end. But it comes with its own set of troubles.

An intermediate build during the story's development means there are chances of broken functionality, surprise placeholder pop-ups saying 'insert (new functionality) here ...' and other symptoms of 'dev in progress' code.

But it also helps me understand the implementation of the story and the things I should watch out for when the story reaches 'In QA', and it improves my general feel for the application itself.

So, I'd rather put up with testing the 'Dev In Progress' story than have the story magically appear in 'QA Ready'.

Everybody loves early feedback; just be careful not to be a pain in the neck for the rest of the team.

:-)


Saturday, March 6, 2010

Stand Up Updates ...

A typical stand-up update from a tester goes like this:

"Yesterday, I was testing story xyz, blah blah ... I found X number of bugs ... This part of the application appears to have problems, blah blah ... Later in the day, new build had some issues ... Test Environment seems to have so and so problems ... yadda, yadda ..."

Right?

Well, my stand-up updates used to be along the same lines as well. And if you happen to be the only tester, it gets worse.

While chatting with a colleague of mine at ThoughtWorks, I realized that all I talk about in stand-up is problems! Over a period of time, I'll sound like a moron who just ruins everyone's day, especially since we start our day with stand-up.

So, along with the problems and issues, I've now started talking about stories that went through testing with no bugs, or only minor ones, and appreciating that; also how I tried my best to break the system but had no luck, or how I failed to inject scripts or SQL after trying very hard. And instead of individual bugs, I've started talking about patterns and themes in the bugs found.


Well, just my 2 cents to make the world a better place.

:-)

Thursday, March 4, 2010

Oh That Innocent Looking Page !


So, you are building a web application that talks to a bunch of other applications, shared databases and other web services that aren't in your control. Sounds like a typical project, doesn't it? :-)

Well, I was the (only) tester on such a project some time back. Maintaining a working test environment was a pain; broken connectivity between dependent applications and databases was a recurring problem. To make my life, as well as the environment team's life, easier, the developers came up with an idea.

A simple HTML page that shows the connection status of the different web services and databases in the test environment.

When it was introduced mid-release-cycle, I was happy :-) Now I don't have to worry much when transactions aren't going through. Just check the status page; if any connectivity is broken, shout across to the environment team and get things fixed. Life is simple and worry-free!

We decided to introduce the same status page to the production system as well, with access limited to the environment team. After all, it made all of our lives easy.

So, moving on to 'Go Live' day. We ran the deploy scripts, I ran the sanity tests, and things were looking good.

Within an hour or so, the environment team got in touch with us; apparently the production systems were receiving a huge volume of data. At first glance, it looked like our application was the originator of the traffic.

So, we had to take our application down (luckily it was a weekend, so not much work was going on) while we debugged the issue.

The way the status page was supposed to work was to send an 'Are you up?' question to the different systems and expect a 'Yes' or 'No' answer. But one of the shared libraries we used had a bug: instead of sending back a response, it sent back a request, to which our application promptly sent a response. This turned into an infinite loop, hence the surge in traffic. Well, the problem was quickly solved and we went home anyway, albeit a little late.
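For what it's worth, the intended protocol was dead simple. Here's a minimal Python sketch of what the status page's checker was meant to do (service names and URLs invented): only ever send requests, and treat a timeout or anything other than a clean 'Yes' as down.

import requests

SERVICES = {
    "orders-db": "http://test-env.example.com/orders/status",
    "payments-ws": "http://test-env.example.com/payments/status",
}

def is_up(url):
    # The checker only *sends* requests; it never answers one, so it
    # can't get drawn into the request/response ping-pong described above.
    try:
        resp = requests.get(url, timeout=5)
        return resp.status_code == 200 and resp.text.strip().lower() == "yes"
    except requests.RequestException:
        return False

for name, url in SERVICES.items():
    print(name, "is", "UP" if is_up(url) else "DOWN")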

Apparently the same thing was happening in the test environment, but with me being the only user, there wasn't much havoc. It went unnoticed.

This incident got me thinking.

You see, as a tester, I worry about the stories, the functionality and a bunch of other things like performance and security. I wouldn't think to doubt the small tools we use to make our work easy. Big mistake.

As the 'only' tester on my team, it's my responsibility to check every new thing that's introduced into the application. I should be on top of things, questioning and doubting every little piece of software that gets developed.

So, a lesson learnt: never ever look at a piece of software and think it's harmless. Not even a simple, innocent-looking HTML page.

TL;DR: I goofed up in testing and learnt a lesson.

:-)

Tuesday, March 2, 2010

Think Like a Grandma and Find More Bugs !

Context:

Over my last few years as a tester on agile projects, I've experienced and observed this: whenever a new tester joins the team, he/she finds a bunch of new bugs. The rate of finding new bugs gradually decreases to a normal level as the new tester spends more time in the team.

While I agree that I've overly generalized this, it's nonetheless a pattern that you can't ignore.

So I gave it some thought and came up with this theory :-)

Think like grandma, find more bugs !

Let me expand on this a bit.

So, ever seen Grandma using the internet? Double-clicking on links, opening a new browser instance for each page, clicking on every ad on a web site, etc.?

Along similar lines, that's very much what a new tester does on the application under test (AUT).

Initially, when a new tester joins the team, his/her understanding of the project is minimal. There's no 'acquired knowledge', no clue about how things should work. This leads to a set of new test scenarios that weren't thought of before, and hence the chance of finding new bugs increases.

This is all well, but over time the new tester becomes an 'old, experienced' member of the team. A lot of new knowledge, ideas about how things should work, and indifference towards 'non-conventional' scenarios develop. Methinks this leads to a pattern where the tester worries only about the 'acceptance criteria', and the chance of finding new bugs decreases.

Still with me? :P

So I spoke about this idea at an internal training at ThoughtWorks. While the idea itself isn't ground-breaking, there were a few questions.

How do we think like a newbie (Grandma) after spending considerable time in a project?

I think that while learning new things very quickly is a must-have skill for an excellent tester, a good tester must also be able to 'unlearn' things easily. This doesn't take much effort, if you ask me: a constant reminder to yourself to set aside your knowledge of the AUT should do the trick.

Every time I pick up a new story to test, I tell myself that I am a brand-new user who hasn't got the least idea what the AUT should do. While it sounds foolish, and also time-consuming, it has helped me a lot in finding new test scenarios and, hence, bugs.

Well, not every user is a Grandma user, right?

Of course not. But for every level of proficiency in the AUT, there is a Grandma level. What I mean is, your user might be a technically sound sysadmin, but there is a beginning for everyone, right? If your user is a well-trained user, think of yourself as someone who just came out of a three-day theory course on how to use the system.

There is a Grandma level for every level of user. Just pretend you are one of them.



So, have you started using your AUT like a Grandma yet?

:)