This is the variable that you manipulate in experimental designs. For example, if I wanted to know whether people feel happier after eating different types of chocolate, my manipulation would be the types of chocolate given to the participants. I might give people white chocolate, milk chocolate and plain chocolate to eat. In this example my independent variable would be the type of chocolate eaten.
This is the measure that we are interested in. We are expecting it to change somehow in response to the independent variable. For our chocolate example, the dependent variable is the happiness rating given by the participant after eating the chocolate.
This is a variable that we do not manipulate or measure, but one that is still likely to somehow influence our findings. For our chocolate example, the sex of the participant may be a confound (females may like chocolate more). Confounding variables can be measured and, in some more advanced methods of analysis, controlled for. A confounding variable is also sometimes known as a control variable or covariate.
This is an experimental design where different (or independent) people take part in the different conditions of your experiment. So in the chocolate example, one group of people would have eaten white chocolate, a separate group of people would have eaten milk chocolate and a third group of people would have eaten plain chocolate. An even better example of an independent measures design would be a study looking at sex differences: the males and females have to be two independent groups of people! Independent measures design is also sometimes known as between subjects design.
This is where the same set of participants repeatedly take part in each of the experimental conditions. So in the chocolate example, each person would have had all three types of chocolate and rated how happy they felt after eating each type of chocolate. Repeated measures design is also sometimes known as within subjects design.
This is the statistical analysis that you would conduct if you wanted to see if there was a difference in scores between two conditions. This might be either an independent measures design or a repeated measures design.
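As a minimal sketch, using Python's SciPy and some invented happiness ratings, a two-condition comparison might look like this (ttest_ind for an independent measures design, ttest_rel for a repeated measures one):

```python
from scipy import stats

# Invented happiness ratings (1-10) after eating each chocolate.
white = [4, 5, 6, 5, 4, 6]
milk = [7, 6, 8, 7, 6, 7]

# Independent measures design: two separate groups of people.
t_ind, p_ind = stats.ttest_ind(white, milk)

# Repeated measures design: the same people rated both chocolates.
t_rel, p_rel = stats.ttest_rel(white, milk)

print(t_ind, p_ind, t_rel, p_rel)
```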
This is the statistical analysis that you would conduct if you wanted to see if there was a difference in scores between three or more conditions. This might be either an independent measures design or a repeated measures design.
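A similar sketch for three conditions, using SciPy's one-way ANOVA on invented ratings:

```python
from scipy import stats

# Invented happiness ratings for the three chocolate conditions.
white = [4, 5, 6, 5, 4, 6]
milk = [7, 6, 8, 7, 6, 7]
plain = [5, 5, 6, 4, 5, 6]

# One-way ANOVA: is there a difference somewhere among the three conditions?
f_stat, p_value = stats.f_oneway(white, milk, plain)
print(f_stat, p_value)
```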
In a basic ANOVA you look at differences across conditions where one variable has been manipulated, such as whether participants eat white, milk or plain chocolate. In a factorial ANOVA you can manipulate more than one independent variable. So you may also want to consider whether the amount of chocolate consumed influences happiness. You now have two independent variables: type of chocolate (white, milk or plain) and amount of chocolate (square, bar or bucket). With a factorial ANOVA you can see whether there are differences in happiness according to the type or amount of chocolate eaten, but also whether there is an interaction between the two independent variables.
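A rough sketch of a factorial ANOVA in Python using statsmodels, with made-up data for the 3 × 3 design; the * in the formula includes both main effects and their interaction:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Invented data: 2 participants per cell of the 3 x 3 design.
data = pd.DataFrame({
    "choc_type": ["white", "milk", "plain"] * 6,
    "amount": (["square"] * 3 + ["bar"] * 3 + ["bucket"] * 3) * 2,
    "happiness": [4, 6, 5, 5, 7, 6, 6, 9, 7,
                  3, 7, 5, 4, 8, 6, 5, 9, 8],
})

# Factorial ANOVA: two main effects plus their interaction.
model = ols("happiness ~ C(choc_type) * C(amount)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))
```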
This is an analysis of covariance. It is just like a normal ANOVA, but it allows you to control for any variance in the happiness ratings that can be explained by a control variable, such as the sex of the participant.
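A minimal sketch of the same idea, again with statsmodels on invented data; sex is entered alongside chocolate type so that the variance it explains is accounted for:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Invented data: happiness by chocolate type, with participant sex recorded.
data = pd.DataFrame({
    "happiness": [4, 6, 5, 5, 8, 6, 4, 7, 5, 6, 9, 7],
    "choc_type": ["white", "milk", "plain"] * 4,
    "sex": ["f", "m", "f", "m", "f", "m", "f", "m", "f", "m", "f", "m"],
})

# Chocolate type is the factor of interest; sex is entered as the control variable.
model = ols("happiness ~ C(choc_type) + C(sex)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))
```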
This is a multivariate ANOVA, in which more than one dependent variable can be analysed. So rather than just analysing happiness ratings, you might also want to analyse calmness and hunger ratings.
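A rough sketch, assuming statsmodels' MANOVA interface and invented happiness, calmness and hunger ratings:

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Invented ratings: three dependent variables per participant.
data = pd.DataFrame({
    "choc_type": ["white", "milk", "plain"] * 5,
    "happiness": [4, 7, 5, 5, 8, 6, 4, 7, 5, 6, 9, 7, 5, 8, 6],
    "calmness": [5, 6, 6, 4, 7, 5, 5, 6, 6, 6, 8, 7, 4, 7, 5],
    "hunger": [3, 2, 4, 4, 1, 3, 5, 2, 4, 3, 1, 3, 4, 2, 4],
})

# All three dependent variables analysed together against chocolate type.
manova = MANOVA.from_formula("happiness + calmness + hunger ~ choc_type", data=data)
print(manova.mv_test())
```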
Usually in correlational studies we are trying to use some variables to predict what is going to happen in another variable. For example, we might want to know whether playing computer games, doing physical exercise and the age and sex of a child can predict how aggressive they are. The computer playing, physical exercise, age and sex would be the predictor variables in this study.
This is the variable that you are trying to predict in a correlational study. In our example, the child’s aggressiveness is the outcome variable as it is the variable that we are trying to predict.
A correlation simply looks at whether there is a relationship between two variables. It gives us two important pieces of information: whether the relationship is positive or negative and whether that relationship is significant.
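A minimal sketch with invented gaming hours and aggression scores, using SciPy; pearsonr returns the correlation coefficient (its sign gives the direction of the relationship) and the p value (its significance):

```python
from scipy import stats

# Invented weekly hours of computer games and aggression scores for 8 children.
hours_gaming = [2, 5, 1, 6, 3, 4, 0, 7]
aggression = [3, 6, 2, 7, 4, 5, 1, 8]

# r is positive or negative (direction); p tells you whether it is significant.
r, p_value = stats.pearsonr(hours_gaming, aggression)
print(r, p_value)
```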
A linear regression is like a correlation in that it looks at the relationship between two variables, but it goes a little further because you can use it as a predictive tool. So you might want to use computer game playing to predict aggressiveness in children.
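A short sketch using SciPy's linregress on the same sort of invented data, then using the fitted line to predict a new child's aggressiveness:

```python
from scipy import stats

hours_gaming = [2, 5, 1, 6, 3, 4, 0, 7]
aggression = [3, 6, 2, 7, 4, 5, 1, 8]

# Fit the regression line, then use it as a predictive tool.
fit = stats.linregress(hours_gaming, aggression)
predicted = fit.intercept + fit.slope * 4.5  # predicted aggressiveness for 4.5 hours
print(fit.slope, fit.intercept, predicted)
```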
This method of analysis allows you to consider a number of variables as predictors of an outcome variable. So you might want to know whether playing computer games, doing physical exercise and the age and sex of a child can predict how aggressive they are.
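A rough sketch in Python (statsmodels) with invented data for eight children, entering all four predictors into one model:

```python
import pandas as pd
from statsmodels.formula.api import ols

# Invented data: aggression plus four possible predictors.
data = pd.DataFrame({
    "aggression": [3, 6, 2, 7, 4, 5, 1, 8],
    "gaming": [2, 5, 1, 6, 3, 4, 0, 7],
    "exercise": [5, 2, 6, 1, 4, 3, 7, 1],
    "age": [8, 10, 7, 11, 9, 10, 7, 12],
    "sex": ["m", "f", "f", "m", "m", "f", "f", "m"],
})

# All four predictors entered together into one multiple regression model.
model = ols("aggression ~ gaming + exercise + age + C(sex)", data=data).fit()
print(model.summary())
```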
When you have a large number of predictor variables, there are different ways you can build the regression model to look at these different predictors: simultaneous, stepwise or hierarchical (also sometimes known as blocked regression).
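As an illustration of the hierarchical (blocked) approach, this sketch (on the same invented child data) enters age and sex as a first block, then adds the behavioural predictors as a second block and tests whether they explain extra variance:

```python
import pandas as pd
from statsmodels.formula.api import ols

data = pd.DataFrame({
    "aggression": [3, 6, 2, 7, 4, 5, 1, 8],
    "gaming": [2, 5, 1, 6, 3, 4, 0, 7],
    "exercise": [5, 2, 6, 1, 4, 3, 7, 1],
    "age": [8, 10, 7, 11, 9, 10, 7, 12],
    "sex": ["m", "f", "f", "m", "m", "f", "f", "m"],
})

# Block 1: demographics only. Block 2: demographics plus behaviour.
block1 = ols("aggression ~ age + C(sex)", data=data).fit()
block2 = ols("aggression ~ age + C(sex) + gaming + exercise", data=data).fit()

# R squared for each block, and an F test of the change between them.
print(block1.rsquared, block2.rsquared)
print(block2.compare_f_test(block1))
```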
Usually you can only include continuous or binary variables as predictors in a multiple regression model. If you have a categorical variable, such as which after-school club the child takes part in (football, rugby, chess, ballet or none), you need to recode it as dummy variables before it can go into the regression model.
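A minimal pandas sketch: get_dummies turns the single after-school club column into a set of 0/1 columns that a regression model can use (one category is dropped as the reference):

```python
import pandas as pd

# Which after-school club each child attends (a five-category variable).
clubs = pd.Series(["football", "rugby", "chess", "ballet", "none", "football"])

# One 0/1 column per category; dropping the first avoids perfect collinearity.
dummies = pd.get_dummies(clubs, prefix="club", drop_first=True)
print(dummies)
```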
Just like a normal regression, but this time the outcome variable is binary, such as male or female, left or right handed, guilty or not guilty.
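A short sketch with invented data, using a statsmodels logit model to predict a binary guilty/not-guilty outcome from a single hypothetical predictor (evidence_strength is made up for the example):

```python
import pandas as pd
from statsmodels.formula.api import logit

# Invented data: a binary verdict and one hypothetical continuous predictor.
data = pd.DataFrame({
    "guilty": [0, 1, 0, 1, 1, 0, 1, 0, 1, 0],
    "evidence_strength": [1, 5, 2, 6, 4, 3, 2, 4, 5, 3],
})

# Logistic regression: the outcome is binary, so logit rather than ordinary regression.
model = logit("guilty ~ evidence_strength", data=data).fit()
print(model.summary())
```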
This is the main method of analysis used to analyse data collected from questionnaires. It can look at a large number of questions and then see which questions “belong together” on the basis of participants providing similar responses to those questions. For example, on a personality questionnaire, there might be a number of items that ask about extroversion. A factor analysis would clump these questions together and provide a summary score for each person.
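A rough sketch using scikit-learn's FactorAnalysis on simulated responses where the first three items tap one trait and the last three another; the loadings show which items belong together, and fit_transform gives each person a summary score per factor:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Simulated responses: 50 people answer 6 items; items 0-2 reflect one trait
# (e.g. extroversion) and items 3-5 a second trait.
rng = np.random.default_rng(0)
trait_a = rng.normal(size=(50, 1))
trait_b = rng.normal(size=(50, 1))
responses = np.hstack([
    trait_a + rng.normal(scale=0.3, size=(50, 3)),
    trait_b + rng.normal(scale=0.3, size=(50, 3)),
])

fa = FactorAnalysis(n_components=2)
scores = fa.fit_transform(responses)  # a summary score per factor for each person
print(fa.components_.round(2))        # loadings: which items belong together
```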
This refers to the consistency of the measure. You would want a personality questionnaire, for example, to provide the same personality trait scores if a participant were to repeat the questionnaire at a later date.
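A minimal sketch of the test-retest form of reliability described here: correlate invented trait scores from the same people at two time points, where a high correlation indicates a consistent measure:

```python
from scipy import stats

# Invented extroversion scores from the same 8 people at two time points.
time1 = [32, 45, 28, 50, 39, 41, 36, 47]
time2 = [30, 46, 27, 49, 41, 40, 35, 45]

# A strong correlation across sessions suggests a reliable measure.
r, p_value = stats.pearsonr(time1, time2)
print(r, p_value)
```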
This refers to the idea that a measure is really measuring what it claims to be measuring. So, for example, when you fill in an online “IQ test”, does this test really tell you anything about how intelligent you are, or does it tell you something slightly different?
Frequency data tells you how many people belong to a particular category. For example, you might want to know whether each person in a group is left handed or right handed. This would tell you the frequency of left handedness and right handedness.
This is the way that frequency data is analysed. In the handedness example, if you had 100 people and handedness were random, you would expect 50 people to be left handed and 50 people to be right handed. In a chi squared analysis, you can see whether the observed frequencies are significantly different from what you would expect by chance.
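A minimal SciPy sketch of the handedness example, comparing invented observed counts against the 50/50 split expected by chance:

```python
from scipy import stats

# Observed handedness counts in a group of 100 people.
observed = [12, 88]   # left handed, right handed
expected = [50, 50]   # what chance alone would predict

chi2, p_value = stats.chisquare(f_obs=observed, f_exp=expected)
print(chi2, p_value)
```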