
Multivariate and Bivariate


Introduction to Multivariate and Bivariate Analysis
When conducting research, analysts measure cause and effect in order to draw conclusions about relationships among variables. For example, to test whether a drug can reduce appetite, researchers give participants a dose of the drug before each meal. The independent variable (or predictor) is the variable you manipulate in the study: here, taking the drug. The dependent variable (or outcome) is the variable you measure: here, appetite.

One group takes the drug before each meal and a control group does not take the drug at all. After several days, the researchers note that the drug-takers have voluntarily reduced their caloric intake by 30%. The researchers conclude that regular consumption of the drug reduces appetite. This type of study is called a univariate study because it examines the effect of the independent variable (drug use) on a single dependent variable (appetite).


Bivariate Analysis
Bivariate studies are different from univariate studies because they allow the researcher to analyze the relationship between two variables (often denoted X and Y) in order to test simple hypotheses of association and causality. For example, if you wanted to know whether there is a relationship between the number of students in an engineering classroom (independent variable) and their grades in that subject (dependent variable), you would use bivariate analysis, since it measures two elements based on the observation of data.

There are essentially four steps to conducting bivariate analysis as follows:


Step 1: Define the nature of the relationship.
For example, if you were testing the relationship of class size and grades in an engineering class, then you would report the following: “The data show a relationship between class size and grades. Smaller class sizes (20 or fewer students) have a grade point average of 4.4, whereas larger class sizes (21-100 students) have a grade point average of 3.1. This demonstrates that students in smaller classes earn grades that are approximately 42% higher than those in large classes.”
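As a quick arithmetic check, the relative difference implied by the two averages in the example can be computed directly (the figures are the hypothetical ones quoted above):

```python
# Grade gap from Step 1's example: small classes average 4.4,
# large classes average 3.1 (hypothetical figures).

small_class_gpa = 4.4   # average GPA, classes of 20 or fewer students
large_class_gpa = 3.1   # average GPA, classes of 21-100 students

# Relative difference: how much higher the small-class average is,
# expressed as a percentage of the large-class average.
pct_higher = (small_class_gpa - large_class_gpa) / large_class_gpa * 100
print(f"Small classes average {pct_higher:.0f}% higher grades.")
```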


Step 2: Identify the type and direction of the relationship
In order to determine the type and direction of the relationship, you must first decide which of the four levels of measurement applies to your data:

Nominal, which is non-numerical and places an object within a category (e.g., male or female)
Ordinal, which ranks data from lowest to highest
Interval, which indicates the distance of one object from the next
Ratio, which has all of the properties above plus an absolute zero point

In the example above, the number of students and the grade point average are both ordinal variables, so the relationship between them is correlative.
Correlation describes the relationship, or degree of association, that exists between variables. We can conclude that small class size had a positive effect on grades: the decrease in the number of students in a class corresponded to an increase in grades. This is a negative correlation. If an increase in the number of students had led to an increase in grades, that would have been a positive correlation.
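The direction of a correlation can be illustrated with a short sketch. The numbers below are made up, and the Pearson coefficient (formally introduced in Step 4) is used here simply because its sign shows the direction of the relationship:

```python
# Illustrating correlation direction with hypothetical data:
# as class size goes up, GPA goes down -> a negative correlation.

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

class_sizes = [12, 18, 25, 40, 60, 95]        # hypothetical class sizes
gpas        = [4.4, 4.2, 3.8, 3.5, 3.2, 3.0]  # hypothetical average GPAs

r = pearson_r(class_sizes, gpas)
print(f"r = {r:.2f}")  # negative value: larger classes, lower grades
```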


Step 3: Determine if the relationship is statistically significant
Statistical significance is used to determine whether the results are strong enough to support a genuine connection. In other words, did the results occur by chance, or would we expect to see similar results in another, similar study population? In many types of studies, a relationship is considered significant (the association seen in the sample is not occurring randomly or by chance) if it reaches a significance level of .05. This means that the pattern of observations we have measured for these two variables would occur by chance only 5 times out of 100.
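One way to make "occurred by chance" concrete is a permutation test: pool the data, shuffle the group labels many times, and count how often a gap as large as the observed one arises at random. This is an illustration of the idea, not a method the text prescribes, and the grade data below are hypothetical:

```python
# Minimal sketch of what "significant at .05" means, via a
# permutation test on made-up small-class vs. large-class GPAs.
import random

random.seed(0)  # fixed seed so the sketch is reproducible

small = [4.5, 4.2, 4.4, 4.3, 4.6]  # hypothetical GPAs, small classes
large = [3.0, 3.3, 3.1, 3.2, 2.9]  # hypothetical GPAs, large classes

observed = sum(small) / len(small) - sum(large) / len(large)

# Null hypothesis: the group labels don't matter. Shuffle the pooled
# data and see how often a gap at least this large appears by chance.
pooled = small + large
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[:5]) / 5 - sum(pooled[5:]) / 5
    if abs(diff) >= abs(observed):
        extreme += 1

p_value = extreme / trials
print(f"observed gap = {observed:.2f}, p = {p_value:.4f}")
# A p-value below .05 means a gap this large rarely arises by chance.
```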


Step 4: Identify the strength of the relationship
To determine the strength of a bivariate relationship, researchers choose a standard formula depending upon the type of data used. For example, Pearson’s correlation coefficient measures the strength of a linear relationship between X and Y. The relationship between two ordinal variables can be measured using a statistic called Spearman’s rho, which calculates a correlation coefficient on rankings rather than on the actual data. In our example, we looked at how smaller class sizes led to higher grade point averages. Both the number of students in a class and the grades can be ranked.

Spearman’s rho will vary between –1 and +1, with –1 being a perfect negative correlation (if you rank high on X, you will rank low on Y), +1 being a perfect positive correlation (if you rank high on X, you will rank high on Y), and 0 being no relationship between the two (rank on X tells us nothing about rank on Y).
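A minimal sketch of Spearman's rho on hypothetical class-size and GPA data follows: rank each variable (tied values share their average rank), then correlate the ranks. Since the rankings here move in exactly opposite directions, rho comes out at the -1 end of the scale described above:

```python
# Spearman's rho: rank each variable, then correlate the ranks.
# All data are hypothetical; ties receive average ranks.

def ranks(values):
    """1-based ranks; tied values share the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # mean of ranks i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(xs, ys):
    """Pearson correlation applied to the ranks of xs and ys."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

class_sizes = [15, 20, 35, 50, 80]        # hypothetical class sizes
gpas        = [4.4, 4.1, 3.6, 3.3, 3.0]  # hypothetical average GPAs

rho = spearman_rho(class_sizes, gpas)
print(f"rho = {rho:.2f}")  # -1.00: a perfect negative rank correlation
```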

There are several other formulas that can be used to measure association, depending on the type of data, including Kendall’s tau, tau-b, and tau-c, the Goodman-Kruskal gamma, chi-square (χ²), lambda, the Mann-Whitney U test, and the Wilcoxon signed-rank test.


Multivariate Analysis
Multivariate studies are similar to bivariate studies, but multivariate studies have more than one dependent variable. For example, if an advertiser wanted to examine the effectiveness of three different banner ads on a popular website, the advertiser could measure each ad’s click rate for both men and women. Researchers could then use multivariate statistical analysis to examine the relationships among all of the variables.

Multivariate analytical techniques represent a variety of mathematical models used to measure and quantify outcomes while taking into account important factors that can influence the relationship. There are several multivariate analytical techniques that one can use to examine the relationships among variables. The most popular is multiple regression analysis, which helps one understand how the typical value of the dependent variable changes when any one of the independent variables is varied while the other independent variables are held fixed. Other techniques include factor analysis, path analysis, and multivariate analysis of variance (MANOVA).
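A minimal sketch of multiple regression with two independent variables follows, fitting y = b0 + b1·x1 + b2·x2 by solving the normal equations X′Xb = X′y in pure Python. The data are hypothetical and generated exactly from known coefficients, so the fit should recover them:

```python
# Multiple regression sketch: fit y = b0 + b1*x1 + b2*x2 by
# solving the normal equations X'X b = X'y (hypothetical data).

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting, small systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ols(xs, ys):
    """Ordinary least squares; a column of 1s is added for the intercept."""
    X = [[1.0] + list(row) for row in xs]
    k = len(X[0])
    XtX = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(k)]
           for i in range(k)]
    Xty = [sum(X[r][i] * ys[r] for r in range(len(X))) for i in range(k)]
    return solve(XtX, Xty)

# Hypothetical observations generated exactly from y = 1 + 2*x1 - 0.5*x2,
# so the fitted coefficients should recover 1, 2, and -0.5.
xs = [(1, 2), (2, 1), (3, 4), (4, 2), (5, 5), (6, 3)]
ys = [1 + 2 * x1 - 0.5 * x2 for x1, x2 in xs]

b0, b1, b2 = ols(xs, ys)
print(f"intercept={b0:.2f}, b1={b1:.2f}, b2={b2:.2f}")
```

Holding x2 fixed, the fitted b1 tells you how y changes per unit of x1, which is exactly the "varied while the others are held fixed" interpretation described above.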