1. Introduction to t-tests
Read Tutorial 8 of Tredoux, C.G. & Durrheim, K.L. (2018). Numbers, hypotheses and conclusions. 3rd edition. Cape Town: Juta Publishers (the 2nd edition may also be used).
Often in psychological research, we are interested in testing whether there are differences between groups.
For instance, we might be interested in testing whether Cognitive Behavioural Therapy (CBT) works better at treating depression than psychodynamic therapy.
How would we do this?
We would need to compare the average post-treatment depression score for participants who received CBT against the average post-treatment depression score of participants who received psychodynamic therapy (assume higher depression scores reflect higher levels of depression).
Ideally, we would want to compare the population mean depression following CBT against the population mean depression following psychodynamic therapy, but we typically only have access to samples.
We would need to take the mean post-treatment depression score for each group (CBT and psychodynamic) and compare them:
| CBT | Psychodynamic |
|---|---|
| 24 | 36 |
| 23 | 28 |
| 25 | 34 |
| 21 | 29 |
| 14 | 45 |
| 20 | 41 |
| Mean = 21.2 | Mean = 35.5 |
A t-test is an inferential test that can be used to compare the mean scores (in this case, mean depression) of samples, and allows us to draw inferences about the population without necessarily having access to the entire population.
The t-test, much like the z-test, is an inferential test; the difference is that the population standard deviation is unknown for the t-test, whereas it is known when conducting a z-test.
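If you would like to verify the group means in the table above for yourself, here is a minimal R sketch (the vector names cbt and psychodynamic are just illustrative labels):

# Post-treatment depression scores from the table above
cbt = c(24, 23, 25, 21, 14, 20)
psychodynamic = c(36, 28, 34, 29, 45, 41)

# Group means: 21.1667 (reported as 21.2) and 35.5
c(mean(cbt), mean(psychodynamic))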
Exercise 1.1.1.1
Hint: More than one answer is correct, choose any correct answer.
1.1. Logic of t-tests
Even when sample means come from the same population, it is very unlikely that any two sample means will ever be exactly the same. Imagine obtaining a mean depression score of exactly 30 for CBT and exactly 30 for psychodynamic therapy from a sample of 6 in each group: it seems very unlikely. This is the result of sampling error, the natural error we expect to see in our sample estimates.
However, the bigger the difference in sample means, the more likely it is that there is a true difference in population means, rather than merely sampling error.
The t-test assesses the size of the difference between sample means (the mean difference) by testing the likelihood that the sample means come from the same population. In the case of the CBT and psychodynamic example, we could either:
- fail to reject \(H_{0}\): we conclude that post-treatment depression is equal across CBT and psychodynamic therapies (i.e. suggesting that neither therapy is superior)
  - any difference in sample means is due to sampling error
- reject \(H_{0}\) in favour of \(H_{1}\): we conclude that post-treatment depression is lower for CBT relative to psychodynamic therapy (i.e. CBT is more effective than psychodynamic therapy in treating depression)
  - the difference in sample means reflects a combination of sampling error and a true difference in population means
How does the t-test accomplish this? Roughly speaking,
- based on the Central Limit Theorem (CLT), we know that sample means will differ from each other, on average, by the standard error
- the t-test calculates the standard error and compares the two sample means in terms of the standard error
- the t-test sets up a signal-to-noise ratio, where the signal is the difference in means, and the noise is the size of the standard error estimate
- the t-test then assesses whether the signal is large enough relative to the noise to conclude that the sample means come from different populations
If the idea of a ‘signal-to-noise ratio’ is new to you, or you don’t immediately grasp it, don’t worry: we will look at this idea more closely in the next section.
1.2. Sample estimates from the same population: An illustration of “Noise”
Light and Relaxation therapy for insomnia
Let’s say we know that treated insomnia patients typically get 5.5 hours of sleep on average, with a standard deviation of 2. We draw a random sample of 20 insomnia patients who received light therapy, and obtain a sample mean (\(\bar{X}\)) of 6.5 hours slept, with a standard deviation of 2. We draw another random sample of 20 insomnia patients who received relaxation therapy, and obtain a sample mean (\(\bar{X}\)) of 6 hours of sleep, with a standard deviation of 2.5.
Let’s now assume that there is no difference in the effect of light and relaxation therapy on hours of sleep acquired.
Exercise 1.2.1.1
Exercise 1.2.1.2
Run the code chunk below, and check the average number of hours slept for each sample of treated insomnia patients who have undergone either light or relaxation therapy. Now run the code chunk a second time, and see how the means change! The code samples 10 scores from the population of treated insomnia patients (where the mean and standard deviation are known).
Note: Notice that the set.seed() command is missing. If it is not specified, then a new random sample will be drawn each time the code is run.
light.therapy = rnorm(10, mean = 6.5, sd = 2)
relaxation.therapy = rnorm(10, mean = 6, sd = 2)
c(mean(light.therapy), mean(relaxation.therapy))
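If you would like to see the size of this sampling “noise” directly, a small extension of the chunk above draws many samples and looks at the spread of their means (this spread approximates the standard error mentioned in section 1.1):

# Draw 1000 samples of 10 from each population and store each sample mean
light.means = replicate(1000, mean(rnorm(10, mean = 6.5, sd = 2)))
relaxation.means = replicate(1000, mean(rnorm(10, mean = 6, sd = 2)))

# The spread of the sample means approximates sigma/sqrt(n) = 2/sqrt(10), about 0.63
c(sd(light.means), sd(relaxation.means))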
1.3. Sample estimates from different populations
Light and Relaxation therapy for insomnia
Now let’s say that the average hours slept does in fact differ between insomnia patients treated with light therapy versus those treated with relaxation therapy. The population mean (\({\mu}\)) for light therapy is 7 hours, with a standard deviation (\({\sigma}\)) of 1.5, whilst the population mean (\({\mu}\)) for relaxation therapy is 5 hours, with a standard deviation (\({\sigma}\)) of 1.5.
See a visual depiction of different population distributions for light and relaxation therapy below. Notice that the distributions appear to have different central tendency (means or medians) but there is still quite a lot of overlap.
(Figure: population distributions of hours slept under light therapy and relaxation therapy.)
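A sketch of how such a figure can be drawn in R, using the population values given above (the plotting choices here are ours):

# Population distributions: relaxation therapy N(5, 1.5) and light therapy N(7, 1.5)
curve(dnorm(x, mean = 5, sd = 1.5), from = 0, to = 12, lty = 2,
      xlab = "Hours of sleep", ylab = "Density")
curve(dnorm(x, mean = 7, sd = 1.5), add = TRUE)
legend("topright", legend = c("Relaxation therapy", "Light therapy"), lty = c(2, 1))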
2. Equation and interpretation
2.1 General form of the t-test
There are three types of t-tests, but they all follow the same form:
\[ \begin{aligned} t =\frac{difference\ between\ the\ means}{standard\ error} \end{aligned} \]
or in other words…
\[ \begin{aligned} t =\frac{signal}{noise} \end{aligned} \]

As mentioned previously, the t-test is much like the z-test, except that the population standard deviation is unknown for the t-test but known for the z-test; sample information is therefore used to estimate the population standard deviation. See this difference illustrated in the formulas below.
\[
\begin{aligned}
z = \frac{\bar{x} - \mu}{\frac{\sigma}{\sqrt{n}}} \ vs \
t = \frac{\bar{X}_{1} - \bar{X}_{2}}{\frac{s}{\sqrt{n}}}
\end{aligned}
\]
\(\bar{X}_{1}\) = mean of sample 1 (e.g. Light therapy)
\(\bar{X}_{2}\) = mean of sample 2 (e.g. Relaxation therapy)
2.2 Degrees of freedom (df)
Degrees of freedom (df) refer to the number of values that are free to vary, a rule that is commonly applied in inferential tests. Degrees of freedom relate to the standard deviation: deviations from the mean must always add up to 0. This is because the mean is the balance point of the scores, so the sum of deviations for values greater than the mean (positive sum) will cancel the sum of deviations for values less than the mean (negative sum).
Degrees of freedom formula for t-tests:
\[ \begin{aligned} {single\ group\ or\ matched\ pairs}, df = n-1 \end{aligned} \]
df = degrees of freedom
n = sample size
\[ \begin{aligned} {two\ independent\ groups}, df = n_{1}+ n_{2}-2 \end{aligned} \]
df = degrees of freedom
\(n_{1}\) = group 1 sample size
\(n_{2}\) = group 2 sample size
These degrees of freedom formulas show that one degree of freedom is lost for each sample mean that has to be estimated when a t-test is computed.
One easy way to think about degrees of freedom:
Let’s say we have a sample of 5 scores: (1, 3, 5, 7, 9). The mean is 5, and the sum of the deviations from the mean is therefore (1-5)+(3-5)+(5-5)+(7-5)+(9-5) = 0
If we were given the mean and all the values except one, we could work out the last value. Thus, this value is “fixed” and is not free to vary. In other words, the last value has lost its freedom to vary.
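You can confirm this in R: given the mean and four of the five scores, the fifth is fully determined.

scores = c(1, 3, 5, 7, 9)
m = mean(scores)                # 5

# Pretend the last score is unknown: it is fixed by the mean and the other four
last.score = length(scores) * m - sum(scores[1:4])
last.score                      # 9, as expected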
Exercise 2.2.1.1
Exercise 2.2.1.2
2.3 Hypothesis testing and t-tests
Hypothesis testing with respect to the t-test is very much like the z-test: the null hypothesis is the hypothesis being tested (i.e. the null hypothesis is assumed to be true).
In the case of a t-test:
- The null hypothesis (\(H_{0}\)) states that there is no difference between means
- The alternate hypothesis (\(H_{1}\)) states that either
- there is a difference between means (non-directional hypothesis) or
- one mean is greater than/ smaller than another mean (directional hypothesis)
In symbolic form:
- \(H_{0}\): \({\mu_{1}}\) = \({\mu_{2}}\)
- \(H_{1}\): \(\mu_{1} \neq \mu_{2}\) OR \(\mu_{1} > \mu_{2}\) OR \(\mu_{1} < \mu_{2}\)
An example (Treatments for insomnia):
- \(H_{0}\): mean hours of sleep is equal for insomnia patients receiving light or relaxation therapy
- \(H_{0}\): \({\mu_{Light\ therapy}}\) = \({\mu_{Relaxation\ therapy}}\)
- \(H_{1}\): mean hours of sleep is higher for insomnia patients receiving light therapy versus those receiving relaxation therapy (directional hypothesis)
- \(H_{1}\): \({\mu_{Light\ therapy}}\) > \({\mu_{Relaxation\ therapy}}\)
Exercise 2.3.1.1

Hint: Take note of the symbolic formulation of the alternate hypothesis above. Think about which mean is greater (light or relaxation therapy), and thus on which tail of the distribution you would expect the estimate to fall
3. Assumptions
The t-test is only appropriate to use when particular assumptions about the data are met, and these need to be checked before a t-test can be run. If one were to run a t-test without checking the assumptions, the analysis may be considered invalid, and the results unreliable.
The 3 major assumptions that apply to a t-test include:
- Normality
- Homogeneity of variance
- Independence of observations
3.1 Normality assumption
T-tests assume that your data are normally distributed; put more formally, that your samples come from populations that are normally distributed.
In order to check for normality, one can draw histograms of the data for each group.
Caffeine and muscle metabolism
26 participants were randomly allocated, 13 each, to either a placebo or a caffeine condition (participants received either a pure caffeine pill or a sugar pill), after which they engaged in an exercise task during which their muscle metabolism was measured. Muscle metabolism was measured using the participant’s respiratory exchange ratio (RER), the ratio of carbon dioxide produced to oxygen consumed (%).
| Placebo | Caffeine |
|---|---|
| 92 | 94 |
| 107 | 95 |
| 95 | 93 |
| 95 | 90 |
| 90 | 92 |
| 98 | 97 |
| 98 | 98 |
| 96 | 94 |
| 99 | 96 |
| 102 | 94 |
| 100 | 91 |
| 103 | 94 |
| 99 | 95 |
Exercise 3.1.1.1
Store the scores presented above in two different vectors, one named Placebo and the other Caffeine.
Note: Use the c() command to combine scores. Remember to use a ? before c() to inspect the help file on this function, which combines (concatenates) values into a vector.
Hint: Be sure to check that you have 13 observations in each vector. If they do not match up evenly, you will have trouble later on.
... = c(...)
... = c(...)
Placebo = c(92, 107, 95, 95, 90, 98, 98, 96, 99, 102, 100, 103, 99)
Caffeine = c(94, 95, 93, 90, 92, 97, 98, 94, 96, 94, 91, 94, 95)
Exercise 3.1.1.2
Now plot the distributions of Placebo and Caffeine alongside each other.
Note: Run your vectors Placebo and Caffeine again in the code chunk below before coding the histograms, as vectors do not automatically carry over to the next code chunk.
par(mfrow = c(...,...))
hist(..., xlab ="...", main= "...")
hist(..., xlab ="...", main= "...")
par(mfrow = c(1, 2))
hist(Placebo, xlab = "Respiratory Exchange Ratio (%)", main = "Distribution of muscle metabolism\n under Placebo condition")
hist(Caffeine, xlab = "Respiratory Exchange Ratio (%)", main = "Distribution of muscle metabolism\n under Caffeine condition")
Exercise 3.1.1.3
Hint: More than one answer is correct, choose any correct answer.
Exercise 3.1.1.4
3.2 Homogeneity of variance assumption
Homogeneity of variance refers to the similarity or uniformity of the spread of scores across groups. More specifically, it means that the two groups need to have similar (ideally equal) variances.
Note: Each of the three hypothetical scenarios presented below displays a distribution for each of two groups.
(Figure: three hypothetical scenarios, each showing the distributions of two groups.)
Exercise 3.2.1.1
Formal tests can be used to check for homogeneity of variance, but for now we will just use the k-ratio rule of thumb: one group’s variance should not be more than 4 times bigger than another’s. In other words, if the larger variance divided by the smaller variance (k = [larger variance]/[smaller variance]) is more than 4, the homogeneity of variance assumption is violated: we have heterogeneity of variance, or unequal variances.
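This rule of thumb is easy to wrap in a few lines of R; a sketch (the helper name k.ratio is ours, not part of base R):

# k ratio: larger group variance divided by smaller group variance
k.ratio = function(group1, group2) {
  max(var(group1), var(group2)) / min(var(group1), var(group2))
}
# Values greater than 4 suggest heterogeneity of variance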
Exercise 3.2.1.2
Determine whether homogeneity of variance holds between the Placebo and Caffeine groups.
Hint: Calculate the variance for each group first, using the var() command, before comparing them.
# You may want to calculate the variance for each group first, in order to find which group has the larger variance
var(...)/var(...)
var(Placebo)/var(Caffeine)
Exercise 3.2.1.3
Hint: More than one answer is correct, choose any correct answer.
3.3 Independence assumption
Groups need to be independent of one another in order for a t-test to be run. In other words, there should be different, unrelated people in each group. In the study of caffeine on muscle metabolism, this would mean that the participants who received the caffeine pill must be different people from those who received the sugar pill.
The independence assumption ensures that the scores in each group have no relationship with each other.
Note that the independence assumption does not apply to paired-sample t-tests, where the two sets of scores come from related participants.
Exercise 3.3.1.1
4. Single sample
There are 3 types of t-tests:
- Single sample t-test: where a single mean is compared against a known population mean
- Independent samples t-test: where two sample means from independent groups are compared
- Repeated-measures t-test/Dependent samples t-test/Paired t-test: where two sample means from related groups are compared
Exercise 4.0.0.1
4.1 Single sample t-test
In this section, we will be discussing single sample t-tests
What is required in order to run a single sample t-test?
1. A single sample (i.e. one group of participants)
2. A known population mean
3. An unknown population variance/standard deviation
A single sample t-test is much like a z-test, except that the population variance is unknown. If the population variance were known, then a single sample z-test could be run.
See single sample t-test formula below:
\[
\begin{aligned}
t = \frac{\overline{X} - \mu}{\frac{s}{\sqrt{n}}}
\end{aligned}
\]
\(\overline{X}\) = sample average
\({\mu}\) = population average
\({s}\) = sample standard deviation
\({n}\) = sample size
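As a worked sketch with made-up numbers (a hypothetical sample of 8 scores tested against a known population mean of 100):

x = c(104, 98, 110, 102, 95, 108, 101, 99)   # hypothetical sample scores
mu = 100                                     # known population mean
tstat = (mean(x) - mu)/(sd(x)/sqrt(length(x)))
tstat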
Smoking and systolic blood pressure study 1
In a series of studies investigating the relationship between smoking and systolic blood pressure, systolic blood pressure was measured amongst 10 smokers. Data is stored in the .csv file named “smoking”. The data set contains the following variables:
| Column names | Description |
|---|---|
| particID | Participant identification number |
| sbp | Systolic blood pressure |
Exercise 4.1.1.1
Exercise 4.1.1.2
Import the “smoking.csv” data set
Hint: See Guided Tutorial 1 for a reminder on how to import external data sets
Hint: Check that you’ve successfully loaded the smoking data by using the structure function (i.e. str(…))
smoking = read.csv("...")
smoking = read.csv("smoking.csv")
Exercise 4.1.1.3
Assume that the population mean systolic blood pressure for non-smoking adults is 102. Calculate the t statistic for a single sample t-test, using the formula above.
Hint: Use systolic blood pressure data from smokers contained in “smoking” data frame
Reminder: Remember that when working with data frame objects (made up of both rows and columns), variables are extracted using the $ sign (see Guided Tutorial 1).
tstat1=(mean(...$...) - 102)/(sd(...$...)/sqrt(...))
tstat1
tstat1=(mean(smoking$sbp) - 102)/(sd(smoking$sbp)/sqrt(10))
tstat1
Exercise 4.1.1.4
Assuming a 1-tailed test, calculate the p-value associated with the t statistic calculated in the previous exercise, using the pt() command.
Hint: The pt() command works much like the pnorm() command, but instead of providing a mean and standard deviation, the degrees of freedom are specified under the argument df.
Reminder: See section 2.2 for how to calculate the degrees of freedom for a single sample t-test
1-pt(tstat1, df = ...)
1-pt(tstat1, df = 9)
Exercise 4.1.1.5
Hint: More than one answer is correct, choose any correct answer.
An alternate way of conducting a single sample t-test, and any other t-test for that matter, is by using the t.test() command. The t.test() command has many arguments (here are some):
| Argument | Description |
|---|---|
| x | Sample scores for group x (numeric) |
| y | Sample scores for group y (numeric) |
| alternative | indicates whether the hypothesis is directional or non-directional (options: “greater”, “less” or “two.sided”) |
| mu | value of the true population mean specified by the null hypothesis |
| paired | indicates whether the t-test is paired or not (options: TRUE or FALSE) |
| var.equal | indicates whether homogeneity of variance is assumed or not (options: TRUE or FALSE) |
| conf.level | confidence level of the reported interval, set to 0.95 by default |
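To see the main arguments in action, here is a sketch of a one-sample call on simulated data (all values here are made up purely for illustration):

# Simulated sample of 12 scores
x = rnorm(12, mean = 105, sd = 10)

# Non-directional single sample t-test against a hypothesised population mean of 100
t.test(x, alternative = "two.sided", mu = 100, conf.level = 0.95)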
Exercise 4.1.1.6
Read the help file for the t.test() command
Note: You should inspect the help file for the t.test() command by adding a ? before the command.
?t.test
Exercise 4.1.1.7
Use the t.test() command to conduct a 1-tailed single sample t-test of sample mean systolic blood pressure of smokers compared to the population mean of non-smokers
t.test(...$..., alternative = "...", mu = ...) # mu = population mean
t.test(smoking$sbp, alternative = "greater", mu = 102) # mu = population mean
The output provided from the single sample t-test includes:
- t statistic –> t = 19.16
- degrees of freedom –> df = 9
- p-value –> p-value < 0.001 (in fact, smaller than \(6.63 *10^{-9}\))
- 95 percent confidence interval –> one-sided, approximately [124.25, \(\infty\))
- mean of sbp –> 126.6
5. Independent samples
An independent t-test is conducted on the means of two groups (i.e., two sample means) whose scores are independent of each other.
The independence assumption is a critical assumption to uphold in order to run an independent t-test.
The homogeneity of variance assumption is also important when running an independent t-test, however, an alternate t-test formula can be used if homogeneity of variance is violated.
In the case of homogeneous variance, we use a POOLED ESTIMATE OF VARIANCE, \({s_{p}^2}\), i.e., when k ratio < 4 (see section 3. Assumptions).
In the case of heterogeneous variance, we use SEPARATE VARIANCE ESTIMATES, \({\sqrt{\frac{s^2_{1}}{n_{1}}+ \frac{s^2_{2}}{n_{2}}}}\) i.e. when k ratio > 4 (see section 3. Assumptions).
The pooled estimate formula for variance is used in the t-test when homogeneity of variance is upheld (take note of the denominator):
\[
\begin{aligned}
t = \frac{\overline{X_{1}}-\overline{X_{2}}}{\sqrt{s_p^2(\frac{1}{n_1}+\frac{1}{n_2})}}
\end{aligned}
\]
\(\overline{X_{1}}\) = sample mean of group 1
\(\overline{X_{2}}\) = sample mean of group 2
\({n_{1}}\) = sample size of group 1
\({n_{2}}\) = sample size of group 2
\({s_{p}^2}\) is calculated using the formula below.
Note that although \({s_{p}^2}\) looks complicated, it is just a way of weighting the individual variances of the two samples by their sample sizes (or degrees of freedom, which are essentially their sample sizes, slightly corrected).
\[ \begin{aligned} s_{p}^2 = \frac {(n_{1} - 1)s_{1}^2 + (n_{2} - 1)s_{2}^2}{n_{1} + n_{2} - 2} \end{aligned} \]
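As a check on your understanding of the formula, \(s_{p}^2\) can be computed directly in R; a sketch (the helper name pooled.var is ours):

# Pooled variance: the two sample variances weighted by their degrees of freedom
pooled.var = function(x, y) {
  n1 = length(x)
  n2 = length(y)
  ((n1 - 1)*var(x) + (n2 - 1)*var(y))/(n1 + n2 - 2)
}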
We use a different t-test formula, the separate variances formula, when homogeneity of variance is violated (take note of the denominator):
\[ \begin{aligned} t = \frac{\overline{X_{1}} - \overline{X_{2}}}{\sqrt{\frac{s^2_{1}}{n_{1}}+ \frac{s^2_{2}}{n_{2}}}} \end{aligned} \]
An important difference is that we now have to use a different value for df, the degrees of freedom. The course textbook advises you to take the smaller of \({n_{1}}-1\) and \({n_{2}}-1\), although there are more precise methods available (see https://en.wikipedia.org/wiki/Welch%27s_t-test); R uses those, rather than the simpler, more conservative method the textbook recommends.
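For the curious, the more precise df that R uses comes from the Welch-Satterthwaite approximation, which can also be computed by hand; a sketch (the helper name welch.df is ours):

# Welch-Satterthwaite approximation to the degrees of freedom
welch.df = function(x, y) {
  v1 = var(x)/length(x)   # s1^2/n1
  v2 = var(y)/length(y)   # s2^2/n2
  (v1 + v2)^2 / (v1^2/(length(x) - 1) + v2^2/(length(y) - 1))
}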
Smoking and systolic blood pressure study 2
In a series of studies investigating the relationship between smoking and systolic blood pressure, systolic blood pressure was measured amongst 10 non-smokers. Researchers believe that systolic blood pressure is higher amongst smokers than non-smokers. Data is stored in the .csv file named “nonsmoking”. The data set contains the following variables:
| Column names | Description |
|---|---|
| particID | Participant identification number |
| sbp | Systolic blood pressure |
Note: Data for 10 smokers is stored in the data set “smoking”, whilst data for 10 nonsmokers is stored in the data set “nonsmoking”. You imported the smoking data set in an earlier exercise; the nonsmoking data set comes pre-imported in these exercises, stored in the data frame nonsmoking.
Reminder: Make use of the help page for t.test (i.e. “?t.test”) if you are unsure what the “alternative” argument calls for
Exercise 5.1.1.1
Using data from both the “smoking” and “nonsmoking” data frames, calculate the ratio of variances between the group with the larger variance and the group with the smaller variance.
var(...$...)/var(...$...)
var(smoking$sbp)/var(nonsmoking$sbp)
Exercise 5.1.1.2
Based on the ratio calculated above, use the appropriate formula to calculate the variance estimate.
var.estimate = ((10-1)*...(...$...) + (10-1)*...(...$...))/(10+10-2)
var.estimate
var.estimate = ((10-1)*var(smoking$sbp) + (10-1)*var(nonsmoking$sbp))/(10+10-2)
var.estimate
Exercise 5.1.1.3
Use the independent t-test formula to calculate the t statistic. Note that you need to re-compute var.estimate within this chunk.
ITtest.tstat =(mean(...$...)- mean(...$...))/sqrt(var.estimate*((1/...)+ (1/...)))
ITtest.tstat
var.estimate = (9*var(smoking$sbp) + 9*var(nonsmoking$sbp))/(10 + 10 - 2)
ITtest.tstat = (mean(smoking$sbp) - mean(nonsmoking$sbp))/sqrt(var.estimate*((1/10) + (1/10)))
ITtest.tstat
A more efficient way of running an independent t-test is to use the t.test() command.
Exercise 5.1.1.4
Use the t.test() command to run an independent t-test of the effect of smoking on systolic blood pressure. Assume a directional hypothesis, where systolic blood pressure is higher for the smoking group than for the nonsmoking group.
Reminder: See section 4.1 for an example of how to use the t.test() command to run a single sample t-test.
Hint: In the case of an independent t-test, data for two groups must be provided (not just for one group in the case of a single t-test)
t.test(...$..., ...$..., alternative = "...")
t.test(smoking$sbp, nonsmoking$sbp, alternative = "greater")
6. Repeated measurements
We use a repeated-measures t-test, otherwise known as a dependent samples or paired t-test, when our groups are not independent. For example, if we collect data from the same participants over two or more occasions, this would be considered a repeated measures design.
See formula below, where D stands for difference scores (i.e. difference between each pair of scores)
\[ \begin{aligned} t = \frac{\overline{D}}{\frac{s_{D}}{\sqrt{n}}} \end{aligned} \]
\(\overline{D}\) = average difference score
\(s_{D}\) = standard deviation of the difference scores
\(n\) = sample size
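The formula translates directly into R; a sketch with hypothetical pre- and post-treatment vectors (values made up purely for illustration):

# Hypothetical pre- and post-treatment scores for 5 participants
pre = c(130, 128, 135, 126, 131)
post = c(127, 127, 131, 125, 130)

D = pre - post                            # difference scores
tstat = mean(D)/(sd(D)/sqrt(length(D)))
tstat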
Smoking and systolic blood pressure study 3
10 smokers received nicotine replacement therapy, and had their systolic blood pressure measured both pre- and post-treatment. Data is stored in the data frame named “smoking.treat”. The data set contains the following variables:
| Column names | Description |
|---|---|
| particID | Participant identification number |
| pre.sbp | Systolic blood pressure prior to treatment |
| post.sbp | Systolic blood pressure post treatment |
Exercise 6.1.1.1
Assume the average difference score of systolic blood pressure for smokers is equal to 1.7 and the standard deviation of difference scores is 2. Use the paired t-test formula provided above to calculate the t statistic
.../(.../sqrt(...))
1.7/(2/sqrt(10))
Exercise 6.1.1.2
Use the t.test() command to run a paired t-test on systolic blood pressure from pre- to post-treatment. Assume a non-directional hypothesis.
Hint: The “paired” argument within the t.test() command is set to FALSE by default, and needs to be set to TRUE in the case of a paired t-test.
t.test(...$..., ...$..., alternative = "...", paired = ...)
t.test(smoking.treat$pre.sbp, smoking.treat$post.sbp, alternative = "two.sided", paired = TRUE)
7. Advanced Section
The t.test() command is fairly versatile and can take on different forms.
Previously, we saw it in the following form (below) when we wanted to run an independent t-test to assess whether average systolic blood pressure differed between the smoking and nonsmoking groups:
t.test(smoking$sbp, nonsmoking$sbp, alternative = "greater")
In this case, we had two data frames, each consisting of one column of values that capture the systolic blood pressure of either smoking or nonsmoking participants.
In the case where one has a single data frame that contains the data from both groups (instead of two separate data frames), a formula can be inserted into the t.test() command. For example, t.test(y.variable ~ x.variable, data = dataname), where y.variable is the name of the continuous y variable; x.variable is the name of the group (i.e. categorical) x variable; and ~ (tilde) means “regressed on”, where the y variable always precedes the ~ and the x variable follows it.
Exercise 7.1.1.1
First import the data set entitled “smoking.fulldata.csv”, save it as an object named “fulldata”, and inspect the first 6 rows.
... = read.csv("...")
...(...)
fulldata = read.csv("smoking.fulldata.csv")
head(fulldata, 6)
Exercise 7.1.1.2
Use the t.test() command to run an independent t-test using the “fulldata” object
t.test(...~..., data = ...)
t.test(sbp ~ smoker, data = fulldata)
The result that R prints out tells us that the data were given in formula form (sbp by smoker). It gives us the result of the test, t = -5.37, with df = 17.90 (see the earlier point about the df being different for t-tests in which variance is not assumed to be homogeneous), and with a p-value clearly much smaller than .05. It also states the alternative hypothesis, and reports the 95% confidence interval around the difference of the two means, which spans the interval -13.08 to -5.72; notice that 0 is not included in the interval, which is an equivalent way of testing significance. It also usefully gives us the two sample means.
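Note also that t.test() returns its results as a list (an object of class "htest"), so individual pieces of the output can be extracted by name, for example:

result = t.test(sbp ~ smoker, data = fulldata)
result$statistic   # t statistic
result$parameter   # degrees of freedom
result$p.value     # p-value
result$conf.int    # confidence interval for the difference in means
result$estimate    # the two sample means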
Free-form exercise
Move on to and complete Statistics Tutorial Assignment 6 on Amathuba (Activities | Assignments)
Other resources & references
Developed by: Marilyn Lake & Colin Tredoux, UCT