Hypothesis testing and statistical power

All power and sample size calculations depend on the nature of the null hypothesis and on the assumptions associated with the statistical test of the null hypothesis. This discussion illustrates the core concepts by exploring the t-test on a single sample of independent observations.

Related links:
- Download .txt files with SAS programs that create the graphs that appear on this page: Factors that determine the location of the rejection regions
- Russ Lenth's power and sample size page, including Java applets that explore influences of sample and effect sizes on power
- Power and sample size programs, University of California - San Francisco, including links to free programs
- Online calculator for Bonferroni adjustments (of alpha or z) for multiple comparisons

A research hypothesis drives and motivates statistical testing. However, test statistics are designed to evaluate not the research hypothesis, but a specific null hypothesis. Testing a null hypothesis involves:
- specifying a null hypothesis (H0) that relates to a population parameter. For example, when we can measure the outcome variable at the interval or ratio scale, we can formulate a null hypothesis in terms of the population mean, which is designated by the Greek symbol μ.
- identifying a test statistic that relates to the hypothesized and unknown population parameter. This requires knowing whether the outcome of interest can be summarized as, for instance, a mean, a count, or a proportion. In our example, which states a null hypothesis in terms of the population mean, a relevant test statistic is the t.
- calculating the test statistic (in this case, a t statistic) using sample data. We calculate test statistics from information that we obtain from the sample. For example, we can calculate a t statistic using the sample mean and sample variance.

Although we collect just one sample, and therefore calculate a single sample mean, we understand that the sample that we have drawn is one of many that we might have drawn. In that respect, the sample mean is a continuous variable that could take on many values. Depending on the sample that we draw by chance, the mean's value could be anywhere on the illustrated number line. Somewhere on the number line is the true but unknown population mean μ.

To illustrate the relationship between the sample mean and the hypothetical but unknown population mean μ, we add a second dimension to the "number line." This graph's vertical axis is a "second dimension" that illustrates the results we might obtain were we to draw many samples from a population. The vertical axis summarizes the frequencies with which we might obtain particular values for the sample mean.

Common sense suggests that, if we collect a sample not once but many times, the samples' means would typically be close to, and often identical to, the population mean that forms the basis of the null hypothesis. However, we'll also collect samples whose means are smaller (like that of sample X1) or larger (like that of sample X2) than the true parameter. We'll occasionally collect a sample whose mean is quite different from the true value.

We can be very specific about the relationship between the sample mean and the unknown population mean μ if we can justify certain assumptions. In particular, if we can assume that we are measuring an outcome variable whose values are normally distributed, then statistical theory lets us state that the many samples that we might draw have means that are also normally distributed.

To generate the graph below, we drew 10,000 samples, each with 10 observations, from a normal population of values with a known mean (μ = 6) and variance (σ² = 2.5). The graph's vertical axis shows how often we randomly chose samples whose means equalled the values listed on the horizontal axis. The graph illustrates how, when this particular null hypothesis (H0: μ = 6) is true, we will very often draw samples whose means are close to 6. In fact, statistical theory assures us that all these sample means will have a collective mean that exactly equals the population mean μ. We expect a sample mean to equal, on average, the unknown population mean: E(X̄) = μ, where E refers to the statistic's "expected value." (This is true regardless of the population's distribution; it doesn't have to be normally distributed.) Statistical theory predicts how much sample means will vary from their expected value. The graph illustrates that we might, by chance, collect samples whose means differ greatly from the true population mean of 6 (even though the probabilities of doing so are low).
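To make the t calculation concrete, here is a minimal Python sketch of the one-sample t statistic, t = (x̄ − μ0) / (s / √n); the helper name and the example data are ours, not from this page:

```python
import math

def one_sample_t(sample, mu0):
    """Return the t statistic for H0: mu = mu0, from a single sample."""
    n = len(sample)
    mean = sum(sample) / n
    # Sample variance with the n - 1 (Bessel) correction.
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    se = math.sqrt(var / n)  # estimated standard error of the sample mean
    return (mean - mu0) / se

# Hypothetical sample of 10 observations, testing H0: mu = 6.
data = [5.1, 6.4, 7.0, 5.8, 6.2, 4.9, 6.6, 5.5, 6.9, 6.1]
t = one_sample_t(data, 6)
```

For this made-up sample the statistic works out to roughly 0.22, i.e. a sample mean well within the range we would expect under the null hypothesis.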
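The sampling experiment on this page (10,000 samples of 10 observations each, drawn from a normal population with μ = 6 and σ² = 2.5) can also be reproduced in a few lines. The page's own graphs were produced with SAS; the following standard-library Python sketch is an illustrative stand-in:

```python
import random
import statistics

random.seed(1)  # make the draws reproducible

MU = 6.0
SIGMA = 2.5 ** 0.5  # standard deviation for a variance of 2.5
N_SAMPLES, N_OBS = 10_000, 10

# Draw many samples and record each sample's mean.
means = [
    statistics.fmean(random.gauss(MU, SIGMA) for _ in range(N_OBS))
    for _ in range(N_SAMPLES)
]

# The collective mean of the sample means should sit very near mu = 6,
# and their spread near the theoretical standard error sigma/sqrt(n) = 0.5.
print(round(statistics.fmean(means), 2))
print(round(statistics.stdev(means), 2))
```

A histogram of `means` reproduces the shape of the graph discussed above: the sample means cluster tightly around 6, and means far from 6 occur only rarely.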