What is Statistics?
Statistics is the science of systematically collecting and interpreting data. There are two main areas of statistics: Descriptive Statistics and Inferential Statistics.
Descriptive Statistics deals with the collection, description, and presentation of sample data, while Inferential Statistics is about drawing conclusions and making decisions about populations based on sample data.
In the contemporary world, Statistics plays an important and often crucial role as it provides the foundation for key decisions and strategic choices.
One of the main objectives of statistics is measuring and characterizing variability. Controlling or reducing variability in manufacturing processes, for example, is called “Statistical Process Control”.
To be successful on the GED® Science test, it is important that you understand the basics of how scientific experiments work and what words and expressions are used.
And on the GED Math test, there will be some questions about probability, range, mean, median, and mode, so it is key that you know what all of that means and how to use it.
Just like any other science, Statistics uses a number of basic words and terms specific to this field. Let’s take a closer look at the most frequently used words and expressions in the world of Statistics.
First, let’s take a look at some words and expressions that you really must understand to pass the GED Math and Science Tests. So here is more information about range, mean, median, and mode.
Measures of Central Tendency: Commonly, three measures are used in Statistics: Mean, Median, and Mode. These measures help us find the average, or middle, of a data set.
- The Mean is the sum of all values divided by the number of values
- The Median is the middle number in an ordered data set
- The Mode is the most frequently occurring value
Mean, median, and mode each give us a single value that is representative or typical of all the values within a data set.
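As a quick sketch, all three measures can be computed with Python’s built-in `statistics` module (the data set below is made up for illustration):

```python
import statistics

# A small, made-up data set
data = [2, 3, 3, 5, 7, 10]

mean = statistics.mean(data)      # (2+3+3+5+7+10) / 6 = 5
median = statistics.median(data)  # middle of the ordered data: (3+5) / 2 = 4.0
mode = statistics.mode(data)      # 3 appears most often

print(mean, median, mode)
```

Notice that the three measures can give three different answers for the same data set; each summarizes the “middle” in its own way.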
Measures of Variability (Measures of Spread): Measures of Variability or Spread tell us how varied or similar a set of values is for a specific variable (a data item). The most important measure of spread that you’ll see on the GED exam is range.
Examples of Measures of Spread are range and sample standard deviation. Measures of spread show us how scattered the values in a data set are and how far they differ from the data set’s mean value.
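A minimal sketch of both measures of spread in Python, again with a made-up data set:

```python
import statistics

# A small, made-up data set
data = [4, 8, 6, 5, 3, 7]

data_range = max(data) - min(data)  # range: largest minus smallest, 8 - 3 = 5
sample_sd = statistics.stdev(data)  # sample standard deviation

print(data_range, round(sample_sd, 2))
```

The range only looks at the two extreme values, while the standard deviation takes every value’s distance from the mean into account.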
Population: In Statistics, the population is the set of all measurements (subjects or items of interest) relevant to the researcher who collects the sample.
So in statistics, the word “population” has a different meaning than in ordinary speech. It doesn’t necessarily refer to humans or animals. Statisticians also use the word population when they refer to events, objects, observations, or procedures. In statistics, a population is an aggregate of things, cases, creatures, and so on.
Parameter: In Statistics, parameters are characteristics of populations. Parameters are numerical values that summarize data of entire populations. A parameter is different from a Statistic.
A Parameter is a number that summarizes data for entire populations while a Statistic is a number summarizing data from a sample, which is a subset of the entire population.
Statistic: So a statistic is a numerical value that summarizes the sample data. Statistics are characteristics of samples drawn from populations.
Variable: Variables are characteristics of individual elements of a sample or population. Statisticians use two kinds of variables. Qualitative (or Attribute, or Categorical) variables categorize or describe elements of populations. Quantitative (or Numerical) variables quantify elements of populations.
Data: When the word data is used in the singular, it refers to the value of a variable associated with one single element of a sample or population. This value can be a word, a number, or a symbol.
Data: When data is used in the plural, it refers to the entire set of values that the statistician collected for a variable from each element of the sample.
Experiment: Experiments are planned activities that result in sets of data. Experiments are controlled studies in which researchers attempt to comprehend the relationships between cause and effect. These studies are “controlled” since the researchers control how subjects and elements are assigned to a group and which treatment(s) each group will receive.
Accuracy: Accuracy tells us how close computed or measured values are to their true values. It tells us how close the sample estimates are to the true population values. Accuracy is affected by nonsampling errors, for example, errors from improperly designed or executed sampling plans, or faulty methods of measurement.
Precision: Precision tells us how close to each other repeated measurements of the same quantities are. It tells us about the reliability and the consistency of measurement in statistics.
Sampling Error: Sampling error is the difference between a sample statistic and the corresponding population parameter; it arises because a sample is only a subset of the population, not an error in any individual observation. There will always be some discrepancy between the estimated population parameters and the sample statistics, regardless of the size of the sample. In general, though, we can say that the larger our sample, the more likely the result will represent the entire population.
Standard Error: This is a mathematical expression of sampling error. The smaller the standard error, the more reliable the estimate. Usually, the term “standard error” is used for the mean: the standard error of the mean indicates how much the means computed from many samples would vary.
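As a sketch, the standard error of the mean is the sample standard deviation divided by the square root of the sample size; the sample below is made up:

```python
import math
import statistics

# A made-up sample of measurements
sample = [12, 15, 11, 14, 13, 16, 12, 15]

# Standard error of the mean: sample standard deviation / sqrt(sample size)
sem = statistics.stdev(sample) / math.sqrt(len(sample))

print(round(sem, 3))
```

Because the sample size appears under the square root, quadrupling the sample size only halves the standard error.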
Confidence Interval: Confidence Intervals are ranges of values within which we can be reasonably sure that the true value lies. A Confidence Interval is written as our sample mean plus or minus some amount. The value we see after the ± sign is what we call the “margin of error”.
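A minimal sketch, assuming a 95% confidence level and a sample large enough that the z-value 1.96 is a reasonable multiplier (for small samples a t-value would be more appropriate); the data are made up:

```python
import math
import statistics

# A made-up sample of measurements
sample = [12, 15, 11, 14, 13, 16, 12, 15]

mean = statistics.mean(sample)
sem = statistics.stdev(sample) / math.sqrt(len(sample))

margin = 1.96 * sem  # the margin of error (the value after the ± sign)
low, high = mean - margin, mean + margin

print(f"{mean} ± {round(margin, 2)} -> ({round(low, 2)}, {round(high, 2)})")
```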
Confidence Level: Well, Statistics has to do with drawing conclusions and making predictions in the face of uncertainty. Whenever we take a sample, we can never be fully certain that our sample truly reflects the population it is drawn from. A statistician deals with this uncertainty by quantifying it: the confidence level (for example, 95%) states how often, if we repeated the sampling many times, the resulting confidence intervals would contain the true population value.
Correlation: In Statistics, correlation is commonly used to describe relationships without making statements about cause and effect. It is a statistical measure expressing to what extent two variables are linearly related. Linearly related means that they are changing together at constant rates.
Correlation doesn’t take into account the effect or presence of other variables except for the two that are being explored. It is important to note that correlation tells us nothing about cause and effect. Correlation is all about the way two or more variables are fluctuating with reference to one another.
Positive correlation: We speak of “Positive Correlation” when two variables move in the same direction (in tandem): as one variable increases, the other increases as well, and as one decreases, the other decreases too. For example: we see that when education increases, people’s income also increases.
Negative correlation: We speak of “Negative Correlation” between two variables when, as one variable decreases, the other increases, and vice versa. For example: we see that when education increases, the number of students decreases.
Correlation coefficient: The strength and direction of a correlation are expressed by the correlation coefficient, a value between -1.0 and +1.0. A coefficient of -1.0 represents a perfect negative correlation, a coefficient of 0 indicates no correlation at all, and a coefficient of +1.0 indicates a perfect positive correlation.
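A minimal sketch of Pearson’s correlation coefficient computed directly from its definition (the function name and data are our own, made up for illustration):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Sum of products of each pair's deviations from the means
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfect positive correlation: the second list rises in lockstep with the first
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))   # close to +1.0
# Perfect negative correlation: the second list falls as the first rises
print(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]))   # close to -1.0
```

Real data almost never reaches exactly +1.0 or -1.0; values near 0 mean the two variables show little or no linear relationship.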
Last Updated on August 31, 2021.