Assess statistical analysis critically: Do not trust any statistics that you have not falsified yourself
In everyday life we encounter numerous statistics in the media. Often, however, the numbers are presented in a misleading way and misinterpreted, whether intentionally or not. In this article we point out possible sources of error and manipulation and explain how you can avoid them. We present typical statistical pitfalls that can quickly lead inexperienced users astray and produce biased results, and that you should therefore keep in mind for your own statistical analysis. We explain how to produce statistics without bias and what the crucial difference between correlation and causality is. You will also learn why internal validity and external validity are so important.
Which questions are answered in this article?
- What is important in terms of data quality for statistical analysis?
- How can a bias be avoided?
- What types of samples are there?
- How are you manipulated by the way data graphics are presented?
- What is the difference between correlation and causality?
- How can you ensure internal and external validity?
Statistical analysis: correlation and causality - a key difference
In this section we introduce you to common sources of error and threats to validity. The following quote is often ascribed to Winston Churchill: "Do not trust any statistics that you have not falsified yourself!" And how can you best falsify a statistic? By starting with the basis of any statistical analysis: the data. By falsifying data, you can prove any statistical hypothesis you want. Therefore, we now introduce the criteria for good data quality:
- Correctness: the data must match reality.
- Consistency: a data record must not contradict itself or other data records.
- Reproducibility: the creation of the data must be traceable and reproducible.
- Completeness: a data record must contain all necessary attributes.
- Accuracy: the data must be available with the required precision (for example, the number of decimal places).
- Timeliness: all data records must correspond to the current state of the reality they depict.
- Freedom from redundancy: no duplicates may occur within the data records.
- Relevance: the information content of the data records must meet the respective information requirements.
- Uniformity: the information in a data record must be structured uniformly.
- Uniqueness: each data record must be clearly interpretable.
- Comprehensibility: the terminology and structure of the data records must match the understanding of the departments that use them.
Bias means systematic distortion: it occurs when you systematically overestimate or underestimate a factor in a statistical analysis. For example, if you want to estimate the per capita consumption of wheat beer in Germany and you ask people at the Oktoberfest how much beer they drank, you will clearly overestimate the consumption of the average person. The sample is not representative of the population, which is why this is called selection bias. You can produce unbiased statistics by drawing a random, representative sample.
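The Oktoberfest example can be made concrete with a small sketch. The numbers below are purely illustrative (a stylized population, not real consumption data), but they show how restricting the sample to a special subgroup shifts the estimate:

```python
# Illustrative numbers only: a stylized population in which most people
# drink little wheat beer and a small Oktoberfest subgroup drinks a lot.
population = [("general", 2.0)] * 90 + [("oktoberfest", 10.0)] * 10

def mean_consumption(records):
    """Average litres per person over the given records."""
    return sum(litres for _, litres in records) / len(records)

true_mean = mean_consumption(population)            # whole population
festival_only = [r for r in population if r[0] == "oktoberfest"]
biased_mean = mean_consumption(festival_only)       # selection bias at work
# The festival-only estimate is far above the population average.
```

A simple random sample drawn from the whole population would, on average, recover `true_mean`; the festival-only sample cannot.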
A random sample is therefore necessary if you want to compute statistics from your data without significant bias. A distinction is made between the following types of random samples:
- single-stage and multi-stage sampling
- stratified random sampling (stratification)
- cluster sampling (clumping)
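As a sketch of one of these procedures, here is a minimal, proportionally allocated stratified sample. The function name and the urban/rural example data are our own illustration, not a standard library API:

```python
import random

def stratified_sample(population, stratum_of, n, seed=0):
    """Illustrative sketch: proportionally allocated stratified random sample.
    stratum_of maps each record to its stratum label."""
    rng = random.Random(seed)
    strata = {}
    for rec in population:
        strata.setdefault(stratum_of(rec), []).append(rec)
    sample = []
    for members in strata.values():
        # Each stratum contributes in proportion to its share of the population.
        k = round(n * len(members) / len(population))
        sample.extend(rng.sample(members, k))
    return sample

# 60 urban and 40 rural residents; a sample of 10 keeps the 60/40 ratio.
people = [("urban", i) for i in range(60)] + [("rural", i) for i in range(40)]
sample = stratified_sample(people, lambda rec: rec[0], n=10)
```

Stratification guarantees that each subgroup is represented in the correct proportion, which a simple random sample only achieves on average.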
Even if you do everything right in your data collection, draw a representative random sample, and follow the rules of good data quality, you can still visualize correct data in a misleading way. When looking at stock charts, you have probably noticed that the vertical axis is often truncated: to save space, its lower part is cut off and the interesting part of the graphic is enlarged. This makes the share price appear much more dynamic than it actually is. The effect can be amplified further by stretching the vertical axis and compressing the horizontal axis. If you also select the section of the horizontal axis appropriately, you can fake a sharply rising or falling share price, as you prefer. The deception only becomes apparent when you look at the axis labels, which you can of course also remove.
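How strong is this effect? A short back-of-the-envelope calculation (with made-up prices) compares the share of the plot height that the same price movement occupies on a full axis versus a truncated one:

```python
# How much more dramatic does the same price movement look when the
# vertical axis is truncated? Illustrative numbers, not a real share.
prices = [98.0, 99.5, 100.0, 101.0, 102.0]
swing = max(prices) - min(prices)          # actual movement: 4 units

def visual_fraction(axis_min, axis_max):
    """Fraction of the plot height that the price movement occupies."""
    return swing / (axis_max - axis_min)

full_axis = visual_fraction(0.0, 102.0)    # vertical axis starts at zero
truncated = visual_fraction(98.0, 102.0)   # lower part of the axis cut off
exaggeration = truncated / full_axis       # how much steeper the chart looks
```

With these numbers the truncated chart makes the movement fill the entire plot height, roughly 25 times more of the canvas than on a zero-based axis, even though the data are identical.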
Of course, you can apply these tricks to the statistical analysis of other kinds of time series as well. For example, if you want to refute global warming, look for a phase of decreasing global temperatures (such short-term phases exist despite long-term global warming) and extrapolate it. In Figure 1, for example, you can use the period between 1940 and 1975, in which temperatures fell slightly overall, and draw a regression line with a negative slope. The regression line can obviously explain the temperature curve only poorly, because the annual mean and the moving 5-year mean fluctuate strongly. You can at least visually reduce the fluctuations by stretching the horizontal axis and compressing the vertical axis. If you only have the edited section of the graphic in front of you, you can get the impression that global warming is not taking place at all. But if you see the complete graph, you can see at a glance that the climate has (with a few exceptions) warmed faster and faster and that the global average temperature today is 1 °C higher than in the base period.
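The cherry-picking trick can be reproduced with a few lines of code. The series below is synthetic (a stylized upward trend with a mild dip, not real climate data), but fitting a least-squares line to the dip alone yields a negative slope even though the full series clearly rises:

```python
def ols_slope(xs, ys):
    """Least-squares slope of ys regressed on xs."""
    n = len(xs)
    xm = sum(xs) / n
    ym = sum(ys) / n
    num = sum((x - xm) * (y - ym) for x, y in zip(xs, ys))
    den = sum((x - xm) ** 2 for x in xs)
    return num / den

# Synthetic "temperature" series: rising overall, dipping between t = 40 and 75.
def temp(t):
    if t < 40:
        return 0.01 * t
    if t <= 75:
        return 0.4 - 0.005 * (t - 40)
    return 0.225 + 0.02 * (t - 75)

ts = list(range(100))
ys = [temp(t) for t in ts]

slope_full = ols_slope(ts, ys)        # positive: the long-term trend rises
window = [t for t in ts if 40 <= t <= 75]
slope_window = ols_slope(window, [temp(t) for t in window])  # negative: cherry-picked
```

The sign of the fitted slope depends entirely on which window you choose, which is exactly why an extrapolated trend line over a hand-picked period proves nothing.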
The temperature of the base period 1951-1980 is set to 0 °C to illustrate the increase (source: https://de.wikipedia.org/wiki/Globale_Erw%C3%A4rmung#/)
Would you rather invent a causal relationship that does not exist at all? Suppose, for example, that you want to teach students in biology class that storks bring babies. To support your hypothesis, you count the number of storks and babies in different districts for a statistical analysis. You can use the correlation to quantify the degree of linear relationship between the two variables, and indeed you will find a positive correlation: the more storks a district has, the more babies there are. With the correlation, however, you have only demonstrated an indirect connection. In fact, both variables are related to rurality: rural regions offer storks and families with children equally good living conditions, while big cities tend to be populated by singles, and storks find less nature there. The connection between storks and babies is therefore only a spurious causality.
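One way to unmask such a spurious relationship is the partial correlation, which measures the remaining association after controlling for the confounder. The district data below are invented for illustration (both counts are driven by a common "rurality" variable):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    xm, ym = sum(xs) / n, sum(ys) / n
    cov = sum((x - xm) * (y - ym) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - xm) ** 2 for x in xs))
    sy = sqrt(sum((y - ym) ** 2 for y in ys))
    return cov / (sx * sy)

def partial_corr(xs, ys, zs):
    """Correlation of xs and ys after controlling for the confounder zs."""
    rxy, rxz, ryz = pearson(xs, ys), pearson(xs, zs), pearson(ys, zs)
    return (rxy - rxz * ryz) / sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# Made-up district data: both counts are driven by rurality plus small noise.
rurality = [1, 2, 3, 4, 5, 6, 7, 8]
storks = [z + e for z, e in zip(rurality, [0.2, -0.1, 0.1, -0.2, 0.2, -0.1, 0.1, -0.2])]
babies = [z + e for z, e in zip(rurality, [0.1, 0.2, -0.1, -0.2, 0.1, 0.2, -0.1, -0.2])]

raw = pearson(storks, babies)                      # strong "storks bring babies" correlation
adjusted = partial_corr(storks, babies, rurality)  # largely vanishes once rurality is held fixed
```

The raw correlation is close to 1, while the partial correlation controlling for rurality is small, which is exactly the signature of a confounded, merely indirect relationship.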
We have tabulated some typical sources of error for the methods listed above:
| Method | Possible sources of error |
| --- | --- |
| Sample | No random sample; not representative |
| Data graphic | Misleading visualization |
| Data quality | Faked, incorrect or unsuitable data |
| Spurious causality | Confusing correlation with causality |
| Internal validity | Confounding variables not taken into account |
| External validity | Artificial laboratory conditions |
How to ensure external and internal validity
A high degree of internal and external validity must be ensured for serious research results.
A conclusion is internally valid if alternative explanations can be largely excluded, i.e. the conclusion must be coherent in itself. The theory of storks and babies, for example, is not internally valid, as the connection between storks and babies is mediated by rurality. To ensure internal validity, confounding variables should be controlled and the test groups should be randomized.
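Randomizing the test groups can be sketched in a few lines; on average, random assignment balances known and unknown confounders across the groups. The function below is a minimal illustration, not a full trial design:

```python
import random

def randomize_groups(subjects, seed=0):
    """Randomly split subjects into treatment and control groups of equal size.
    Random assignment balances confounders across groups on average."""
    rng = random.Random(seed)
    shuffled = subjects[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

treatment, control = randomize_groups(list(range(20)))
```

In practice one would also stratify the randomization by important known covariates, but even this plain version removes systematic self-selection into groups.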
A conclusion is externally valid if the study results can be generalized to the population. In addition, the results should be transferable to other places, times and situations and should not lose their validity outside the laboratory. For example, the estimate of the per capita consumption of wheat beer in Germany in this article was not externally valid, because the sample taken at the Oktoberfest is not representative of the adult population in Germany. To ensure external validity, you should always ask yourself whether the group of subjects is representative of the population.
In clinical and epidemiological studies, the inclusion and exclusion criteria must be taken into account. Subjects participating in the study must meet all inclusion criteria; if at least one exclusion criterion applies, the subject is excluded from the study and their measured values are not included in the statistical analysis. The results of clinical studies can therefore only be transferred to patient groups that are comparable to the study subjects. If, for example, only young women with early-stage cancer were treated in the clinical study, older men with end-stage cancer cannot necessarily expect a cure from the same therapy, even if it was successful in the study.
A high response rate should be aimed for in surveys: a large proportion of all respondents should fill out the questionnaire and return it. Otherwise, you will obtain distorted results, especially in the case of missing data of the type "missing not at random". For example, if you conduct a survey on depression, severely depressed people are less likely than mildly depressed people to complete the questionnaire, precisely because of their illness. The non-response probability therefore depends on the very variable under study, which leads to underestimating the prevalence and severity of depression in the population.
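The distortion from "missing not at random" can be shown with a deliberately stylized example. Assume, purely for illustration, that severity runs from 1 (mild) to 10 (severe) and that everyone above severity 6 fails to respond:

```python
# Stylized "missing not at random" example: the probability of responding
# depends on the variable under study itself (here: a hard cutoff at 6).
severity = list(range(1, 11))                 # true severities 1..10
responders = [s for s in severity if s <= 6]  # the severely depressed drop out

true_mean = sum(severity) / len(severity)          # mean severity in the population
observed_mean = sum(responders) / len(responders)  # mean among responders only
# observed_mean < true_mean: the survey underestimates severity.
```

Real non-response is probabilistic rather than a hard cutoff, but the direction of the bias is the same: whenever dropout depends on the outcome itself, the observed sample is no longer representative.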
In this article you learned about common pitfalls in statistics. There are many reasons for incorrect or manipulated statistical analyses. Some scientists falsify their research results because positive results attract more recognition and money than negative ones: data are fabricated, statistical hypotheses are tested until the desired positive result appears, or true numbers are dressed up with statistical tricks so that they claim the opposite of what is actually the case. But honest people can also come to wrong conclusions and misinterpretations unintentionally. So that you do not fall into these traps, you now know what to look for when creating statistics: good data quality, the right sampling methodology for an unbiased estimate, avoiding misleading processing or the removal of information from data graphics, questioning alleged causal relationships, and ensuring internal and external validity.
If you want to make sure that your statistical project does not become distorted, simply ask the statisticians at Novustat for advice. Our experts can provide statistical consulting and help you avoid common statistical errors. And never forget when analyzing data: do not trust any statistics that you have not falsified yourself!