How scientists determine whether a result is statistically significant or warrants more research.
A p-value is a number researchers calculate to decide whether the results of their experiment are statistically significant. Here are the four steps to arrive at a p-value and what the number ultimately represents.
Start with a null hypothesis. That means you're starting with the idea that there is NO real effect or difference, and that anything you observe could be explained by chance alone. In other words, you begin from the premise that your theory or hypothesis will be wrong.
Gather and interpret data. Run the experiment and collect the results.
Calculate the p-value. P stands for probability, and you're trying to figure out if the results of your experiment were statistically significant. A p-value is a number between 0 and 1. If the p-value is LOWER than 0.05, that usually means your results would be unlikely to occur by chance alone if the null hypothesis were true, so you reject the null hypothesis. The LOWER the p-value, the stronger the evidence.
For example, a p-value of 0.04 means that IF the null hypothesis were true, there would be only a 4% chance of seeing results at least as extreme as yours from random luck alone. A p-value of 0.03 drops that chance to 3%. Be careful: it does NOT mean there's a 96% or 97% chance you've found something real. That's one of the most common misreadings of the p-value.
Reject or fail to reject the null hypothesis. If you reject it, you have evidence (not proof) of something interesting or different. If you fail to reject the null hypothesis, your study or experiment results show no detectable difference between the two or more groups or situations you identified.
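If you're curious what those four steps look like in practice, here's a minimal sketch in Python using a permutation test, one of the simplest ways to compute a p-value. The two data sets are made-up numbers purely for illustration, not from any real study.

```python
import random
import statistics

# Hypothetical data (invented for this example): some measured outcome
# for a control group vs. a group trying a new training program.
control = [2.1, 1.8, 2.4, 2.0, 1.9, 2.2, 2.3, 1.7]
treatment = [2.6, 2.9, 2.4, 3.1, 2.7, 2.5, 3.0, 2.8]

# Step 2: gather the data and compute the observed difference in means.
observed_diff = statistics.mean(treatment) - statistics.mean(control)

# Step 1 & 3: the null hypothesis says group labels don't matter.
# So shuffle the labels thousands of times and count how often chance
# alone produces a difference at least as large as the one observed.
random.seed(42)  # fixed seed so the sketch is reproducible
pooled = control + treatment
n_control = len(control)
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[n_control:]) - statistics.mean(pooled[:n_control])
    if diff >= observed_diff:
        extreme += 1

# The p-value: fraction of shuffles as extreme as the real result.
p_value = extreme / trials
print(f"Observed difference: {observed_diff:.2f}")
print(f"p-value: {p_value:.4f}")

# Step 4: reject or fail to reject the null hypothesis.
if p_value < 0.05:
    print("Reject the null hypothesis at the 0.05 level.")
else:
    print("Fail to reject the null hypothesis.")
```

Notice the p-value here is just a counting exercise: out of 10,000 chance-only shuffles, how many beat the real result? That's the probability the number represents.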
Here's the problem. Scientists don't get money for research unless they're discovering something. That means there's a great temptation to manipulate the analysis (called p-hacking, such as testing many variables and only reporting the ones that slip under 0.05) so that it APPEARS as if something significant has happened.
The good news is that's how science ultimately self-corrects. Because when you publish an experiment, all your data and methods are laid bare so others can critique and attempt to reproduce what you discovered. If other scientists can't confirm your results, eventually, they will be disregarded and replaced with more accurate information.
I don't tend to be on the bleeding edge when I write an article about a product or method. I like to wait until data comes in from multiple sources, verifying the approach, program or product.
Every year, hundreds of products are released that promise to help people lose fat, build muscle, and live healthier lives. If they have a big enough ad budget and lots of places selling them, they're likely to generate thousands or millions of dollars in profit. But the true test of something's effectiveness is whether multiple experiments confirm it works AND how long it remains in the marketplace.
Remember the Thighmaster? Shake weights? Sauna suits or Gravity Boots?
The p-value is an important STARTING point for researchers, and you should DEFINITELY look for it when evaluating a scientific study. But just because something LOOKS statistically significant doesn't mean it ultimately IS significant. Multiple independent studies are essential to prove something safe and effective.
You can learn more about p-values in the following videos.
Statistical Significance, the Null Hypothesis and P-Values Defined & Explained in [Two Minutes]
What is a P-Value and Why Does it Matter?
How to Calculate Probability Value (P-Value) in Excel | P-Value in Statistical Hypothesis Tests
Call for a FREE Consultation (305) 296-3434
CAUTION: Check with your doctor before beginning any diet or exercise program.