cross-posted to: science@lemmy.world
To quote tburkhol (https://lemmy.world/comment/21720958):
So, these scientists were asked to evaluate a political question, “Is there a link between immigration and welfare support?”, using a large survey dataset. Not like they were asked whether temperature data supported anthropogenic climate change. The 158 scientists were in 71 teams and ran, collectively, some 1200 statistical tests.
An overwhelming majority of all analyses found no link between immigration policies and support for welfare programs, regardless of investigator ideology. A handful of outlier models, where an effect could be found, showed effects that correlated with the team’s politics, but it’s hard for me to look at the mountain of “no effect” conclusions and agree with the statement “politics predicted the results.” “Politics predicted the outliers,” OK.
Actual study: https://www.science.org/doi/10.1126/sciadv.adz7173
Very little can be inferred indeed. They use linear regression to quantify correlations, that is, they assume the relations are linear – and don’t even seem to justify why or how such an assumption should be valid (two small toy sketches after the quoted points below illustrate the problem). Good luck with that. The final blow is the use of p-values and “statistical significance”. To quote from the official statement by the American Statistical Association:
- P-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone.
- Scientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold.
- A p-value, or statistical significance, does not measure the size of an effect or the importance of a result.
- By itself, a p-value does not provide a good measure of evidence regarding a model or hypothesis.
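To make the linearity complaint concrete, here is a minimal toy sketch (made-up data, nothing to do with the study’s actual analyses): a relationship that is perfectly real but nonlinear, which an unjustified linear fit reports as “no effect”.

```python
# Hypothetical illustration, not the study's data: a deterministic but
# nonlinear relationship that a linear regression reports as "no effect".
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)              # predictor, symmetric around zero
y = x**2 + rng.normal(0, 0.5, x.size)    # purely quadratic relation plus noise

fit = linregress(x, y)
print(f"slope={fit.slope:.3f}  p={fit.pvalue:.3f}  r^2={fit.rvalue**2:.3f}")
# Typical result: slope ~ 0, large p-value, r^2 ~ 0 -- the linear model sees
# "no link" even though y is almost completely determined by x.
```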
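The third ASA point is just as easy to see in a toy example: with a large enough sample, a negligible difference still produces a tiny p-value, so “statistically significant” says nothing about whether an effect matters. Again, made-up data, not anything from the study.

```python
# Hypothetical illustration: a practically irrelevant difference becomes
# "statistically significant" purely because the sample is huge.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
a = rng.normal(0.00, 1.0, 1_000_000)   # group A
b = rng.normal(0.01, 1.0, 1_000_000)   # group B: shifted by 0.01 sd

stat, p = ttest_ind(a, b)
print(f"mean difference={b.mean() - a.mean():.4f}  p={p:.2e}")
# The p-value comes out astronomically small, yet the effect (0.01 sd) is
# negligible: "significant" and "important" are different claims.
```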



