No. It's not even one per mille (one-thousandth) of the American population.
Sorry, statistics pet peeve here: that's not how sample size works. The size of your sample as a proportion of the population you drew it from (in this case, the population of America) is meaningless*. The absolute size of the sample is all that matters. 1,026 would be just as useful a number if the population of America were a million or a billion (or infinite).
If you only accept studies with a sample size of one-thousandth of the population of America you're going to be rejecting the vast majority of them.
*well, meaningless given the approximations implicit in modelling things with normal distributions and so on. If your population were only 1,027, a sample of 1,026 would tell you something more, but then you'd be using an entirely different set of mathematical methods.
To me, that would only make sense if the small group's demographics were relevant to the study. Like, if a drug company asked 1000 users of its medicine whether they experienced side effects and over half said yes, then in that situation the small sample size wouldn't matter. But asking 1000 random yobbos in a nation of several hundred million whether political correctness is an issue, and then claiming that the entire nation has spoken on the issue... that's a hard buy.
It's still true, though (not that 'the entire nation has spoken', exactly, but that you can get reliable information about the entire nation, if your sampling was truly random). So I'm gonna talk about statistics for a bit here to try to explain and if you're still not convinced we can take this to a different thread.
Imagine you have a coin. To make things interesting, this is not a fair coin that comes up heads 50% of the time; it's biased, and you have no idea by how much. Each flip comes up heads with some probability p that you don't know (and tails with probability 1-p). You want to find p experimentally, so you start flipping the coin, recording the results, and approximating p by (number of heads)/(number of coin flips).
But that's an approximation of p. What's the true value? Well, by the frequentist definition of probability, it's the proportion of heads in your sample in the limit of infinitely many flips. You can't flip a coin infinitely many times, though. If you had flipped the coin 1000 times and obtained 729 heads, how confident would you be that p is close to 73%? (Real question, not rhetorical. Would appreciate it if you thought about it before reading the next part)
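For the curious, here's one way to put an actual number on that confidence: the standard normal-approximation margin of error for a proportion. This is just a sketch (the variable names are mine), but it's the same formula pollsters use when they quote "plus or minus 3 points":

```python
import math

# Normal-approximation ~95% confidence interval for a proportion.
flips = 1000
heads = 729

p_hat = heads / flips                        # point estimate of p
se = math.sqrt(p_hat * (1 - p_hat) / flips)  # standard error of the estimate
margin = 1.96 * se                           # ~95% confidence margin

print(f"estimate: {p_hat:.3f} +/- {margin:.3f}")
# The margin comes out to about +/-0.028, so p is very likely
# somewhere between roughly 70% and 76%.
```

Notice that the population size (infinity, for a coin) appears nowhere in the formula; only the number of flips does.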
Ok, so now instead of a coin what you have is a yes/no question you want to ask people. What you want to know is the probability that a randomly selected person would say 'yes' to your question, or in other words the proportion of the population that believes the answer is 'yes'. So what you do is try to pick people, as randomly as possible, ask them the question and record the results. And again you approximate the value you want to know by the proportion of your sample that said 'yes'.
Do you see how this case is analogous to the coin case above? The procedure to obtain your approximate value is the same. The only difference is, if you're asking people questions then you have a maximum number of samples you can take (the size of the population), but you do not have a maximum number of coin flips you can make (in an imaginary scenario where we assume the coin doesn't degrade over time, etc). Effectively, the analogous number for population size in the coin flip scenario is infinity (which is not a number, I know, perfect rigour is for mathematicians and neither of us is one).
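To make the analogy concrete, here's a toy simulation (the numbers are mine, invented for the demo, not from any real poll): it "polls" 1000 people from a finite population of one million, and flips a biased coin 1000 times, and shows that both procedures estimate the true rate about equally well.

```python
import random

random.seed(0)
TRUE_P = 0.73   # hidden "yes" rate / coin bias (assumed for the demo)
SAMPLE = 1000

# Scenario A: poll 1000 people, drawn without replacement,
# from a finite population of one million.
yes_count = int(TRUE_P * 1_000_000)
population = [1] * yes_count + [0] * (1_000_000 - yes_count)
poll = random.sample(population, SAMPLE)
poll_estimate = sum(poll) / SAMPLE

# Scenario B: flip the biased coin 1000 times (the "population" is infinite).
flips = [1 if random.random() < TRUE_P else 0 for _ in range(SAMPLE)]
coin_estimate = sum(flips) / SAMPLE

print(poll_estimate, coin_estimate)
# Both estimates land close to 0.73, even though the poll
# covered only 0.1% of its population.
```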
If you commit to saying you cannot trust a sample because it's too small as a fraction of the population, this implies that, no matter how many times you flipped a coin, you would never have any confidence in whether it was a fair coin or a p=73% coin or anything else. Because in principle you could always flip it more times, any sample size is infinitesimal as a fraction of the "population". You can't get a thousandth part, or a millionth part, or a billionth part. In effect, you are saying that no matter how many times you flip a coin, and how carefully you record the results, you will have no confidence in your estimate of the probability.
If you sit down and figure out the model and do the math, it turns out that the error introduced by sampling only part of the population is proportional to one over the square root of the sample size, and population size never enters the equation at all. This is not obvious and you'd probably need a stats class to get an intuitive understanding of it. But I hope the coin example can at least get across why it can't depend on (sample size)/(population size).
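You don't have to take the math on faith, though; you can check the square-root law empirically. This sketch (again with a made-up bias of 0.73) repeats the coin experiment many times at several sample sizes and compares the measured error against the theoretical sqrt(p(1-p)/n):

```python
import random
import statistics

random.seed(1)
TRUE_P = 0.73
TRIALS = 1000  # number of repeated experiments per sample size

def rms_error(n):
    """Root-mean-square error of the sample proportion, over many samples of size n."""
    sq_errs = []
    for _ in range(TRIALS):
        est = sum(random.random() < TRUE_P for _ in range(n)) / n
        sq_errs.append((est - TRUE_P) ** 2)
    return statistics.mean(sq_errs) ** 0.5

for n in (100, 400, 1600):
    theory = (TRUE_P * (1 - TRUE_P) / n) ** 0.5
    # Quadrupling n halves the error, matching the 1/sqrt(n) law;
    # no population size appears anywhere.
    print(n, round(rms_error(n), 4), round(theory, 4))
```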
If not, I'm happy to start a new thread on this and see if I can convince you.