Thinking about this more, the first test seems very strange. None of my professors ever mentioned where they got their doctorate; the only one whose school I know, I know only because she was still using her alumni email address. Nor have I ever seen it mentioned in anybody's byline on a paper; it's the school or research institute they're associated with now that people care about.
The second is strange too. What's the test for a physicist: quizzing them on the things everybody agrees on? Anybody who graduates with a degree in physics should know those, within the limits of their specialization, but that doesn't tell you whether they're competent to do more than schoolwork. The peer review process? That doesn't seem to be helping economists.
The third is the only test that the people everybody agrees are scientists would pass; damn near any field would get you drummed out if you got caught faking data, even economics.
The first test is silly. Although (in theory) someone with a PhD from a reputable university should be able to do independent research in physics, and likely already has (their thesis), and there is usually a certain quality of research coming out of an institution, this test has no place in determining what is 'real science'.
The test for "What is physics?" is "Does it empirically predict real results or explain a physical phenomenon?", and it's similar for most true sciences, even when you're talking about probabilistic things: it's a matter of it working and having real, reproducible results in legitimate experiments*. Good physicists are those whose teachings and conclusions meet this test; bad physicists are those whose teachings and conclusions do not.
By that test, psychology... might barely make the cut. Economics is right out. As it is, it seems like most social 'sciences' are just poorly done philosophy in disguise anyway.
*With a control, a single varied variable, unbiased sampling, and so on. Basically, for probabilistic things, the result should still be reasonably statistically valid.
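To make "reasonably statistically valid" concrete, here is a minimal sketch (my own illustration, not anything from the discussion above) of one way to check whether a probabilistic result is real: a permutation test comparing a treatment group against a control, which asks how often random relabeling of the data would produce a difference as large as the one observed. The function name and the synthetic data are assumptions for the example.

```python
import random

def permutation_test(control, treatment, n_perm=10_000, seed=0):
    """Two-sided p-value for a difference in group means via permutation.

    Repeatedly shuffles the pooled data into fake 'control'/'treatment'
    groups and counts how often the shuffled difference is at least as
    extreme as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(sum(treatment) / len(treatment) - sum(control) / len(control))
    pooled = control + treatment  # new list; originals are untouched
    n = len(control)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[n:]) / len(treatment) - sum(pooled[:n]) / n)
        if diff >= observed:
            extreme += 1
    return extreme / n_perm

# Hypothetical experiment: control centered at 0, treatment shifted up.
data_rng = random.Random(42)
control = [data_rng.gauss(0.0, 1.0) for _ in range(50)]
treatment = [data_rng.gauss(0.8, 1.0) for _ in range(50)]
p = permutation_test(control, treatment)
```

A tiny p-value here means the observed effect is very unlikely to be a sampling fluke, which is the minimum bar the footnote is gesturing at; it still says nothing about controls or sampling bias, which have to be handled by the experimental design itself.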