I find it annoying when a lot of new PUAs criticize a method when they don't get the expected result the first time they use it.
First, just because you read about it does not mean you know it well enough to implement it. It takes several readings to absorb all the information in a method.
Second, knowing is not the same as doing. The more complex a task, the more practice it needs. Performance is a matter of practice, and it takes a lot of it to get good at a complex task.
Third, how would you even know whether it works? That's the gambler's fallacy: doing it only a few times does not give you a large enough sample to make a judgment call.
So when a newbie uses the new method and it fails, he blames the method. Amateurs. To evaluate a method fairly you need at least 100 approaches: 50 for learning/practice and 50 more to get a sample size large enough for a real performance evaluation.
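To make the sample-size point concrete, here's a quick back-of-the-envelope calculation (my own illustration, not from anyone's method; the 20% success rate and the trial counts are made-up assumption numbers). It uses the standard normal-approximation confidence interval for a proportion: with only a handful of approaches, the uncertainty around your observed success rate is so wide that you can't tell a good method from a bad one.

```python
import math

def ci_halfwidth(p: float, n: int) -> float:
    """Half-width of a 95% confidence interval for a success rate p
    observed over n trials (normal approximation: 1.96 * sqrt(p(1-p)/n))."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

p = 0.20  # hypothetical observed success rate (assumption for illustration)
for n in (5, 50, 100):
    hw = ci_halfwidth(p, n)
    print(f"n={n:3d} approaches: {p:.0%} +/- {hw:.0%}")
```

With 5 approaches the interval is roughly plus-or-minus 35 points, which tells you almost nothing; around 50 approaches it tightens to roughly plus-or-minus 11 points, which is at least enough to separate a decent method from a terrible one.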
What do you think about setting criteria to evaluate new information coming out of the community? I think it would bring more credibility and acceptance from the general population.