Saturday, May 2, 2020

Quantitative Designs

Questions:

1. What is an F-ratio? Define all the technical terms in your answer.
2. What is error variance and how is it calculated?
3. Why would anyone ever want more than two (2) levels of an independent variable?
4. If you were doing a study to see if a treatment causes a significant effect, what would it mean if the within-groups variance was higher than the between-groups variance? If the between-groups variance was higher than the within-groups variance? Explain your answer.
5. What is the purpose of a post-hoc test with analysis of variance?
6. What is probabilistic equivalence? Why is it important?

Answers:

1. F-ratio and the technical terms

The F-ratio is the statistic used to test the hypothesis in an ANOVA. It tests whether two variances are equal, and it is defined as the ratio of the variance between the groups to the variance within the groups. When hypothesis testing is done using the F-distribution, the F-ratio is the test statistic (Cardinal & Aitken, 2013). The F-ratio can also be defined as the ratio of the explained variance to the unexplained variance (Gravetter & Wallnau, 2016). The F-statistic is mainly used when the samples follow a normal distribution. The F-ratio is written as F = MSB / MSW, where MSB is the mean sum of squares between the groups and MSW is the mean sum of squares within the groups.

MSB, the mean sum of squares between the groups, measures how much the group means vary around the grand mean. It is calculated by dividing the sum of squared differences between each group mean and the grand mean by the between-groups degrees of freedom (Hayes & Preacher, 2014), which equal the number of groups minus one. The formula is MSB = SSB / (n - 1), where n is the number of groups and SSB is the sum of the squared differences between groups.

MSW, the mean sum of squares within the groups, measures how much the observations vary around their own group means. It is calculated by dividing the sum of squared differences of each observation from its group mean by the number of groups multiplied by one less than the number of observations per group (Imbens & Kolesar, 2012). The formula is MSW = SSW / [n(a - 1)], where n is the number of groups, a is the number of observations in each group, and SSW is the sum of the squared differences within the groups (Lomax & Hahs-Vaughn, 2013); equivalently, the within-groups degrees of freedom are the total number of observations minus the number of groups.

2. Error variance and its calculation

Error variance is the variance of the residuals, that is, the variance of the errors in the data set. In a simple regression setting it is measured by the sum of squared errors divided by two less than the total number of observations (Rao, 2013), so it describes how far the fitted values deviate from the observed values (Mahboub, 2014). It is calculated as Error variance = Σ(yi - ŷi)² / (n - 2), where the yi are the observed values, the ŷi are the fitted values, and n is the number of observations. In an analysis of variance the corresponding quantity is MSW, the within-groups mean square. The error variance gives an estimate of the size of the errors and helps in understanding their properties (Ferreira et al., 2015).
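As a concrete illustration of the formulas in answers 1 and 2 (not part of the original questions), the sketch below computes SSB, SSW, MSB, MSW, and the F-ratio by hand for three invented groups and cross-checks the result against SciPy's one-way ANOVA. The data, the group sizes, and the use of scipy.stats.f_oneway are all assumptions made purely for the example.

```python
import numpy as np
from scipy import stats

# Hypothetical scores for three treatment groups (invented data).
groups = [
    np.array([4.0, 5.0, 6.0, 5.5, 4.5]),
    np.array([6.0, 7.0, 6.5, 7.5, 8.0]),
    np.array([5.0, 5.5, 6.0, 6.5, 5.0]),
]

n_groups = len(groups)                      # number of groups
n_total = sum(len(g) for g in groups)       # total number of observations
grand_mean = np.concatenate(groups).mean()

# SSB: spread of the group means around the grand mean.
ssb = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
# SSW: spread of the observations around their own group means.
ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)

msb = ssb / (n_groups - 1)        # mean square between, df = groups - 1
msw = ssw / (n_total - n_groups)  # mean square within = error variance, df = N - groups
f_ratio = msb / msw
p_value = stats.f.sf(f_ratio, n_groups - 1, n_total - n_groups)

print(f"MSB = {msb:.3f}, MSW = {msw:.3f}, F = {f_ratio:.3f}, p = {p_value:.4f}")
print(stats.f_oneway(*groups))    # should report the same F and p
```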
3. Reasons for more than two levels of an independent variable

To compare conditions, the independent variable is first divided into groups, or levels, according to some common criterion (Draper & Smith, 2014), and the comparisons and analyses are carried out between and within these groups. It is often necessary to have more than two levels of an independent variable. One group serves as the control group: its observations satisfy all of the required criteria. The remaining levels each lack one or more of the desired criteria (Anderberg, 2014), so comparing each level with the control level reveals the effect of those criteria. Having more than two levels therefore shows how the different criteria affect the observed values and supports better conclusions about the factors and their effects. The degree of effectiveness of each factor can also be estimated, as can the effect of a factor when it is correlated with other factors. Thus, it is important to have more than two levels of an independent variable (Orlóci, 2013).

4. Interpretation of within-groups variance being greater than between-groups variance, and vice versa

Within-groups variance and between-groups variance are the two most important calculations in an analysis of variance, and together they determine whether the null hypothesis is accepted or rejected. When the within-groups variance is larger than the between-groups variance, the F-ratio is small (Bruijn et al., 2014), and the null hypothesis is not rejected. In a study testing whether a treatment has a significant effect, the null hypothesis is that the treatment has no significant effect and the alternative hypothesis is that it does. So if the within-groups variance is greater than the between-groups variance, the null hypothesis is retained; that is, there is no evidence of a significant treatment effect. If the between-groups variance is larger than the within-groups variance, the F-ratio increases, which can lead to rejection of the null hypothesis. Under the hypotheses above, this means that when the between-groups variance is sufficiently larger than the within-groups variance, the alternative hypothesis is supported and the treatment is judged to have a significant effect.
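Because the whole argument in question 4 turns on how the between-groups mean square compares with the within-groups mean square, a small simulation may help make it tangible. The sketch below (assuming NumPy and SciPy are available) generates hypothetical data for two scenarios: identical group means with large noise, where within-groups variance dominates, and well-separated group means with small noise, where between-groups variance dominates. The group means, noise levels, and sample sizes are invented for illustration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)  # arbitrary seed for reproducibility

def simulate(group_means, noise_sd, n_per_group=30):
    """Draw one sample per group and return the one-way ANOVA F and p."""
    samples = [rng.normal(mu, noise_sd, n_per_group) for mu in group_means]
    return stats.f_oneway(*samples)

# Scenario A: no treatment effect. Group means are identical and the data are noisy,
# so within-groups variance dominates, F tends to be small, and p tends to be large.
print("No effect:    ", simulate(group_means=[10, 10, 10], noise_sd=5.0))

# Scenario B: strong treatment effect. Group means are well separated and noise is modest,
# so between-groups variance dominates, F is large, and p is small.
print("Strong effect:", simulate(group_means=[10, 14, 18], noise_sd=2.0))
```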
5. Purpose of a post hoc test with analysis of variance

A post hoc test is done only after a significant ANOVA. If the F-value from the analysis of variance is large, the null hypothesis is rejected, which signifies that there is a difference among the group means: at least one group mean differs from the others. It then has to be examined which particular pairs of group means differ and which pairs do not, and post hoc tests identify those pairs (Werdan et al., 2015). Post hoc tests reveal patterns and relationships between pairs of groups in the sampled population that would otherwise remain undetected, and they are an important part of multiple-comparison testing. Without post hoc analysis, and the correction for multiple comparisons that it applies, there would be a high chance of accepting a false conclusion about which groups differ (Feng & Zhang, 2014). A brief sketch of such pairwise comparisons appears after question 6 below.

6. Probabilistic equivalence and its importance

No two individuals or groups are ever exactly equal; the term "probabilistic" indicates that the equivalence is expressed in terms of probabilities. Precisely, probabilistic equivalence means that the odds of finding a difference between the two groups are known: the groups are not assumed to be identical, but any difference between them is expected to be due only to chance. Probabilistic equivalence is achieved by assigning the participants to the two groups at random (Grzymala-Busse et al., 2014). Because the assignment is random, the probability that the two groups differ by any given amount before treatment can be calculated. The importance of probabilistic equivalence is that a later difference between the groups can then be attributed to the treatment rather than to pre-existing differences, because the probability of a purely chance difference is known in advance.
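To illustrate the "known odds" idea in question 6, the minimal simulation below (assuming NumPy and SciPy, with invented scores and the conventional alpha of 0.05) repeatedly splits the same pool of hypothetical participants into two groups at random, with no treatment applied, and records how often a t-test declares the groups different. Under random assignment that rate should sit near the nominal 5 percent, which is what probabilistic equivalence promises.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)  # arbitrary seed for reproducibility

# Hypothetical pool of 60 participants with some pre-existing variability.
scores = rng.normal(loc=100, scale=15, size=60)

alpha = 0.05
n_sims = 10_000
false_alarms = 0

for _ in range(n_sims):
    shuffled = rng.permutation(scores)        # random assignment into two groups
    group_a, group_b = shuffled[:30], shuffled[30:]
    result = stats.ttest_ind(group_a, group_b)
    if result.pvalue < alpha:                 # "significant" difference by chance alone
        false_alarms += 1

# With random assignment and no treatment, the groups are probabilistically
# equivalent: they look "significantly" different only about alpha of the time.
print(f"Chance significant differences: {false_alarms / n_sims:.3f}")
```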
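Returning to the post hoc testing described in question 5: after a significant omnibus F, the usual next step is a set of pairwise comparisons with a multiple-comparison correction (Tukey's HSD is a common choice). The sketch below uses simpler Bonferroni-corrected t-tests on invented data purely to illustrate the idea; it is not meant as the definitive post hoc procedure.

```python
from itertools import combinations

import numpy as np
from scipy import stats

# Hypothetical data: three groups whose omnibus ANOVA comes out significant.
samples = {
    "control":   np.array([10.1, 9.8, 10.4, 9.9, 10.2, 10.0]),
    "dose_low":  np.array([10.6, 10.9, 10.4, 10.8, 11.0, 10.7]),
    "dose_high": np.array([12.1, 11.8, 12.4, 12.0, 12.3, 11.9]),
}

print(stats.f_oneway(*samples.values()))  # omnibus test: is any mean different?

# Post hoc step: compare every pair of groups, dividing alpha by the number
# of comparisons (Bonferroni) so the family-wise error rate stays near 0.05.
pairs = list(combinations(samples, 2))
alpha = 0.05 / len(pairs)

for name_a, name_b in pairs:
    result = stats.ttest_ind(samples[name_a], samples[name_b])
    verdict = "differs" if result.pvalue < alpha else "no clear difference"
    print(f"{name_a} vs {name_b}: p = {result.pvalue:.4f} -> {verdict}")
```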

References

Anderberg, M. R. (2014). Cluster analysis for applications: Probability and mathematical statistics: A series of monographs and textbooks (Vol. 19). Academic Press.

Bruijn, M., van Baaren, G. J., Vis, J., van Straalen, J., Wilms, F., Oudijk, M., ... & Spaanderman, M. (2014). 740: Comparison of the Actim Partus test and fetal fibronectin test in combination with cervical length in the prediction of spontaneous preterm delivery in symptomatic women: A post-hoc analysis. American Journal of Obstetrics & Gynecology, 210(1), S363-S364.

Cardinal, R. N., & Aitken, M. R. (2013). ANOVA for the behavioral sciences researcher. Psychology Press.

Draper, N. R., & Smith, H. (2014). Applied regression analysis. John Wiley & Sons.

Feng, Y., & Zhang, L. (2014). When equivalence and bisimulation join forces in probabilistic automata. In FM 2014: Formal Methods (pp. 247-262). Springer International Publishing.

Ferreira, F. A., Jalali, M. S., & Ferreira, J. J. (2015). Integrating qualitative comparative analysis (QCA) and fuzzy cognitive maps (FCM) to enhance the selection of independent variables. Journal of Business Research.

Gravetter, F., & Wallnau, L. (2016). Statistics for the behavioral sciences. Cengage Learning.

Grzymala-Busse, J. W., Clark, P. G., & Kuehnhausen, M. (2014). Generalized probabilistic approximations of incomplete data. International Journal of Approximate Reasoning, 55(1), 180-196.

Hayes, A. F., & Preacher, K. J. (2014). Statistical mediation analysis with a multicategorical independent variable. British Journal of Mathematical and Statistical Psychology, 67(3), 451-470.

Imbens, G. W., & Kolesar, M. (2012). Robust standard errors in small samples: Some practical advice. Review of Economics and Statistics.

Lomax, R. G., & Hahs-Vaughn, D. L. (2013). Statistical concepts: A second course. Routledge.

Mahboub, V. (2014). Variance component estimation in errors-in-variables models and a rigorous total least-squares approach. Studia Geophysica et Geodaetica, 58(1), 17-40.

Orlóci, L. (2013). Multivariate analysis in vegetation research. Springer.

Rao, J. (2013). Estimation of nonsampling variance components in sample surveys. Survey Sampling and Measurement, 35.

Werdan, K., Ebelt, H., Nuding, S., Höpfner, F., Stöckl, G., Müller-Werdan, U., & ADDITIONS Study Investigators. (2015). Ivabradine in combination with metoprolol improves symptoms and quality of life in patients with stable angina pectoris: A post hoc analysis from the ADDITIONS trial. Cardiology, 133(2), 83-90.