Contents
Exact Tests on Counts
Exact tests on counts
Sign test
Fisher's exact test
Fisher's exact test (expanded)
r by c contingency table analysis
Liddell exact test for matched pairs
Confidence interval for 2 by 2 odds
Poisson rate confidence interval
Crosstabs
· Sign test
· Fisher's exact test for 2 by 2 tables
· Generalised Fisher's exact and chi-square tests for r by c tables
· Matched pairs (McNemar, Liddell)
· Confidence limits for 2 by 2 odds
· Confidence limits for Poisson rates and counts
Menu location: Analysis_Exact.
This section provides permutational probabilities and exact confidence limits for various counts and tables. Other sections, such as the nonparametric methods, also employ exact methods.
Please note that exact methods are more appropriate than large sample methods for dealing with small numbers, but they are not a substitute for collecting larger samples. You should always aim to collect the largest sample that it is practical to obtain.
Exact refers to the precision of the statistical method given perfect data, not to the quality of the results, which depends largely upon how good your experiment was.
Menu location: Analysis_Exact_Sign.
In a sample of n observations, if r out of n show a change in one particular direction then the sign test can be used to assess the significance of this change. The value of interest is the proportion r/n.
The binomial distribution is used to evaluate the probability of observing a proportion as extreme as r/n when the expected value is 0.5 (i.e. 50:50, the chance of heads when tossing a coin). If you want to use an expected value other than 0.5 then please see the single proportion test (binomial test).
Null hypothesis: the observed proportion is not different from 0.5
StatsDirect gives you one and two sided cumulative probabilities from a binomial distribution (based on an expected proportion of 0.5) for the null hypothesis. A normal approximation is used with large numbers. You are also given an exact confidence interval for the proportion r/n (Conover, 1999; Altman, 1991; Vollset, 1993).
Example
From Altman (1991 p. 186).
Out of a group of 11 women investigated, 9 were found to have a food energy intake below the daily average and 2 above. We want to quantify the impact of 9 out of 11, i.e. how much evidence have we got that these women are different from the norm?
To analyse these data in StatsDirect you must select the sign test from the exact tests section of the analysis menu. Then choose the default 95% two sided confidence interval.
For this example:
For 11 pairs with 9 on one side.
Cumulative probability (2-sided) = 0.06543, (1-sided) = 0.032715
Exact (Clopper-Pearson) 95% confidence limits for the proportion:
Lower Limit = 0.482244
Proportion = 0.818182
Upper Limit = 0.977169
If we were confident that this group could only realistically be expected to have a lower caloric intake, and that we would not be interested in higher caloric intakes, then we could make inference from the one sided P value. We do not, however, have evidence for such an assumption, so we cannot reject the null hypothesis that the proportion is not different from 0.5. We can say with 95% confidence that the true population value of the proportion lies somewhere between 0.48 and 0.98. The most sensible response to these results would be to go back and collect more data.
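The figures above can be cross-checked outside StatsDirect. The following is a minimal sketch, assuming Python with a recent version of SciPy (an assumption on my part, not part of StatsDirect), using an exact binomial test and a Clopper-Pearson interval; it should reproduce the P values and confidence limits quoted above.

```python
from scipy.stats import binomtest

# Sign test: 9 of 11 observations fall on one side of the expected 50:50 split
two_sided = binomtest(k=9, n=11, p=0.5, alternative="two-sided")
one_sided = binomtest(k=9, n=11, p=0.5, alternative="greater")

print(two_sided.pvalue, one_sided.pvalue)   # ~0.06543 and ~0.032715
print(9 / 11)                               # observed proportion, ~0.818182

# Exact (Clopper-Pearson) 95% confidence interval for the proportion
ci = two_sided.proportion_ci(confidence_level=0.95, method="exact")
print(ci.low, ci.high)                      # ~0.482244 to ~0.977169
```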
See also: P values, confidence intervals.
Menu location: Analysis_Exact_Fisher.
Like the chi-square test for fourfold (2 by 2) tables, Fisher's exact test examines the relationship between the two dimensions of the table (classification into rows vs. classification into columns). The null hypothesis is that these two classifications are not associated.
The P values in this test are computed by considering all possible tables that could give the observed row and column totals. A mathematical short cut relates these permutations to factorials, a form shown in many textbooks. StatsDirect uses the hypergeometric distribution for the calculation (Conover, 1999). The test statistic is the first count A, which follows a hypergeometric distribution under the null hypothesis; its expected value is reported with the results.
This exact treatment of the fourfold table should be used instead of the chi-square test when any expected frequency is less than 1, or when more than 20% of the expected frequencies are less than or equal to 5. With StatsDirect, it is reasonable to use Fisher's exact test by default because the computational method used can cope with large numbers.
StatsDirect uses the definition of a two sided P value described by Bailey (1977): P values for all possible tables with P less than or equal to that for the observed table are summed. Some authors prefer simply to double the one sided P value (Armitage and Berry, 1994; Bland, 2000).
Consider using mid-P values and intervals when you have several similar studies within an overall investigation (Armitage and Berry, 1994; Barnard, 1989). Mid-P results are not shown for very large tables; if you want to calculate mid-P for large numbers then please use the odds ratio confidence interval function.
Assumptions:
· each observation is classified into exactly one cell
· the row and column totals are fixed, not random
The assumption of fixed marginal (row/column) totals is controversial and causes disagreements, such as over the best approach to two sided inference from this test.
DATA INPUT:
Observed frequencies should be entered as a standard fourfold table:
                 | feature present | feature absent
outcome positive | a               | b
outcome negative | c               | d
Example
From Armitage and Berry (1994, p. 138).
The following data compare malocclusion of teeth with method of feeding infants.
           | Normal teeth | Malocclusion
Breast fed | 4            | 16
Bottle fed | 1            | 21
To analyse these data in StatsDirect you must select Fisher's exact test from the exact tests section of the analysis menu. Enter the frequencies into the contingency table on screen as shown above.
For this example:
Rearranged table:
4 | 1 | 5 |
16 | 21 | 37 |
20 | 22 | 42 |
Expectation of A = 2.380952
One sided (upper tail) P = 0.1435 (doubled one sided P = 0.2871)
Two sided (by summation) P = 0.1745
One sided mid-P = 0.0809
Two sided mid-P = 0.1618
Here we cannot reject the null hypothesis that there is no association between these two classifications, i.e. between feeding method and malocclusion.
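As a cross-check, here is a minimal sketch assuming Python with SciPy (an assumption, not part of StatsDirect). SciPy's two sided Fisher P sums the probabilities of tables no more probable than the observed one, which matches the summation definition described above; doubling the one sided value gives the alternative two sided figure.

```python
from scipy.stats import fisher_exact

# Breast fed / bottle fed (rows) vs normal teeth / malocclusion (columns)
table = [[4, 16],
         [1, 21]]

_, p_upper = fisher_exact(table, alternative="greater")    # one sided (upper tail)
_, p_two   = fisher_exact(table, alternative="two-sided")  # two sided by summation

print(p_upper, 2 * p_upper, p_two)   # ~0.1435, ~0.2871, ~0.1745
```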
See also: P values, confidence intervals.
Menu location: Analysis_Exact_Expanded Fisher.
This allows you to see a conventional Fisher's exact (Fisher-Irwin) test in more detail, including mid-P values. The complete conditional distribution for the observed marginal totals is displayed (Bailey, 1977).
Like the chi-square test for fourfold (2 by 2) tables, Fisher's exact test examines the relationship between the two dimensions of the table (classification into rows vs. classification into columns). The null hypothesis is that these two classifications are not associated.
The P values in this test are computed by considering all possible tables that could give the observed row and column totals. A mathematical short cut relates these permutations to factorials, a form shown in many textbooks. StatsDirect uses the hypergeometric distribution for the calculation (Conover, 1999). The test statistic is the first count A, which follows a hypergeometric distribution under the null hypothesis; its expected value is reported with the results.
This exact treatment of the fourfold table should be used instead of the chi-square test when any expected frequency is less than 1, or when more than 20% of the expected frequencies are less than or equal to 5. With StatsDirect, it is reasonable to use Fisher's exact test by default because the computational method used can cope with large numbers.
StatsDirect uses the definition of a two sided P value described by Bailey (1977): P values for all possible tables with P less than or equal to that for the observed table are summed. Many authors prefer simply to double the one sided P value (Armitage and Berry, 1994; Bland, 2000).
Consider using mid-P values and intervals when you have several similar studies within an overall investigation (Armitage and Berry, 1994; Barnard, 1989).
Assumptions:
· each observation is classified into exactly one cell
· the row and column totals are fixed, not random
The assumption of fixed marginal (row/column) totals is controversial and causes disagreements, such as over the best approach to two sided inference from this test.
DATA INPUT:
Observed frequencies should be entered as a standard fourfold table:
                 | feature present | feature absent
outcome positive | a               | b
outcome negative | c               | d
Example
From Armitage and Berry (1994, p. 138).
The following data compare malocclusion of teeth with type of feeding received by infants.
           | Normal teeth | Malocclusion
Breast fed | 4            | 16
Bottle fed | 1            | 21
To analyse these data in StatsDirect you must select the expanded Fisher's exact test function from the exact tests section of the analysis menu. Enter the frequencies into the contingency table on screen as shown above.
For this example:
Rearranged table:
4 | 1 | 5 |
16 | 21 | 37 |
20 | 22 | 42 |
Expectation of A = 2.380952
A | Lower Tail | Individual P | Upper Tail |
0 | 1.000000000000000 | 0.030956848030019 | 0.030956848030019 |
1 | 0.202939337085679 | 0.171982489055660 | 0.969043151969981 |
2 | 0.546904315196998 | 0.343964978111320 | 0.797060662914321 |
3 | 0.856472795497186 | 0.309568480300188 | 0.453095684803002 |
4 | 0.981774323237738 | 0.125301527740552 | 0.143527204502814 |
5 | 1.000000000000000 | 0.018225676762262 | 0.018225676762262 |
One sided (upper tail) P = 0.1435 (doubled one sided P = 0.2871)
Two sided (by summation) P = 0.1745
One sided mid-P = 0.0809
Two sided mid-P = 0.1618
Here we cannot reject the null hypothesis that there is no association between these two classifications, i.e. between feeding mode and malocclusion.
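The conditional distribution shown above can also be tabulated directly from the hypergeometric distribution. Below is a sketch assuming Python with SciPy (an assumption, purely illustrative); the margins of the rearranged table give N = 42, a first column total of 5 and a first row total of 20, and the mid-P follows from halving the probability of the observed table. Doubling the one sided mid-P reproduces the two sided mid-P figure shown above.

```python
from scipy.stats import hypergeom

# A is the first cell of the rearranged table; margins: N = 42, column total 5, row total 20
dist = hypergeom(M=42, n=5, N=20)
a_obs = 4

print("A, lower tail, individual P, upper tail")
for a in range(6):
    print(a, dist.cdf(a), dist.pmf(a), dist.sf(a - 1))

p_upper = dist.sf(a_obs - 1)                          # one sided (upper tail), ~0.1435
p_mid   = dist.sf(a_obs) + 0.5 * dist.pmf(a_obs)      # one sided mid-P, ~0.0809
print(p_upper, p_mid, 2 * p_mid)
```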
See also: P values, confidence intervals.
Menu location: Analysis_Chi-Square_r by c.
The r by c chi-square test in StatsDirect uses a number of methods to investigate two way contingency tables that consist of any number of independent categories forming r rows and c columns.
Tests of independence of the categories in a table are the chi-square test, the G-square (likelihood-ratio chi-square) test and the generalised Fisher exact (Fisher-Freeman-Halton) test. All three tests indicate the degree of independence between the variables that make up the table.
The generalised Fisher exact test is difficult to compute (Mehta and Patel, 1983, 1986a); it may take a long time and it may not be computable for the table that you enter. If the Fisher exact method cannot be computed practically then a hybrid method based upon Cochran's rules is used (Mehta and Patel, 1986b); this may also fail with large tables and/or numbers. The Fisher-Freeman-Halton result is quoted with just one P value because it is implicitly two sided.
Relating the Fisher-Freeman-Halton statistic to the Pearson Chi-square statistic:
·The null hypothesisis independence between row and column categories.
· Let t denote a table from the set of all tables with the same row and column margins.
· Let D(t) be the measure of discrepancy.
· The exact two sided P value = P[D(t) >= D(t observed)] = the sum of the hypergeometric probabilities of those tables where D(t) is larger than or equal to D(t) for the observed table (a brute-force sketch of this definition follows this list).
· In large samples the distribution of D(t), conditional on fixed row and column margins, converges to the chi-square distribution with (r-1)(c-1) degrees of freedom.
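The definition above can be made concrete with a brute-force sketch (assuming Python; illustrative only, and practical only for small tables because the number of tables with fixed margins grows very quickly). Here extremeness is measured by the conditional probability itself, so the exact P is the sum of the probabilities of all tables with the observed margins that are no more probable than the observed table. StatsDirect itself uses the Mehta and Patel network algorithm rather than enumeration.

```python
from math import factorial, prod

def table_prob(table, row_tot, col_tot, n):
    """Conditional (multiple hypergeometric) probability of a table given its margins."""
    num = prod(factorial(t) for t in row_tot) * prod(factorial(t) for t in col_tot)
    den = factorial(n) * prod(factorial(x) for row in table for x in row)
    return num / den

def tables_with_margins(row_tot, col_tot):
    """Yield every table of non-negative counts with the given row and column totals."""
    if len(row_tot) == 1:
        yield (tuple(col_tot),)                 # last row is forced by the remaining column totals
        return
    def first_rows(remaining, cols):            # all possible first rows summing to row_tot[0]
        if len(cols) == 1:
            if remaining <= cols[0]:
                yield (remaining,)
            return
        for x in range(min(remaining, cols[0]) + 1):
            for rest in first_rows(remaining - x, cols[1:]):
                yield (x,) + rest
    for first in first_rows(row_tot[0], list(col_tot)):
        reduced = [c - x for c, x in zip(col_tot, first)]
        for rest in tables_with_margins(row_tot[1:], reduced):
            yield (first,) + rest

def freeman_halton_p(observed, tol=1e-12):
    """Exact two sided P: sum of probabilities of tables no more probable than the observed one."""
    row_tot = [sum(r) for r in observed]
    col_tot = [sum(c) for c in zip(*observed)]
    n = sum(row_tot)
    p_obs = table_prob(observed, row_tot, col_tot, n)
    return sum(table_prob(t, row_tot, col_tot, n)
               for t in tables_with_margins(row_tot, col_tot)
               if table_prob(t, row_tot, col_tot, n) <= p_obs + tol)

# 2 by 3 table from the crosstabs example later in this section; expect ~0.5714
print(freeman_halton_p([[2, 2, 1], [1, 1, 3]]))
```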
The G-square statistic is less reliable than the chi-square statistic when you have small numbers. In general, you should use the chi-square statistic if the Fisher exact test is not computable. If you consult a statistician then it would be useful to provide the G-square statistic also.
These tests of independence are suitable for nominal data. If your data are ordinal then you should use the more powerful tests for trend (Armitage and Berry, 1994; Agresti, 2002, 1996).
Assumptions of the tests of independence:
· the sample is random
· each observation may be classified into one cell (in the table) only
Chi² = Σ (O - E)²/E and G² = 2 Σ O ln(O/E), summed over all cells - where, for r rows and c columns of n observations, O is an observed frequency and E is an estimated expected frequency. The expected frequency for any cell is estimated as the row total times the column total, divided by the grand total (n).
P = Σ Pf, summed over all tables with Pf less than or equal to Pf for the observed table, where Pf = (Π fi.! Π f.j!) / (f..! Π fij!) - here P is the two sided Fisher probability, Pf is the conditional probability for a table given fixed row and column totals (fi. and f.j respectively), f.. is the total count, fij is an individual cell count and ! represents factorial.
Analysis of trend in r by c tables indicates how much of the general independence between scores is accounted for by linear trend. StatsDirect uses equally spaced scores for this purpose unless you specify otherwise. If you wish to experiment with other scoring systems then expert statistical guidance is advisable. Armitage and Berry (1994) quote an example where the extent of grief of mothers suffering a perinatal death, graded I to IV, is compared with the degree of support received by these women. In this example the overall statistic is non-significant but a significant trend is demonstrated.
r = Σ Σ u v (O - E) / √{[Σ u² Oi+ - (Σ u Oi+)²/n] [Σ v² O+j - (Σ v O+j)²/n]} - where, for r rows and c columns of n observations, O is an observed frequency and E is an estimated expected frequency (the row total times the column total divided by the grand total, n). Row scores are u, column scores are v, row totals are Oi+ and column totals are O+j.
The sample correlation coefficient r reflects the direction and closeness of linear trend in your table. r may vary between -1 and 1, just like Pearson's product moment correlation coefficient. Total independence of the categories in your table would mean that r = 0. The test for linear trend is related to r by M² = (n-1)r², and this is numerically identical to Armitage's chi-square for linear trend (Armitage and Berry, 1994; Agresti, 1996). If you interchange the rows and columns in your table then the value of M² will be the same.
The ANOVA output applies techniques similar to analysis of variance to an r by c table. Here the equality of mean column and row scores is tested. StatsDirect uses equally spaced scores for this purpose unless you specify otherwise. See Armitage for more information (Armitage and Berry, 1994).
Pearson's and Cramér's (V) coefficients of contingency and the phi (φ) correlation coefficient reflect the strength of the association in a contingency table (Agresti, 1996; Fleiss, 1981; Stuart and Ord, 1994):
phi = √(Chi²/n)
Pearson's contingency coefficient = √(Chi²/(Chi² + n))
Cramér's V = √(Chi²/(n(k - 1))), where k is the smaller of r and c
For 2 by 2 tables, Cramér's V is calculated alternatively as a signed value:
V = (ad - bc) / √((a+b)(c+d)(a+c)(b+d))
Observed values, expected values and totals are given for the table when c ≤ 8 and r ≤ 10.
If your data categories are both ordered then you will gain more power in tests of independence by using the ordinal methods due to Goodman and Kruskal (gamma) and Kendall (tau-b). Large sample, asymptotically normal variance estimates are used; the simple form is used for independence testing (Agresti, 1984; Conover, 1999; Goodman and Kruskal, 1963, 1972). Tau-b tends to be less sensitive than gamma to the choice of response categories.
Example
From Armitage and Berry (1994, p. 408).
The following data (as above) describe the state of grief of 66 mothers who had suffered a neonatal death. The table relates this to the amount of support given to these women:
Grief state | Good support | Adequate support | Poor support
I           | 17           | 9                | 8
II          | 6            | 5                | 1
III         | 3            | 5                | 4
IV          | 1            | 2                | 5
To analyse these data in StatsDirect you must select r by c from the chi-square section of the analysis menu. Choose the default 95% confidence interval. Then enter the above data as directed by the screen.
For this example:
Observed | 17 | 9 | 8 | 34 |
Expected | 13.91 | 10.82 | 9.27 | |
DChi² | 0.69 | 0.31 | 0.17 | |
Observed | 6 | 5 | 1 | 12 |
Expected | 4.91 | 3.82 | 3.27 | |
DChi² | 0.24 | 0.37 | 1.58 | |
Observed | 3 | 5 | 4 | 12 |
Expected | 4.91 | 3.82 | 3.27 | |
DChi² | 0.74 | 0.37 | 0.16 | |
Observed | 1 | 2 | 5 | 8 |
Expected | 3.27 | 2.55 | 2.18 | |
DChi² | 1.58 | 0.12 | 3.64 | |
Totals: | 27 | 21 | 18 | 66 |
TOTAL number of cells = 12
WARNING: 9 out of 12 cells have 1 ≤ EXPECTATION < 5
NOMINAL INDEPENDENCE
Chi-square = 9.9588, DF = 6, P = 0.1264
G-square = 10.186039, DF = 6, P = 0.117
Fisher-Freeman-Halton exact P = 0.1426
ANOVA
Chi-square for equality of mean column scores = 5.696401
DF = 2, P = 0.0579
LINEAR TREND
Sample correlation (r) = 0.295083
Chi-square for linear trend (M²) = 5.6598
DF = 1, P = 0.0174
NOMINAL ASSOCIATION
Phi = 0.388447
Pearson's contingency = 0.362088
Cramér's V = 0.274673
ORDINAL
Goodman-Kruskal gamma = 0.349223
Approximate test of gamma = 0: SE = 0.15333, P = 0.0228, 95% CI = 0.048701 to 0.649744
Approximate test of independence: SE = 0.163609, P = 0.0328, 95% CI = 0.028554 to 0.669891
Kendall tau-b = 0.236078
Approximate test of tau-b = 0: SE = 0.108929, P = 0.0302, 95% CI = 0.02258 to 0.449575
Approximate test of independence: SE = 0.110601, P = 0.0328, 95% CI = 0.019303 to 0.452852
Here we see that although the overall test was not significant we did show a statistically significant trend in mean scores. This suggests that supporting these mothers did help lessen their burden of grief.
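For comparison, here is a minimal sketch assuming Python with NumPy and SciPy (an assumption, not part of StatsDirect). It should reproduce the chi-square, G-square and linear trend results above using equally spaced scores; the Fisher-Freeman-Halton P needs an exact enumeration or network algorithm and is not attempted here.

```python
import numpy as np
from scipy.stats import chi2, chi2_contingency

# Grief state (rows I-IV) by support (good, adequate, poor)
table = np.array([[17, 9, 8],
                  [6, 5, 1],
                  [3, 5, 4],
                  [1, 2, 5]])

chi2_stat, chi2_p, dof, expected = chi2_contingency(table)                # ~9.96, P ~0.126
g2_stat, g2_p, _, _ = chi2_contingency(table, lambda_="log-likelihood")   # ~10.19, P ~0.117

# Linear trend with equally spaced scores: M² = (n - 1) r² on 1 degree of freedom
n = table.sum()
u = np.arange(1, table.shape[0] + 1)            # row scores
v = np.arange(1, table.shape[1] + 1)            # column scores
rows, cols = np.indices(table.shape)
x = np.repeat(u[rows.ravel()], table.ravel())   # expand scores to one value per observation
y = np.repeat(v[cols.ravel()], table.ravel())
r = np.corrcoef(x, y)[0, 1]                     # sample correlation, ~0.295
m2 = (n - 1) * r ** 2                           # ~5.66
print(chi2_stat, chi2_p, g2_stat, g2_p, r, m2, chi2.sf(m2, df=1))   # trend P ~0.0174
```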
See also: P values, confidence intervals.
Menu location: Analysis_Exact_Matched Pairs.
Paired proportions have traditionally been compared using McNemar's test, but an exact alternative due to Liddell (1983) is preferable. StatsDirect gives you both.
The exact test is a special case of the sign test. The b count in the table below is treated as a binomial variable from the sample b+c. Using the ratio R' (R' = b/c) as a point estimate of relative risk, a two sided probability is calculated for the null hypothesis that R' = 1. The test statistic is F = b/(c+1).
Confidence limits for R' are calculated as follows, for 100(1 - α)% limits:
lower limit = b / [(c + 1) F(α/2, 2(c + 1), 2b)]
upper limit = [(b + 1) / c] F(α/2, 2(b + 1), 2c)
- where F(P, n, d) is the quantile from the F distribution with upper tail probability P and with n and d degrees of freedom.
You should use the exact test for analysis; McNemar's test is included for interest only.
If you need the exact confidence interval for the difference between a pair of proportions then please see paired proportions.
DATA INPUT:
Observed frequencies should be entered as a paired fourfold table:
                                      | Control/reference: outcome present | Control/reference: outcome absent
Case/index category: outcome present  | a                                  | b
Case/index category: outcome absent   | c                                  | d
Example
From Armitage and Berry (1994, p. 127).
The data below represent a comparison of two media for culturing Mycobacterium tuberculosis. Fifty suspect sputum specimens were plated up on both media and the following results were obtained:
                    | Medium B: Growth | Medium B: No Growth
Medium A: Growth    | 20               | 12
Medium A: No Growth | 2                | 16
To analyse these data in StatsDirect you must select matched pairs (McNemar, Liddell) from the chi-square section of the analysis menu. Select the default 95% confidence interval. Enter the counts into the table as shown above.
For this example:
McNemar'stest:
Yates' continuity corrected Chi² = 5.785714 P = 0.0162
After Liddell (1983):
Point estimate of relative risk (R') = 6
Exact 95% confidence interval = 1.335744 to 55.197091
F = 4 P (two sided) = 0.0129
R' is significantly different from unity
Here we can conclude that the tubercle bacilli in the experiment grew significantly better on medium A than on medium B. With 95% confidence we can state that the chances of a positive culture are between 1.34 and 55.20 times greater on medium A than on medium B.
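A minimal sketch of the calculation, assuming Python with SciPy (an assumption on my part). The exact two sided P here is the binomial (sign test) probability for b out of the b + c discordant pairs, and the limits follow the F-distribution formulae given above.

```python
from scipy.stats import binomtest, f

b, c = 12, 2        # discordant pairs: growth on A only (b), growth on B only (c)
alpha = 0.05

r_prime = b / c                                 # point estimate of relative risk, = 6
p_exact = binomtest(b, n=b + c, p=0.5).pvalue   # exact two sided P, ~0.0129

lower = b / ((c + 1) * f.ppf(1 - alpha / 2, 2 * (c + 1), 2 * b))    # ~1.34
upper = ((b + 1) / c) * f.ppf(1 - alpha / 2, 2 * (b + 1), 2 * c)    # ~55.2
print(r_prime, p_exact, lower, upper)
```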
See also: P values, confidence intervals.
Menu location: Analysis_Exact_Odds Ratio CI.
Odds = probability / (1 - probability), therefore odds can take on any value between 0 and infinity, whereas probability may vary only between 0 and 1. Odds and log odds are therefore better suited than probability to some types of calculation.
Odds ratio (OR) is related to risk ratio (RR, relative risk):
RR = (a / (a+c)) / (b / (b+d))
When a is small in comparison to c, and b is small in comparison to d (i.e. relatively small numbers of outcome positive observations or low prevalence), then c can be substituted for a+c and d can be substituted for b+d in the above. With a little rearrangement this gives the odds ratio (cross ratio, approximate relative risk):
OR = (a*d)/(b*c).
OR can therefore be related to RR by:
RR = 1/(BR+(1-BR)/OR)
- where BR is the baseline (control) response rate; BR can be estimated by b/(b+d) if it is not known from larger studies.
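As an illustration of this relationship, here is a short sketch in Python (the numbers are hypothetical, not taken from the example below):

```python
def rr_from_or(odds_ratio, baseline_rate):
    """Approximate relative risk from an odds ratio, given the baseline
    (control) response rate BR: RR = 1 / (BR + (1 - BR) / OR)."""
    return 1.0 / (baseline_rate + (1.0 - baseline_rate) / odds_ratio)

print(rr_from_or(3.0, 0.10))   # ~2.5: RR stays close to OR only when BR is small
print(rr_from_or(3.0, 0.50))   # ~1.5: with a common outcome the OR overstates the RR
```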
This function uses an exact method to construct confidence limits for the odds ratio of a fourfold table (Martin and Austin, 1991). The Fisher limits complement Fisher's exact test of independence in a fourfold table, for which one and two sided probabilities are provided here. Mid-P values are also given.
Please note that this method will take a long time with large numbers.
DATA INPUT:
Observed frequencies should be entered as a standard fourfold table:
                 | feature present | feature absent
outcome positive | a               | b
outcome negative | c               | d
sample estimate of the odds ratio = (a*d)/(b*c)
Example
From Thomas (1971).
The following data look at the criminal convictions of twins in an attempt to investigate some of the heritability of criminality.
              | Monozygotic | Dizygotic
Convicted     | 10          | 2
Not-convicted | 3           | 15
To analyse these data in StatsDirect you must select exact confidence limits for 2 by 2 odds from the exact tests section of the analysis menu. Choose the default 95% two sided confidence interval.
For this example:
Confidence limits with 2.5% lower tail area and 2.5% upper tail area (two sided):
Observed odds ratio = 25
Conditional maximum likelihood estimate of odds ratio = 21.305318
Exact Fisher 95% confidence interval = 2.753383 to 301.462338
Exact Fisher one sided P = 0.0005, two sided P = 0.0005
Exact mid-P 95% confidence interval = 3.379906 to 207.270568
Exact mid-P one sided P = 0.0002, two sided P = 0.0005
Here we can say with 95% confidence that the odds that the co-twin of a convicted individual is also convicted are between 2.75 and 301.5 times greater for identical (monozygotic) twins than for non-identical (dizygotic) twins.
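Similar quantities can be obtained with a sketch assuming Python and SciPy 1.10 or later (an assumption; scipy.stats.contingency.odds_ratio is not part of StatsDirect). Its conditional maximum likelihood estimate and exact conditional interval should be close to the Fisher figures above.

```python
from scipy.stats import fisher_exact
from scipy.stats.contingency import odds_ratio

# Convicted / not convicted (rows) vs monozygotic / dizygotic (columns)
table = [[10, 2],
         [3, 15]]

print((10 * 15) / (2 * 3))                        # observed (sample) odds ratio = 25

res = odds_ratio(table)                           # conditional maximum likelihood estimate, ~21.3
ci = res.confidence_interval(confidence_level=0.95)
print(res.statistic, ci.low, ci.high)             # limits close to 2.75 and 301.5

_, p_one = fisher_exact(table, alternative="greater")
_, p_two = fisher_exact(table, alternative="two-sided")
print(p_one, p_two)                               # both ~0.0005
```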
See also: P values, confidence intervals.
Menu location: Analysis_Rates_Poisson Rate CI.
Uncommon events in populations, such as the occurrence of specific diseases, are usefully modelled using a Poisson distribution. A common application of Poisson confidence intervals is to incidence rates of diseases (Gail and Benichou, 2000; Rothman and Greenland, 1998; Selvin, 1996).
The incidence rate is estimated as the number of events observed divided by the time at risk of event during the observation period.
Technical validation
Exact Poisson confidence limits for the estimated rate are found as the Poisson means, for distributions with the observed number of events and probabilities relevant to the chosen confidence level, divided by the time at risk. The relationship between the Poisson and chi-square distributions is employed here (Ulm, 1990):
Yl = χ²(2Y, 1 - a/2) / 2
Yu = χ²(2(Y + 1), a/2) / 2
- where Y is the observed number of events, Yl and Yu are the lower and upper confidence limits for Y respectively, χ²(n, a) is the chi-square quantile for upper tail probability a on n degrees of freedom, and 100(1 - a)% is the chosen confidence level. The confidence limits for the rate are Yl and Yu divided by the time at risk.
Example
Say that 14 events are observed in 200 people studied for 1 year and 100 people studied for 2 years.
The person-time at risk is 200 + 100 × 2 = 400 person-years.
For this example:
Events observed = 14
Time at risk of event = 400
Poisson (e.g. incidence) rate estimate = 0.035
Exact 95% confidence interval =0.019135 to 0.058724
Here we can say with 95% confidence that the true population incidence rate for this event lies between 0.02 and 0.06 events per person-year.
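A minimal sketch of this calculation, assuming Python with SciPy (an assumption), using the chi-square relationship described above:

```python
from scipy.stats import chi2

events, person_time, alpha = 14, 400.0, 0.05

# Exact limits for the Poisson count, then divide by the time at risk
y_lower = chi2.ppf(alpha / 2, 2 * events) / 2
y_upper = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2

print(events / person_time)                           # rate estimate, 0.035
print(y_lower / person_time, y_upper / person_time)   # ~0.0191 to ~0.0587
```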
See also: incidence rate comparisons, confidence intervals.
Menu location: Analysis_Crosstabs.
This is a two or three way cross tabulation function. If you have two columns of numbers that correspond to different classifications of the same individuals then you can use this function to give a two way frequency table for the cross classification. This can be stratified by a third classification variable.
For two way crosstabs, StatsDirect offers a range of analyses appropriate to the dimensions of the contingency table. For more information see chi-square tests and exact tests.
For three way crosstabs, StatsDirect offers either odds ratio (for case-control studies) or relative risk (for cohort studies) meta-analyses for 2 by 2 by k tables, and generalised Cochran-Mantel-Haenszel tests for r by c by k tables.
Example
A database of test scores contains two fields of interest, sex (M = 1, F = 0) and grade of skin reaction to an antigen (none = 0, weak + = 1, strong + = 2). Here is a list of those fields for 10 patients:
Sex | Reaction |
0 | 0 |
1 | 1 |
1 | 2 |
0 | 2 |
1 | 2 |
0 | 1 |
0 | 0 |
0 | 1 |
1 | 2 |
1 | 0 |
In order to get a cross tabulation of these from StatsDirect you should enter these data in two workbook columns. Then choose crosstabs from the analysis menu.
For this example:
      | Reaction 0 | Reaction 1 | Reaction 2
Sex 0 | 2          | 2          | 1
Sex 1 | 1          | 1          | 3
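Outside StatsDirect, the same cross tabulation can be built with a short sketch, assuming Python with pandas (an assumption, purely illustrative):

```python
import pandas as pd

sex      = [0, 1, 1, 0, 1, 0, 0, 0, 1, 1]
reaction = [0, 1, 2, 2, 2, 1, 0, 1, 2, 0]

table = pd.crosstab(pd.Series(sex, name="Sex"),
                    pd.Series(reaction, name="Reaction"))
print(table)
# Reaction  0  1  2
# Sex
# 0         2  2  1
# 1         1  1  3
```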
We could then proceed to an r by c (2 by 3) contingency table analysis to look for association between sex and reaction to this antigen:
Contingency table analysis
Observed | 2 | 2 | 1 | 5 |
% of row | 40% | 40% | 20% | |
% of col | 66.67% | 66.67% | 25% | 50% |
Observed | 1 | 1 | 3 | 5 |
% of row | 20% | 20% | 60% | |
% of col | 33.33% | 33.33% | 75% | 50% |
Total | 3 | 3 | 4 | 10 |
% of n | 30% | 30% | 40% |
TOTAL number of cells = 6
WARNING: 6 out of 6 cells have EXPECTATION < 5
NOMINAL INDEPENDENCE
Chi-square = 1.666667, DF = 2, P = 0.4346
G-square = 1.726092, DF = 2, P = 0.4219
Fisher-Freeman-Halton exact P = 0.5714
ANOVA
Chi-square for equality of mean column scores = 1.5
DF = 2, P = 0.4724
LINEAR TREND
Sample correlation (r) = 0.361158
Chi-square for linear trend (M²) = 1.173913
DF = 1, P = 0.2786
NOMINAL ASSOCIATION
Phi = 0.408248
Pearson's contingency = 0.377964
Cramér's V = 0.408248
ORDINAL
Goodman-Kruskal gamma = 0.555556
Approximate test of gamma = 0: SE = 0.384107, P = 0.1481, 95% CI = -0.197281 to 1.308392
Approximate test of independence: SE = 0.437445, P = 0.2041, 95% CI = -0.301821 to 1.412932
Kendall tau-b = 0.348155
Approximate test of tau-b = 0: SE = 0.275596, P = 0.2065, 95% CI = -0.192002 to 0.888313
Approximate test of independence: SE = 0.274138, P = 0.2041, 95% CI = -0.189145 to 0.885455