The series so far:
- Statistics in SQL: Pearson’s Correlation
- Statistics in SQL: Kendall’s Tau Rank Correlation
- Statistics in SQL: Simple Linear Regressions
- Statistics in SQL: The Kruskal–Wallis Test
- Statistics in SQL: The Mann–Whitney U Test
- Statistics in SQL: Student's T Test
Imagine you have some data. Maybe you’ve done a web redesign and you want to do some A/B testing to see whether the redesign makes your visitors spend more time reading your content. You set up an experiment in which visitors are randomly sent to either the old or the new design, and you measure how long they spend engaged with the content. (You try to ensure that all other factors besides the design of the website are ‘controlled’, meaning that they are equal for both samples.) Is the new design significantly better than the old one? You need to disprove the ‘null hypothesis’ that the two samples, from those who got the new design and those who got the old one, are drawn from the same population.
There are several ways that you can test this, but nobody is going to argue with you if you use a Mann–Whitney U test to check whether two samples come from the same distribution. It doesn’t require that the data has any particular type of distribution. It just requires that each observation is made by a different member of the population, so that all the observations from both groups are independent of each other. It is really just a test of the difference in mean rank between the two groups when their observations are pooled and ranked together. For this to work, it must be possible to compare any of the observations with any of the others and say which of the two is greater. Your objective is to disprove the assumption that the distributions of both populations are equal. Calculating a measure of the difference is simple, and was designed to be done easily by hand, before computers. The probability that the observed difference occurred by chance is easily calculated for large samples, because U then approximates to the normal distribution, but it is complex for small samples. Here, we have a small sample and are just interested in whether the two-tailed test is significant at the five percent level, so we dodge the bullet by using a significance lookup table for the critical value of U.
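For the large-sample case mentioned above, the standard normal approximation is z = (U − n₁n₂/2) / √(n₁n₂(n₁ + n₂ + 1)/12). As a quick sketch outside of SQL (a minimal Python version without the tie correction, which is fine as a first pass when both samples exceed about twenty observations):

```python
import math

def mann_whitney_z(u, n1, n2):
    """Large-sample normal approximation for the Mann-Whitney U statistic.
    No tie correction is applied, so treat it as approximate."""
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return (u - mu) / sigma

def two_tailed_p(z):
    """Two-tailed p-value for a standard-normal z, via the error function."""
    return math.erfc(abs(z) / math.sqrt(2))

# e.g. hypothetical samples of 25 and 30 observations with U = 230:
z = mann_whitney_z(230, 25, 30)   # about -2.45
p = two_tailed_p(z)               # about 0.014, significant at the 5% level
```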
SET NOCOUNT ON;

/* We'll start with our table of critical values of U for small samples.
   Our test is two-tailed and we are interested in the 5% level, so that
   is all we need. */
DECLARE @TwoTailedCriticalValues TABLE
  (UValue INT NOT NULL, N1 INT NOT NULL, N2 INT NOT NULL);

INSERT INTO @TwoTailedCriticalValues (UValue, N2, N1)
VALUES
  (0,8,2), (0,9,2), (0,10,2), (0,11,2), (1,12,2), (1,13,2), (1,14,2), (1,15,2), (1,16,2), (2,17,2), (2,18,2), (2,19,2), (2,20,2),
  (0,5,3), (1,6,3), (1,7,3), (2,8,3), (2,9,3), (3,10,3), (3,11,3), (4,12,3), (4,13,3), (5,14,3), (5,15,3), (6,16,3), (6,17,3), (7,18,3), (7,19,3), (8,20,3),
  (0,5,4), (1,6,4), (2,7,4), (3,8,4), (4,9,4), (4,10,4), (5,11,4), (6,12,4), (7,13,4), (9,14,4), (10,15,4), (11,16,4), (11,17,4), (12,18,4), (13,19,4), (14,20,4),
  (2,5,5), (3,6,5), (5,7,5), (6,8,5), (7,9,5), (8,10,5), (9,11,5), (11,12,5), (12,13,5), (13,14,5), (14,15,5), (15,16,5), (17,17,5), (18,18,5), (19,19,5), (20,20,5),
  (5,6,6), (6,7,6), (8,8,6), (10,9,6), (11,10,6), (13,11,6), (14,12,6), (16,13,6), (17,14,6), (19,15,6), (21,16,6), (22,17,6), (24,18,6), (25,19,6), (27,20,6),
  (8,7,7), (10,8,7), (12,9,7), (14,10,7), (16,11,7), (18,12,7), (20,13,7), (22,14,7), (24,15,7), (26,16,7), (28,17,7), (30,18,7), (32,19,7), (34,20,7),
  (13,8,8), (15,9,8), (17,10,8), (19,11,8), (22,12,8), (24,13,8), (26,14,8), (29,15,8), (31,16,8), (34,17,8), (36,18,8), (38,19,8), (41,20,8),
  (17,9,9), (20,10,9), (23,11,9), (26,12,9), (28,13,9), (31,14,9), (34,15,9), (36,16,9), (39,17,9), (42,18,9), (45,19,9), (48,20,9),
  (23,10,10), (26,11,10), (29,12,10), (33,13,10), (36,14,10), (39,15,10), (42,16,10), (45,17,10), (48,18,10), (52,19,10), (55,20,10),
  (30,11,11), (33,12,11), (37,13,11), (40,14,11), (44,15,11), (47,16,11), (51,17,11), (55,18,11), (58,19,11), (62,20,11),
  (37,12,12), (41,13,12), (45,14,12), (49,15,12), (53,16,12), (57,17,12), (61,18,12), (65,19,12), (69,20,12),
  (45,13,13), (50,14,13), (54,15,13), (59,16,13), (63,17,13), (67,18,13), (72,19,13), (76,20,13),
  (55,14,14), (59,15,14), (64,16,14), (69,17,14), (74,18,14), (78,19,14), (83,20,14),
  (64,15,15), (70,16,15), (75,17,15), (80,18,15), (85,19,15), (90,20,15),
  (75,16,16), (81,17,16), (86,18,16), (92,19,16), (98,20,16),
  (87,17,17), (93,18,17), (99,19,17), (105,20,17),
  (113,19,19), (119,20,19),
  (127,20,20);

/* Now we can add our data. Here is the number of seconds spent on the page
   with either the old design (B) or the new design (A). The 'observation'
   is the number of seconds spent on the page. There is a fair range, but it
   is not normally distributed: some people get to the page and soon realise
   that it is not relevant for them, whereas others spend a long time on it.
   With this test, the first thing you do is to rank these results, so the
   actual measurement is irrelevant; you can do a U test on just the ranked
   data for the two groups. The 'observation' is only used to achieve the
   ranking. */
DECLARE @ourResults TABLE --where we hold our results
  (TheGroup CHAR(1), Observation INT NOT NULL);

INSERT INTO @ourResults (Observation, TheGroup)
VALUES
  (3,'B'), (3,'A'), (3,'B'), (9,'B'), (9,'A'), (21,'B'), (25,'B'), (37,'A'),
  (45,'B'), (56,'B'), (75,'A'), (88,'B'), (101,'B'), (120,'A'), (120,'A'),
  (156,'B'), (206,'A'), (269,'A'), (269,'B'), (291,'B'), (308,'A'), (321,'A'),
  (346,'A'), (400,'A'), (421,'A'), (431,'A');

DECLARE @tx NUMERIC(8, 2), @nx INT, @Ntotal INT, @UValue INT;

--We now rank all the observations in order, regardless of the group
DECLARE @ourRanking TABLE
  (TheGroup CHAR(1), Observation INT NOT NULL, TheRank NUMERIC(8, 2) NOT NULL);

--insert the ranked observations into the table variable
INSERT INTO @ourRanking (TheGroup, Observation, TheRank)
  SELECT TheGroup, Observation,
         Rank() OVER (ORDER BY [@ourResults].Observation) AS TheRank
    FROM @ourResults;

--and remember how many observations there were in total
SELECT @Ntotal = @@RowCount;

--now deal with ties, giving each tied observation the shared average rank,
--so that two tied in first place would both be 1.5 and three tied in first
--place would all be 2
UPDATE @ourRanking
   SET TheRank = [@ourRanking].TheRank + ((ties.NumberOfTies - 1.00) / 2)
  FROM (SELECT Count(*) AS NumberOfTies, TheRank
          FROM @ourRanking
          GROUP BY TheRank) AS ties
    INNER JOIN @ourRanking
      ON [@ourRanking].TheRank = ties.TheRank;

SELECT * FROM @ourRanking AS ORA; --show the pooled ranking

--take the rank sum and size of the group with the larger rank sum
SELECT TOP 1 @tx = Sum(TheRank), @nx = Count(*)
  FROM @ourRanking
  GROUP BY TheGroup
  ORDER BY Sum(TheRank) DESC;

--Now we can simply calculate the Mann-Whitney U value
SELECT @UValue = @nx * (@Ntotal - @nx) + @nx * ((@nx + 1) / 2.00) - @tx;

IF @nx <= 20 AND @Ntotal - @nx <= 20 --and see if it is significant...
  SELECT 'The U score was ' + Convert(VARCHAR(5), @UValue)
    + ' from sample sizes of ' + Convert(VARCHAR(5), @nx) + ' and '
    + Convert(VARCHAR(5), @Ntotal - @nx)
    + ' observations, the critical U score for 5% significance level was '
    + Convert(VARCHAR(5), CriticalValue) + ', so the samples were'
    + CASE WHEN CriticalValue < @UValue THEN ' NOT' ELSE '' END
    + ' from significantly different populations'
  FROM
    (SELECT @UValue AS UValue,
            CASE WHEN @Ntotal - @nx > @Ntotal / 2
              THEN (SELECT UValue FROM @TwoTailedCriticalValues
                     WHERE N1 = @nx AND N2 = @Ntotal - @nx)
              ELSE (SELECT UValue FROM @TwoTailedCriticalValues
                     WHERE N1 = @Ntotal - @nx AND N2 = @nx)
            END AS CriticalValue) f;
ELSE
  SELECT 'The U score was ' + Convert(VARCHAR(5), @UValue)
    + ' from sample sizes of ' + Convert(VARCHAR(5), @nx) + ' and '
    + Convert(VARCHAR(5), @Ntotal - @nx)
    + ' observations. You''ll need to calculate the Z and p values';
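As an independent cross-check of the T-SQL above, the same pooled ranking and U calculation can be done in a few lines of plain Python (stdlib only, using the article's observations):

```python
# Cross-check of the SQL: pooled mid-ranks with ties shared, then U.
from collections import Counter

a = [3, 9, 37, 75, 120, 120, 206, 269, 308, 321, 346, 400, 421, 431]  # new design (A)
b = [3, 3, 9, 21, 25, 45, 56, 88, 101, 156, 269, 291]                 # old design (B)

# Mid-rank for each distinct value: tied observations share the average
# of the positions they occupy, just as the UPDATE statement arranges.
counts = Counter(a + b)
rank_of, pos = {}, 1
for v in sorted(counts):
    rank_of[v] = pos + (counts[v] - 1) / 2.0
    pos += counts[v]

n1, n2 = len(a), len(b)
t_a = sum(rank_of[v] for v in a)         # rank sum of group A (the larger one here)
u_a = n1 * n2 + n1 * (n1 + 1) / 2.0 - t_a
u = min(u_a, n1 * n2 - u_a)              # the smaller U goes to the lookup table
print(n1, n2, u)                         # 14 12 42.0
```

This reproduces the SQL's result: U = 42 from samples of 14 and 12.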
When executed in SQL Server, this yields the following result:
The U score was 42 from sample sizes of 14 and 12 observations, the critical U score for 5% significance level was 45, so the samples were from significantly different populations
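As a rough sanity check, we can also plug this result's numbers into the large-sample normal approximation. The samples here are really too small for it, and it ignores the tie correction, but it should broadly agree with the table's verdict:

```python
import math

# Normal approximation applied to the article's result: U = 42,
# sample sizes 14 and 12. Approximate only at these sizes.
u, n1, n2 = 42, 14, 12
mu = n1 * n2 / 2.0                                 # 84.0
sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)  # sqrt(378)
z = (u - mu) / sigma                               # z is about -2.16
p = math.erfc(abs(z) / math.sqrt(2))               # two-tailed p, about 0.03
```

A p-value of roughly 0.03 is below 0.05, which agrees with the lookup table's conclusion that the samples differ significantly.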
In this series, I’m not giving you cut-and-paste formulas for doing production statistics. The aim is to provide something you can experiment with, even if it is just changing the figures and trying things out to see how easy it is to get from statistics theory to practice. It is much better to do this, even to work things out on paper, before you crank the handle of a package such as R. There is too much magic in doing that if you don’t first get a feel for what is really being done, and what the assumptions are. The real expertise in statistics lies in ensuring that the design of the test is right and in choosing the right statistical method for the task. Otherwise, a tool like R merely gets you to the wrong answer quicker.