Above is a plot of height vs. time for a sphere falling through air, both as predicted with drag (black) and drag free (red). As you can see by drawing a horizontal line at h = 0.0, there should be a significant effect: roughly a 50% longer fall time. Here t = 1 corresponds to 0.534 seconds (shown in the title). The actual data, on the other hand, shows nothing like these fall times. In fact the data shows t' values < 1, which suggests that the Android-based stopwatch plus user reaction time and anticipation is a big issue. Uncertainty abounds, but it's not very interesting uncertainty (we know the time measurement method is crude). If we want to learn about drag we may well need much more accurate time measurements, or drops from greater heights.
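The predicted curve can be reproduced with a simple numerical integration. Here is a minimal sketch, assuming quadratic drag and hypothetical sphere parameters (a light 2 cm radius, 3 g sphere); the 1.4 m drop height is back-computed from the 0.534 s drag-free time quoted above:

```python
import math

def fall_time_with_drag(h0, radius, mass, g=9.81, rho_air=1.2, cd=0.47, dt=1e-4):
    """Euler-integrate a sphere dropped from rest with quadratic drag.

    Returns the time (in seconds) to fall from height h0 to h = 0.
    rho_air and cd (drag coefficient of a sphere) are rough standard values.
    """
    area = math.pi * radius**2
    k = 0.5 * rho_air * cd * area / mass  # drag deceleration per unit v^2
    h, v, t = h0, 0.0, 0.0
    while h > 0.0:
        a = g - k * v * v   # gravity minus drag
        v += a * dt
        h -= v * dt
        t += dt
    return t

h0 = 1.4  # metres; sqrt(2 * 1.4 / 9.81) gives the 0.534 s drag-free time
t_free = math.sqrt(2 * h0 / 9.81)
t_drag = fall_time_with_drag(h0, radius=0.02, mass=0.003)
print(t_free, t_drag)  # drag can only lengthen the fall
```

How much longer the drag time comes out depends entirely on the assumed mass and radius; a very light sphere (low mass per unit area) is what produces effects on the order of the 50% quoted above.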

Rama suggested using an Xbox Kinect sensor to get fairly precise distance vs. time data, which would help because velocity is more strongly affected by drag. In any case I will reconsider the experiment a little. The results as-is are still interesting when it comes to philosophical aspects of uncertainty. For example, $t/\sqrt{2h_0/g_n}$ is less than 1 for most experiments. This either means that $g$ was significantly greater than $g_n$, that the true drop height was less than $h_0$, or that there was a bias towards stopping the stopwatch early. Bayesians will have no trouble putting prior distributions over the various options. Since the tape measure and the local value of $g$ are relatively well calibrated, I certainly suspect that the timer was biased low. This might be caused by the timer program itself (attempting to account for a delay in response, it might subtract a fixed delay estimate, for example), or by the experimenter pressing the start button late, the stop button early, or both.
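The normalization above is easy to state in code. A minimal sketch, using a hypothetical measured time of 0.50 s and the 1.4 m drop height implied by the 0.534 s figure; $g_n$ is standard gravity:

```python
import math

G_N = 9.80665  # standard gravity g_n, m/s^2

def t_prime(t_measured, h0, g=G_N):
    """Measured fall time divided by the ideal drag-free time sqrt(2 h0 / g)."""
    return t_measured / math.sqrt(2.0 * h0 / g)

# Hypothetical reading: 0.50 s measured over a 1.4 m drop.
tp = t_prime(0.50, 1.4)
print(tp)  # comes out below 1: measured time is shorter than the ideal drag-free time

# If the height and g are trusted, the discrepancy becomes an implied timer bias
# (negative means the watch was stopped early relative to the ideal time).
bias = 0.50 - math.sqrt(2.0 * 1.4 / G_N)
print(bias)
```

Since drag can only push $t'$ above 1, any value below 1 has to come from one of the three explanations listed above, which is what makes the timer-bias story compelling.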

So, we'll have to see what happens with this data. Perhaps it's overwhelmed by time measurement noise, or perhaps we can recover some effective radii even with the measurement noise.