The actual results from the Defense Manpower Data Center (“DMDC”) Sexual Assault and Gender Relations (“SAGR”) Survey for 2013-14 have finally been released. They are posted on the DMDC website, and the release was announced in an email from the Academy to faculty, staff and midshipmen today.
I haven’t been through the documents in any detail, but even a quick scan allowed me to identify one error in the Academy’s summary of the SAGR study that is readily apparent now that we have the actual SAGR results. We’ll have to add an analysis of all of this new data to our backlog of posts.
Here is a direct link to the SAGR results for 2013-14.
Additionally, here are links to the 2015 Focus Group report and the 2013 Focus Group report.
This is good progress. Now let’s hope that they’ll also release the actual report for the 2011-12 SAGR survey. That way, we will be able to make an apples-to-apples comparison of the survey results from year to year.
Well, the 2015 Focus Group report was a worthless read. Every subject seemed to be prefaced with “Participants disagreed,” or covered with general and vague statistics. There were no actual percentages showing how many agreed or disagreed on a single question. “Themes provided in this report are qualitative in nature and cannot be generalized to the full population of USMMA students, faculty and staff. Themes should be considered as the attitudes and opinions of focus group participants only and not the opinions of all USMMA students, faculty and staff.” It seems the media and the administration blew right past this statement when coming to their conclusions.
It is worth noting that the 2015 focus group report is marked “For Official Use Only” and they may have violated rules/regulations when releasing it.
While you may be technically correct, I think it is a case of “over-classification.” It should never have been designated FOUO to begin with. The reports for the other academies are routinely released.
From Page V-VI: “Adjustments for nonresponse—Although 2014 SAGR was a census of all students, some students did not respond to the survey, and others responded or started the survey but did not complete it, (i.e., did not provide the minimum number of responses required for the survey to be considered complete). RSSC adjusts for this nonresponse in creating population estimates by first calculating the base weights as the reciprocal of the probability of selection (in 2014 SAGR the base weights take on the value one (1) since the survey was a census). Next RSSC adjusts the base weights for those who did not respond to the survey, then adjusts for those who started the survey but did not complete it.”
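To make the quoted passage concrete, here is a minimal sketch of the weighting mechanics it describes, using the respondent and population counts discussed below (537 and 936). This is an illustration only: the single adjustment cell is a simplifying assumption on my part, and RSSC’s actual adjustment almost certainly uses multiple cells (e.g., by class year and gender), though the arithmetic in each cell works the same way.

```python
# Hypothetical illustration of the weighting scheme quoted above.
# In a census, every student's probability of selection is 1, so the
# base weight (the reciprocal of that probability) is also 1. The
# nonresponse adjustment then inflates respondents' weights so they
# "stand in" for the students who did not respond.

population = 936       # students the census attempted to reach
respondents = 537      # students who completed the survey

base_weight = 1 / 1.0  # census: probability of selection is 1

# Simplest possible nonresponse adjustment (one adjustment cell):
# scale each respondent's weight so the weights sum to the population.
response_rate = respondents / population
adjusted_weight = base_weight / response_rate

print(round(adjusted_weight, 3))                 # each respondent represents ~1.743 students
print(round(adjusted_weight * respondents))      # 936: weights now sum to the population
```

The weighted estimate of any rate is then just the weighted proportion among respondents, which is exactly why applying it to the full population depends on respondents resembling non-respondents.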
From what little I know about statistics (courses in undergrad and graduate school), this adjustment falls under the “lies, damn lies, and statistics” category. Taking the “Unwanted Sexual Contact” rate among respondents (4.2%) and applying it to all students is incorrect (page VIII). Their “constructed CI” would have to be based on the premise that the 537 respondents (the sample) are representative of the 936 students the survey attempted to reach (the population). But 537 was not a pre-determined sample size; it is simply the number who chose to respond to a blanket census of 936, and there is zero way to measure the actual distribution of that self-selected sample to ensure it is representative of the 936 as a whole. Zero. Their statistical modeling is false. Therefore their conclusions, when applied to all 936 students, are false.
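To make the objection above concrete, here is what a conventional 95% confidence interval around the 4.2% figure would look like if one accepted the disputed premise that the 537 respondents were a simple random sample of the 936 students. The arithmetic is easy; the contested part is the premise, not the math. The finite-population correction and 1.96 multiplier are standard textbook ingredients, not anything taken from the report itself.

```python
import math

# Hypothetical CI computation under the SRS assumption disputed above.
n, N, p = 537, 936, 0.042  # respondents, population, observed rate

# Finite-population correction: shrinks the SE because the "sample"
# covers a large share of the population.
fpc = math.sqrt((N - n) / (N - 1))
se = math.sqrt(p * (1 - p) / n) * fpc
margin = 1.96 * se  # half-width of a 95% CI

print(round(100 * margin, 2))  # about 1.11 percentage points
```

Note that this interval quantifies only sampling error. It says nothing about self-selection bias, which is exactly the gap between the 537 respondents and the 936 students that the comment above is pointing at: no CI formula can fix a non-random sample.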
You raise something that has always bothered me about these surveys. I come at it from a completely non-technical point of view. SA/SH is believed to be widely underreported, but in this situation you have 936 students who were offered an opportunity to anonymously respond and report incidents of SA/SH, and a far higher participation rate by female midshipmen than by men. It seems to me that the women who did not respond to the survey are likely to have experienced SA/SH at a lower rate than those who did respond, especially when it comes to the more severe forms of SA/SH. I do not discount the possibility that there may well be one or more students for whom the effects of an incident are too great to allow a response, but my intuition tells me that victims participated in the survey at a higher rate than non-victims did. Or maybe a better way to put it: isn’t it most likely that non-victims blew off the survey (because they don’t consider it a high priority) at a higher rate than victims declined to participate because the emotional toll of responding was too great? I don’t know the answer, so I’m just throwing it out there for consideration.
We cannot extrapolate either way. Helis has a PhD. If he took a statistics class in that pursuit, he should know better than to accept the study’s methodology of applying the responses of the ~60% to all students. It points to either playing politics or a personal agenda. Either way, it is a dereliction of his duty to the midshipmen.