Reuters reports that a new study published in the British Medical Journal finds the Vietnam War led to the violent deaths of 3.8 million Vietnamese between 1955 and 1984, rather than the previously estimated 2.1 million.
The new estimates relied on data from nationally representative population surveys done in these countries earlier this decade to calculate deaths in wars waged from 1955 to 2002.
In most of the countries, this method pointed to much higher loss of life than broadly cited media estimates of the various war death counts had shown, the researchers said.
For example, the method indicated 3.8 million Vietnamese died in the protracted fighting in Vietnam, mostly from 1955 to 1975, compared to previous estimates cited by the researchers of 2.1 million.
To be clear (the Reuters report is fuzzy on this), the BMJ article itself is talking about violent deaths, not deaths in general. We’re talking about people killed by bombs, bullets, and bayonets, not people who died of war-related malnutrition or illness. However, there are some confusing aspects of the study, which might feed the kinds of critiques of the Lancet and Johns Hopkins studies that Megan made here and here.
Basically, the analysis is based on UN population surveys conducted in 2002-3, which covered tens of thousands of families and asked, among other things, about siblings’ violent deaths in war. This has a big advantage over surveys conducted inside active war zones, such as the Iraq death studies from the Lancet and Johns Hopkins: the bias problems that come with real-time surveying in a war zone are absent. The authors take on and incorporate several possible objections to the methodology, so if you think you’ve come up with some clever debunking, check their paper first; they may have addressed it. Objections include bad memory (it seems that in Peru, respondents accurately reported sibling deaths that took place in the Chaco War in the ’20s-’30s, over 50 years earlier); the fact that families with many war deaths are by definition underrepresented, since fewer siblings survive to report them (corrected statistically); and underrepresentation of long-ago violent deaths of old people, since their siblings will have died in the meantime and be unable to report the death.
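To make that survivorship correction concrete: a war death can only be reported by a sibling who survived, so the hardest-hit families are the least likely to show up in the data at all. Here’s a minimal sketch of the inverse-survivor weighting idea — the `sibling_histories` data and the specific weight are my illustrative assumptions, not the paper’s actual estimator:

```python
# Toy illustration (made-up data): each tuple is one sibship in the survey.
# A sibship with few survivors is underrepresented, because the survey can
# only reach it through a surviving sibling, so each death it reports is
# up-weighted by sibship_size / survivors.
sibling_histories = [
    # (sibship size, survivors available to report, war deaths reported)
    (5, 4, 1),
    (4, 2, 2),
    (4, 1, 3),  # heavily hit family: only one survivor left to report
]

# Unadjusted tally of reported deaths.
naive = sum(d for _, _, d in sibling_histories)

# Adjusted tally: deaths from sibships with few survivors count for more.
adjusted = sum(d * n / s for n, s, d in sibling_histories)

print(naive)     # 6
print(adjusted)  # 17.25 — the correction pulls the estimate upward
```

The actual BMJ correction is more sophisticated, but the direction is the point: ignoring survivorship biases the raw count downward.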
The authors do note one problem:
The fact that respondents in certain household surveys are able to localise the time of the death of their sibling as far back as six decades is remarkable, but it is important to note that the ability to do so is likely to vary by country. Little empirical information about recall bias exists in general because of the lack of an ideal standard by which to judge its degree and determinants (though research on this topic is ongoing [46]).
But this is a problem with assigning the date of a violent death, not its cause. In any case, the authors divided the deaths into ten-year chunks, so while respondents may err by a few years, few were likely to assign a death to the wrong decade.
The authors then compared the figures they came up with to a system of tracking war deaths often treated as authoritative, the Uppsala University/Peace Research Institute Oslo (Uppsala/PRIO) system, which reconciles media reports, official sources such as mortuary statistics, and post-conflict investigations of deaths. Measured against their population surveys of sibling deaths, the authors find the Uppsala/PRIO figures for conflicts generally too low. For Vietnam, they come up with a total of 3.8 million violent deaths across three decade-long periods — 1955-64, 1965-74, and 1975-84 — reflecting the full length of the war, vs. the Uppsala/PRIO figure of 2.1 million.
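As a quick arithmetic check on the size of that gap, using only the two totals quoted above — note that the Vietnam-specific multiplier comes out well under the roughly threefold gap discussed below for the study as a whole:

```python
# Totals from the post: survey-based estimate vs the Uppsala/PRIO figure.
survey_estimate = 3.8e6   # sibling-survey estimate, 1955-84
uppsala_prio = 2.1e6      # passive-reporting estimate

ratio = survey_estimate / uppsala_prio
print(round(ratio, 2))  # 1.81
```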
So, here are some confusing things. First, the authors’ figures for 1965-74 are almost exactly the same as the Uppsala/PRIO figures. The big jumps are in deaths from 1955-64 and from 1975-84. Some of this may make sense: few of the deaths in South Vietnam during the Diem regime’s campaign against the Binh Xuyen warlords and the Viet Cong in the late ’50s would ever have been reported, and reporting of deaths in the early stages of the major VC insurgency in ’62-’64 might have been quite low, before US forces arrived in ’65 and brought more consistent casualty reporting. Meanwhile, in ’75 the South Vietnamese Army’s casualty reporting system collapsed as the army disintegrated under attack; all the figures on how many ARVN soldiers died in North Vietnam’s final offensive in spring ’75 have always been guesstimates. Obviously, no one killed in a Communist reeducation camp after the war ended ever had their death reported by media, and it’s possible some Southerners would report these deaths as violent deaths in wartime. Finally, it’s my anecdotal impression from talking to middle-aged Vietnamese over the years that the Vietnam-China border war of ’79 killed a heck of a lot of people, and few of those casualties ever made it into the press. And of course the same goes for the invasion of Cambodia in ’78.
But. Here’s the main issue for me. While the sample of families surveyed in the 2002-3 population surveys across the 13 countries where war deaths were substantial is very large, the total number of violent wartime deaths actually reported is not.
A total of 43 874 sibling deaths were reported in the 13 surveys, of which 917 were a result of war injuries. Some 38 613 deaths, of which 797 were due to war, occurred after 1955 and were eligible to be included in our analysis…. The survey captured a total of 290 war deaths in Vietnam, of which 155 were reported between 1965 and 1975, the period widely considered to be the most deadly phase of the war.
So we’re making these estimates from a pretty small number of actual reported deaths; I’m not sure what the confidence intervals are on the exact casualty numbers. On the bright side, the basic results jibe with common sense: most of the violent deaths in Vietnam occurred between 1965 and ’74, and most of those in Ethiopia between ’75 and ’84, reflecting the fall of Selassie, the Red Terror, and the wars against Somalia and Eritrea.
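For a rough sense of the sampling error implied by a count that small, here’s a back-of-envelope calculation — my own simplification, treating the 290 Vietnam reports as a simple Poisson count; it ignores the survey’s weighting and design effects, so the real intervals would be wider:

```python
import math

reported = 290          # Vietnam war deaths actually captured by the survey
point_estimate = 3.8e6  # the scaled-up national figure

# If the 290 reports behaved like a simple Poisson count, the 95%
# interval on the count would be about reported +/- 1.96 * sqrt(reported).
half_width = 1.96 * math.sqrt(reported)
rel_error = half_width / reported  # relative uncertainty from the count alone

low = point_estimate * (1 - rel_error)
high = point_estimate * (1 + rel_error)
print(f"+/-{rel_error:.1%}: roughly {low/1e6:.1f}M to {high/1e6:.1f}M")
```

Even under these generous assumptions, the count alone contributes an uncertainty of around eleven percent — several hundred thousand deaths either way on the headline figure.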
So what are we supposed to make of this information? I don’t really know. The general import of the study seems to be that the Uppsala/PRIO method of “passive” reported-death tracking consistently produces figures that need to be multiplied by roughly three to match the population-survey data. I find that vaguely persuasive, because I don’t think most deaths in war zones make it into passive reporting systems. On the other hand, it seems weird to me that the same multiplier would hold up across different war zones with different conditions. But in terms of Vietnam, the main anecdotal thing that sits in the back of my head is that US bombing of the North has never been seen as a source of significant casualties compared to other aspects of the war, yet when I talk to people here, a lot of them seem to have had relatives killed in that bombing. One of the first people I met in Vietnam, our daughter’s former nanny, had a younger brother killed in the Christmas bombings in ’72, which supposedly killed only about 1,500 civilians in Hanoi. Just two weeks ago I was talking to another guy whose house was blasted to nothing in the Christmas bombings, though no one was killed. That has always seemed weird to me. Meanwhile, if the “passive reported” deaths from ’65-’74 roughly track the population-survey deaths, that might be because US forces were eager to report kills during those years; the key moral question for that period is how many violent deaths reported as enemy kills were in fact slaughter of civilians.