So what about the “scientific” DNA evidence that identified 1,594 vicsims of 9/11? If the official story is correct, then my ideas about what happened on September 11, 2001 are wrong.
According to the official story, the DNA identification process resulted in the identification of 1,594 of the 2,749 victims. Of the 19,935 separate human remains recovered from Ground Zero, 6,289 were matched to a victim: an average of about 4 human parts per identified vicsim, while more than 13,500 parts couldn’t be connected to any vicsim.
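Just to check this arithmetic myself, here is a minimal sketch in Python, using only the figures above:

```python
# Quick check of the official identification figures.
total_remains = 19_935        # separate human remains recovered from Ground Zero
identified_remains = 6_289    # remains that were matched to a victim
identified_victims = 1_594    # victims identified through DNA
total_victims = 2_749         # official number of victims

print(identified_remains / identified_victims)   # ~3.9 parts per identified vicsim
print(total_remains - identified_remains)        # 13,646 parts never connected to anyone
print(identified_victims / total_victims)        # ~0.58: barely over half identified by DNA
```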
I started my own investigation with the following thread: http://letsrollforums.com//9-11-dna-evidence-t31950.html
STAGES IN DNA IDENTIFICATION
I will first lay out the 6 stages of the DNA identification, to show that the science isn’t impossibly difficult.
1 – Gathering DNA-material at Ground Zero.
2 – Gathering DNA-material of vicsim/family.
3 – Preparing the DNA-samples to be measured.
4 – Analysis of DNA-samples.
5 – Identification of a victim.
6 – Reporting the results.
ADMINISTRATION PROCESS (HENNESSEY)
Hennessey - WORLD TRADE CENTER DNA IDENTIFICATIONS: THE ADMINISTRATIVE REVIEW PROCESS (a draft version): https://www.promega.com/~/media/fil.../ishi 13/oral presentations/hennesseyrev1.pdf
The Hennessey report describes the administrative process behind the DNA identification of the vicsims. It would have been easy to plant human remains at Ground Zero in the chaos:
Only 150 full bodies were found at Ground Zero, against more than 20,000 body parts. Most of the samples were so small that they fit in a 50-mL conical tube, so they could have been planted to be found. I don’t think it would have been easy to drop whole bodies there, but a few bone fragments or bones: I could think of 10 easy ways to do it.

In the chaos following the attacks the collection of reference samples was not always documented completely. In addition, several agencies were engaged in the collection and handling of the samples, resulting in a wide variance of how data was recorded.
I haven’t seen any “protocol” to make sure that the recovered human parts actually came from the demolished Twin Towers.
Under realistic assumptions they estimate that over 400,000 mistakes were made in the administrative process of labelling and recording the DNA material. Errors could also have been made on purpose:
There were 4,500 DNA collection events (amounting to over 12,000 items) in a two week period. The DNA collection cover sheet has about 40 fields of data that were routinely filled out. Again, assume 99% accuracy at each point of failure. That works out to 3 mistakes per cover sheet for a total of 13,500 errors. In point of fact, the donor is not recorded in about 250 of the collections. In another eleven collections, the name of the victim is not recorded.
In this best case scenario, there are an estimated 55,000 errors in the system. Given the unprecedented scale of the operation, the overall chaos of the immediate post 9-11 environment, and the incredible level of grief for informant and interviewer alike, it seems categorically unfair to expect 99% accuracy. If we allow for an entirely human 90% level of accuracy at each point of failure, the total errors in the system climbs to over 400,000.
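To make the arithmetic in that excerpt concrete, here is a minimal sketch. Note that the roughly 300 points of failure per collection event is back-solved from the quoted “3 mistakes per cover sheet”; it is not a number the report states, and the report’s 400,000 figure covers the whole system, not only the collection step:

```python
# Sketch of the error-rate arithmetic in the excerpt above.
# ASSUMPTION: ~300 points of failure per collection event, back-solved from
# "3 mistakes per cover sheet" at 99% accuracy; the excerpt itself only
# mentions ~40 cover-sheet fields and does not enumerate the rest.
collection_events = 4_500
points_of_failure_per_event = 300       # back-solved, not stated in the report

def expected_errors(accuracy):
    """Expected recording errors if every point of failure independently
    goes wrong with probability (1 - accuracy)."""
    return collection_events * points_of_failure_per_event * (1 - accuracy)

print(expected_errors(0.99))   # ~13,500: matches the quoted best-case figure for collection
print(expected_errors(0.90))   # ~135,000: collection alone; the report's 400,000+
                               # total covers every stage of the process
```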
They took several samples from some of the remains, because those remains contained tissue from more than one victim. I can understand that human remains could have fused together, but I would expect this to happen in only a very small number of samples (they don’t mention how often):
As of the first anniversary of the attack, the OCME had collected close to 20,000 discrete tissue samples from Ground Zero. Due to the intense pressures of the building’s collapse, tissue and bone from separate victims became fused together in some cases. It thus became necessary to take two samples for DNA testing from many of the remains, one from the bone and the other from the attached muscle. Thus the number of DNA samples will exceed the number of remains recovered.
NEGATIVE IDENTIFICATION (HOLLAND)
If a witness is confronted with the 5 usual suspects in a line-up, in a positive identification it wouldn’t matter if the group were extended to 10 or 25 suspects (the same perpetrator would still be picked out). In a negative identification, the best-fitting candidate is simply chosen from whoever happens to be in the group.
It looks like the DNA identification process was indeed negative identification (an elimination process), with the protocols simply adjusted to get to the desired result.
Holland et al - Development of a Quality, High Throughput DNA Analysis Procedure for Skeletal Samples to Assist with the Identification of Victims from the World Trade Center Attacks (2003): http://neuron.mefst.hr/docs/CMJ/issues/2003/44/3/12808717.pdf
The Holland report describes the making of STR profiles for the human remains found at Ground Zero. Analysing a sample results in a DNA profile that can be displayed visually, but also as numbers in a table; the standard profile covers 13 Short Tandem Repeat (STR) loci.
Here’s an example of an analysis of 3 profiles. According to the analysis, the first 2 profiles are from the same person: only D16 is completely missing and there is a small difference in TH01. I agree with that conclusion.
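Since the figure with the 3 profiles isn’t reproduced here, a minimal sketch of how such a locus-by-locus comparison works; the allele values are invented placeholders, only the comparison logic matters:

```python
# Hypothetical sketch of comparing two STR profiles locus by locus.
# The locus names are the standard 13 CODIS loci; the allele values are made up.

def compare_profiles(p1, p2, loci):
    agree, missing, conflict = [], [], []
    for locus in loci:
        a, b = p1.get(locus), p2.get(locus)
        if a is None or b is None:       # no result at this locus in one profile
            missing.append(locus)
        elif set(a) == set(b):           # same pair of alleles
            agree.append(locus)
        else:
            conflict.append(locus)
    return agree, missing, conflict

loci = ["D3S1358", "vWA", "FGA", "D8S1179", "D21S11", "D18S51", "D5S818",
        "D13S317", "D7S820", "D16S539", "TH01", "TPOX", "CSF1PO"]

remain_1 = {locus: (12, 13) for locus in loci}   # placeholder alleles, not real data
remain_2 = dict(remain_1)
del remain_2["D16S539"]                          # D16 gives no result at all
remain_2["TH01"] = (12, 13.3)                    # small difference at TH01

agree, missing, conflict = compare_profiles(remain_1, remain_2, loci)
print(len(agree), missing, conflict)             # 11 ['D16S539'] ['TH01']
```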


There were 2 phases of preparing and measuring the samples at Bode. According to Holland, phase I didn’t yield enough STR profiles:
The percentage of successful results was low in relation to previous mass fatality incidents involving airline disasters. However, when this same process was applied to the analysis of skeletal remains from the American Airlines Flight 587 disaster that occurred on November 12, 2001, the success rate was in line with expected results (ie, greater than 92% of the skeletal remains produced results). This illustrated the quality aspects of the procedure and the degree of degradation that had occurred for the remains of the WTC victims.

It is clear that the protocols were adjusted to this specific situation; when scientific protocols are thrown out the window, we’re obviously not talking about science anymore:
During Phase I, an assessment was made of the success rate of obtaining useful STR profiles when employing the newly developed extraction method. In doing so, it was determined that further modifications would be necessary to increase the quality and quantity of DNA recovered from the more challenged remains. Thus, Phase II of this project involved the re-extraction of more than 5,300 of the original 13,000 bone samples.
(...)
In addition, two new STR multiplexes were developed specifically for this project, which reduced the amplicon size of the STR loci, and therefore, enhanced the ability to obtain results from the most challenged of samples.
(...)
After the controls and samples were checked and accepted, the NYSP would inform the OCME, and the STR data would be uploaded into the OCME’s master MFISys database (or other type of database) in order to make an identification either by direct agreement with a personal effect item or by performing kinship analysis.
One of the changes in phase II was that instead of 13 STR loci, the requirement was reduced to only 8 STR loci; 8 loci were in turn labelled a “full” profile. You cannot go from 13 to 8 loci and then still call 8 STR loci a “full” profile.
If we take the previous example, where 11 of 13 matching STR loci counted as a match, the same kind of logic would lead to a positive identification based on only 6 of the 8 loci. That is nowhere near 99% reliability.
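To put rough numbers on what dropping from 13 to 8 (or matching on only 6) loci does to the discriminating power, here is a standard product-rule sketch; the 0.1 per-locus match probability is my own round assumption, not a figure from the reports:

```python
# Product-rule sketch of how discriminating power drops with fewer loci.
# ASSUMPTION: a per-locus random-match probability of 0.1, an illustrative round
# figure that is not taken from any of the reports discussed here.
per_locus_match_prob = 0.1

def random_match_probability(n_loci):
    """Chance that two unrelated people share alleles at all n_loci loci,
    assuming independent loci with the probability above."""
    return per_locus_match_prob ** n_loci

print(random_match_probability(13))   # ~1e-13: effectively unique
print(random_match_probability(8))    # ~1e-08: 100,000 times less discriminating
print(random_match_probability(6))    # ~1e-06: roughly one in a million
```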
A high number of amplification cycles was used to get “better” results. Heavy amplification is always tricky, because it can result in peaks of “background noise”:
32 cycles of amplification were used (details of the amplification conditions and primer sequences are currently proprietary). As expected, a dramatic increase in sensitivity and ability to obtain results was observed. During the validation process, known DNA samples containing less than 100 pg of DNA were successfully amplified.
They used custom-built computer programs for this specific situation. Because computer programmers aren’t DNA experts, they wouldn’t even know that they were involved in a fraud.
Once profiles were obtained and analyzed, customized software developed for the WTC project by Bode was used to compare and combine the duplicate results for BodePlex 1 and BodePlex 2.
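The paper doesn’t show how that combining step worked, so the following is only my own sketch of one plausible rule for merging duplicate results from the two BodePlex runs; the function and the rule are assumptions, not Bode’s actual software:

```python
# My own illustration of combining duplicate typing results from two multiplexes
# into one profile: keep a locus where the two runs agree or where only one run
# gave a result, and drop loci where the runs disagree. This is NOT Bode's software.

def combine(run_1, run_2):
    combined = {}
    for locus in set(run_1) | set(run_2):
        a, b = run_1.get(locus), run_2.get(locus)
        if a and b and set(a) == set(b):
            combined[locus] = a          # concordant duplicate result
        elif a and not b:
            combined[locus] = a          # result from one run only
        elif b and not a:
            combined[locus] = b
        # discordant loci are silently dropped rather than guessed
    return combined

print(combine({"TH01": (6, 9), "FGA": (21, 24)},
              {"TH01": (6, 9), "FGA": (20, 24)}))
# {'TH01': (6, 9)}  (FGA dropped because the two runs disagree)
```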
They planned a follow-up phase III; maybe this involves calling 5 STR loci a “full profile”:
Phase III will involve the analysis of individual bone samples on a case by case basis, and may include the re-extraction of bone samples using more traditional methods, and the use of mtDNA analysis or other advanced techniques, such as single nucleotide polymorphisms (SNPs).
It looks like the phase I results are reliable, and possibly tables 3 and 4 in this paper are good; but tables 5 and 6 are unreliable for the simple fact that they are based on only 8 loci.
In phase I, a full profile could be made for only about 3,500 of the 12,849 skeletal remains (table 2).


The report describes a more efficient method to prepare the samples in phase II, using an EDTA solution. Maybe that is still “science”, but the reduction to only 8 loci is not.
The report doesn’t even specify what the difference is between tables 3 and 4, or between tables 5 and 6.


IDENTIFYING THE VICSIMS (CASH)
I’ve saved the best for last. H.D. Cash et al - DEVELOPMENT UNDER EXTREME CONDITIONS: FORENSIC BIOINFORMATICS IN THE WAKE OF THE WORLD TRADE CENTER DISASTER (2003): http://genecodesforensics.com/news/CashHoyleSutton.pdf
This report describes how custom-built software was designed so that the DNA investigation team could make positive identifications of vicsims at will. Changes were made to the software on an almost weekly basis. Of course this rapid change of functionality made it easier to achieve the desired result:
Functionality had to be added to the M-FISys program [pronounced like emphasis] on a very fast schedule without sacrificing software quality and testability. In fact, a new release of M-FISys has been delivered to the OCME almost every week since mid-December, 2001 (the 38th iteration is being released at the time of this writing).
It shows that the median number of human remains per vicsim was much lower than the average of 4, because for some victims 25 or even more than 100 individual samples matched the same person:
Even at this level of match stringency, the first task was to address the information glut of having 25, 50 or 100+ individual samples that had identical profiles because they were fragments of the same person.
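A toy illustration of the mean-versus-median point; the distribution below is invented, only the totals are tuned to roughly match the figures quoted at the top of this post:

```python
# Toy illustration of why the median can sit far below the mean of ~4 when a few
# victims account for 25, 50 or 100+ fragments. The distribution is invented.
from statistics import mean, median

remains_per_victim = [1] * 800 + [2] * 400 + [3] * 200 + [10] * 100 + [25] * 60 + [50] * 34

print(len(remains_per_victim))     # 1,594 identified victims
print(sum(remains_per_victim))     # 6,400: close to the 6,289 identified remains
print(mean(remains_per_victim))    # ~4.0: the quoted average
print(median(remains_per_victim))  # 1.0: far below the mean
```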
It shows that matching on partial profiles was too flexible. The flexibility was so high that other strategies had to be used to filter out erroneously matching DNA. The result was that forensic biologists could choose at will whether DNA matched:
For samples with low partial profiles (likelihoods in the 10³ – 10⁴ range) one might expect several possible direct matches. This is particularly true if one further allows for the possibility of allelic drop out (loss of one of the two alleles in a locus because of the damaged condition of the sample – allowing an experimentally homozygous 12 to match a 12/16 heterozygous reference). M-FISys helps a forensic scientist to resolve these ambiguous matches by a process we call iterative pruning. The operator can confirm or exclude possible matches for any ambiguous sample, annotating reasons along the way. In a hypothetical example, a right hand with degraded DNA can be excluded from a potential ID on the grounds that a full profile for a right hand has already been reported. As more information is accumulated, more matches can be excluded or confirmed. mtDNA or SNP data may help to confirm or refute a potential match.
(...)
Combining STR, mtDNA and SNP data is as much a human-computer interaction issue as it is a computational one. Keep in mind that the goal is not for the software to make identifications; as a matter of law, neither the developers nor the software have such authority. Rather it is to present data to a qualified and authorized forensic biologist to make it easier for that person to certify an identification. Instead of developing an interface that combined all STR, mtDNA and SNP data in a single view, M-FISys is broken up into STR-centric, mito-centric and SNP-centric views of the data, with indications that other data supports or contradicts a proposed identification.
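Here is a sketch of the drop-out-tolerant locus comparison described in that excerpt; the names and the function are my own, this is not the actual M-FISys code (which the paper does not publish):

```python
# Sketch of the drop-out-tolerant locus comparison described in the excerpt:
# a locus where only one allele was recovered is treated as compatible with a
# heterozygous reference containing that allele. Not the actual M-FISys code.

def locus_compatible(sample_alleles, reference_alleles):
    sample, reference = set(sample_alleles), set(reference_alleles)
    if sample == reference:
        return True                  # exact agreement
    if len(sample) == 1 and sample <= reference:
        return True                  # possible allelic drop-out: (12,) vs (12, 16)
    return False                     # genuine conflict

print(locus_compatible((12,), (12, 16)))     # True: drop-out allowed
print(locus_compatible((12, 13), (12, 16)))  # False: conflicting second allele
print(locus_compatible((12, 16), (12, 16)))  # True: exact match
```

Allowing drop-out in this way necessarily widens the set of references a degraded sample is “compatible” with, which is exactly the ambiguity the iterative pruning step then has to resolve.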
Besides STR (which can go up to 15 loci!), mtDNA analysis was also done. The mtDNA technology is really flexible: over 5% of white people share the same common profile. SNP analysis was also planned, and is maybe even more flexible:
The most widely used forms of DNA-based human identification involve Short Tandem Repeat [STR] analysis at 13-15 nuclear loci. A second procedure involves sequencing the hypervariable regions of the mitochondrial genome. A relatively new forensic procedure, pioneered by Orchid Biosciences, is based on Single Nucleotide Polymorphisms [SNPs] with well-characterized inheritance and frequency patterns. All of these techniques had to be combined in the M-FISys program.
(...)
mtDNA is abundant and very hardy material that can survive intact under conditions where nuclear DNA degrades. However, the variation in the hypervariable d-loop of the mitochondrial genome is not nearly as discriminating as a 13 locus STR profile. In fact, over 5% of the Caucasian population share the same, common mitotype.
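A one-line sketch of what that 5% figure means in practice; the pool size of 2,749 references is my own illustrative assumption, taken from the official victim count:

```python
# Sketch of why a common mitotype is weak evidence on its own.
# ASSUMPTION: a pool of one reference per listed victim; the 5% mitotype
# frequency comes from the excerpt above.
mitotype_frequency = 0.05
reference_pool = 2_749

print(mitotype_frequency * reference_pool)   # ~137 references expected to share
                                             # the common mitotype by coincidence
```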
DISCUSSION
If you’ve read this whole post, you could conclude that I don’t provide evidence that the DNA results were manipulated, but only argue that the results could have been manipulated. If that is your conclusion, you’re missing the point.
Research is only “scientific” if the results can be reproduced by an independent group. The 3 reports discussed show that protocols were invented for this specific situation, while the software was adjusted on a weekly basis to get the wanted results, so that the forensic biologists could choose at will which DNA samples matched which vicsim.
The state media even claim that they were able to identify DNA material of the hijackers on the planes. The FBI apparently performed some genuine magic to provide the medical examiners’ office with DNA material for the 10 hijackers…