Appendix S1: Expanded laboratory methods and bioinformatics workflow

Laboratory analysis

In the lab, fecal samples were removed from vials, separated from the silica gel beads, and weighed to the nearest mg before extraction. We extracted DNA using the UltraClean Soil DNA Isolation Kit (MoBio Laboratories) following the manufacturer's alternative protocol for maximum yield, with incubation times between 30 minutes and 24 hours; we modified the protocol slightly to accommodate overnight incubation at 4°C (McCracken et al. 2012). Previous studies of the diet of T. brasiliensis based on morphological examination of fecal pellets showed that at least 5 pellets were necessary to give a reasonable estimate of diet breadth (Whitaker et al. 1996). For this study, we combined 1 to approximately 25 pellets from an individual bat, up to 0.05 g, in each extraction. To control for contamination, we included a negative control, containing only extraction reagents, with each batch of extractions. Vials containing 50 μl of extracted DNA in elution buffer were stored at -20°C.

We amplified the samples by polymerase chain reaction (PCR) with the primers ZBJ-ArtF1c and ZBJ-ArtR2c (Zeale et al. 2011) modified for the Ion Torrent platform (Life Technologies). The primers consisted of a unique 10-base DNA sequence from the Ion Xpress Barcode list (multiplex identifier, or "MID") used to separate sequences for analysis, followed by a 3-base GAT adapter and either the forward or reverse primer sequence. Each sample received a unique combination of forward and reverse MIDs, allowing up to 96 samples to be pooled per run while still permitting the results to be computationally separated.
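The combinatorial indexing described above can be sketched as follows. The ZBJ primer sequences are as reported by Zeale et al. (2011); the MID sequences here are placeholders for illustration, not the actual Ion Xpress barcodes.

```python
# Sketch of the combinatorial MID scheme: each sample gets a unique
# (forward MID, reverse MID) pair, so F x R samples can be pooled.
from itertools import product

# Placeholder 10-base MIDs (the study used the Ion Xpress Barcode list).
forward_mids = ["CTAAGGTAAC", "TAAGGAGAAC", "AAGAGGATTC"]
reverse_mids = ["TACCAAGATC", "CAGAAGGAAC", "CTGCAAGTTC"]
ADAPTER = "GAT"  # 3-base adapter between MID and primer

# ZBJ primer sequences (Zeale et al. 2011); W is a degenerate base (A/T).
ZBJ_ArtF1c = "AGATATTGGAACWTTATATTTTATTTTTGG"
ZBJ_ArtR2c = "WACTAATCAATTWCCAAATCCTCC"

sample_index = {}
for i, (f, r) in enumerate(product(forward_mids, reverse_mids), start=1):
    fwd_oligo = f + ADAPTER + ZBJ_ArtF1c  # full tagged forward primer
    rev_oligo = r + ADAPTER + ZBJ_ArtR2c  # full tagged reverse primer
    sample_index[(f, r)] = f"sample_{i:02d}"

print(len(sample_index))  # 9 samples pooled from 3 forward x 3 reverse MIDs
```

With the full 96-barcode list, far more pairings are available than the 96 samples pooled per run.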

The PCR conditions were 25 μl reactions of 1X PCR Gold buffer, 2.5 mM MgCl2, 0.8 mM dNTP blend, 0.125 U AmpliTaq Gold (Applied Biosystems), 5 mg BSA (Sigma), 5 μM each primer (Integrated DNA Technologies), and 3 μl of fecal DNA. PCR cycling parameters were denaturation at 95°C for 10 min, followed by 40 cycles of 95°C for 30 sec, 52°C for 30 sec, and 72°C for 30 sec, and a final elongation step of 10 min at 72°C. A PCR blank, with water in place of DNA template, was included in every PCR. Amplification success was confirmed by running 5 μl of each sample on a 2% agarose gel (Sigma). PCR products were quantified on a fluorometer, and samples with similar concentrations were pooled. The pooled products were cleaned of unincorporated nucleotides with Agencourt AMPure XP beads (Beckman Coulter). The purified products were then prepared with the Ion Plus Fragment Library Kit (Life Technologies). After the adapter-ligation and nick-translation steps, the samples were size-selected on a Pippin Prep (Sage Science) to collect the product around 300 bp. The size-selected product was purified again with AMPure beads before continuing with the Ion Plus Fragment Library Kit to amplify and repurify the library. Final library concentrations were quantified on a Bioanalyzer (Agilent Technologies), and libraries were pooled to similar molarities, then loaded on a 318 chip for the Ion Torrent PGM (University of Tennessee Genomics Core) and processed using Torrent Suite v3.6.2. Samples were processed in three runs: the first contained 50 fecal collection dates, the second 44 different dates, and the third all dates; each run included a blank.

McCracken, G.F., Westbrook, J.K., Brown, V.A., Eldridge, M., Federico, P. & Kunz, T.H. (2012) Bats track and exploit changes in insect pest populations. PLoS ONE, 7(8). doi:10.1371/journal.pone.0043839

Whitaker, J.O., Neefus, C. & Kunz, T.H. (1996) Dietary variation in the Mexican free-tailed bat (Tadarida brasiliensis mexicana). Journal of Mammalogy, 77, 716-724.

Zeale, M.R.K., Butlin, R.K., Barker, G.L.A., Lees, D.C. & Jones, G. (2011) Taxon-specific PCR for DNA barcoding arthropod prey in bat faeces. Molecular Ecology Resources, 11, 236-244.

Bioinformatic workflow overview

Source code for bioinformatics workflow scripts is available in Appendix S2. These scripts were run in the following environment:

BioPerl version 1.006924 (bioperl.org)

USEARCH version v7.0.1090_i86osx32

UPARSE-related Python scripts (fasta_number.py and uc2otutab.py)

The workflow performs quality screening and then runs the UPARSE pipeline. It incorporates output data from three separate Ion Torrent runs and produces a single fasta file containing clustered OTU sequences, along with a spreadsheet mapping those sequences back to the original samples.

The workflow includes calls to ErrorScreen.pl and its accompanying file of subroutines, MIDSubs.pl, both provided in Appendix S2. This purpose-made script scans Ion Torrent output fastq files that have already been scrubbed of low-quality sequences. It assigns sequences to sample dates using MIDs (multiplex identifiers) and removes any sequence with:

  • Missing MIDs or an invalid combination of MIDs
  • A missing or invalid primer
  • A length (excluding MIDs and primers) < 150 bp

Primer sequences, MID sequences, and sample identifiers are encoded in MIDSubs.pl, which tolerates up to 10% mismatch in MIDs and primers.

The overall workflow is as follows:

# Initial screening of Ion Torrent output files using maxee 1.0

usearch -fastq_filter R_2013_08_23_14_58_58_user_PG1-61.fastq -fastq_minlen 150 \
  -fastq_maxee 1.0 -fastqout run1.fastq

usearch -fastq_filter R_2013_12_20_14_55_00_user_PG1-62.fastq -fastq_minlen 150 \
  -fastq_maxee 1.0 -fastqout run2.fastq

usearch -fastq_filter R_2014_02_25_14_39_38_user_PG1-63.fastq -fastq_minlen 150 \
  -fastq_maxee 1.0 -fastqout run3.fastq
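The -fastq_maxee 1.0 filter discards reads whose expected number of errors, summed from the per-base Phred quality scores, exceeds 1.0. A sketch of that calculation:

```python
# Expected errors of a read: sum of per-base error probabilities,
# where a Phred score Q corresponds to error probability 10^(-Q/10).

def expected_errors(quals):
    """Expected error count for a list of Phred quality scores."""
    return sum(10 ** (-q / 10) for q in quals)

# A 200 bp read at uniform Q30 (error prob 0.001 per base):
print(round(expected_errors([30] * 200), 2))  # 0.2  -> passes maxee 1.0

# The same length at uniform Q20 (error prob 0.01 per base):
print(round(expected_errors([20] * 200), 2))  # 2.0  -> filtered out
```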

# Now check for MIDs.
# First argument: the sequencing run to process (1-3).
# Second argument: the maxee level used in the quality screening above.

perl ErrorScreen7.pl 1 1

perl ErrorScreen7.pl 2 1

perl ErrorScreen7.pl 3 1

#Combine the valid fasta records from each run into a single file

cat screened1.fasta screened2.fasta screened3.fasta > combo.fasta

# Commands below are part of UPARSE pipeline

# Remove duplicate records

usearch -derep_fulllength combo.fasta -fastaout derep.fasta -sizeout

# Sort records in decreasing abundance

usearch -sortbysize derep.fasta -fastaout derep2.fasta -minsize 2

# Cluster similar records into OTUS

usearch -cluster_otus derep2.fasta -otus otus.fasta

# Assign numbers to OTUs

python fasta_number.py otuscomboplus.fasta OTU_ > otusncomboplus.fasta

# Map reads (including singletons) back to OTUs

usearch -usearch_global comboplus.fasta -db otusncomboplus.fasta \
  -strand plus -id 0.97 -uc readmapcp.uc

# Find the number of unresolved reads (singletons & chimeras)

grep -c "^N" readmapcp.uc

# Create OTU table

python uc2otutab.py readmapcp.uc > otu_tablecombonplus.txt
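The final step can be sketched as follows. This is not the actual uc2otutab.py code; it illustrates the idea, assuming hit records ("H") in the .uc file and the UPARSE convention of embedding the sample in the read label as "barcodelabel=<sample>;".

```python
# Sketch of turning a .uc read-mapping file into an OTU-by-sample count
# table: each "H" (hit) record contributes one read to its OTU/sample
# cell; "N" (no-hit) records are unmapped reads and are skipped.
from collections import defaultdict

def uc_to_otu_table(uc_lines):
    """Tally hits from .uc lines into {otu: {sample: count}}."""
    table = defaultdict(lambda: defaultdict(int))
    for line in uc_lines:
        fields = line.rstrip("\n").split("\t")
        if fields[0] != "H":  # keep only hit records
            continue
        label, otu = fields[8], fields[9]  # query label, target OTU
        sample = label.split("barcodelabel=")[1].split(";")[0]
        table[otu][sample] += 1
    return table

# Hypothetical .uc records for illustration:
uc = [
    "H\t0\t157\t97.5\t+\t0\t0\t=\tread1;barcodelabel=S01;\tOTU_1",
    "H\t0\t157\t98.1\t+\t0\t0\t=\tread2;barcodelabel=S01;\tOTU_1",
    "N\t*\t*\t*\t*\t*\t*\t*\tread3;barcodelabel=S02;\t*",
]
t = uc_to_otu_table(uc)
print(t["OTU_1"]["S01"])  # 2
```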