A Wear Evaluation of Ultra High Molecular Weight Polyethylene (UHMWPE) against Nanostructured Diamond-Coated Ti-6Al-4V Alloy

doi:10.22186/jyi.31.3.21-26


Abstract

Reducing the wear of joint replacements would extend the functional lifespan of the implant and improve outcomes for the patient. In this study, the wear of ultra-high molecular weight polyethylene (UHMWPE) against nanostructured diamond (NSD)-coated titanium alloy (Ti-6Al-4V) and uncoated cobalt-chromium alloy (CoCr) hemi-cylinders was compared to determine whether the NSD coating results in less volumetric wear of polyethylene (PE). A wear simulator was used with a gait cycle to mimic the knee joint, but with an axial force ranging from 30N to 700N during the cycle. Both tests ran for 1.5 million cycles while immersed in bovine serum. The roughness and volumetric wear of the NSD-coated alloy were greater than those of the non-coated control sample. No significant differences in the PE surface structure or hardness were observed for either wear couple, as measured using Raman spectroscopy, X-ray diffraction, and nanoindentation. Although the roughness of the PE worn by the NSD-coated alloy decreased faster than that of the control PE surface worn by CoCr, the as-deposited surface roughness of the NSD coating was about three to four times higher than that of the starting CoCr surface. These results suggest that the high initial roughness of the NSD coating caused much greater abrasive wear for the NSD-PE couple, accounting for its inferior wear performance and indicating that, in its present form, this combination would not increase the lifespan of a replacement joint.

Introduction

The number of total knee replacements in the US has doubled in the last ten years (Salomon et al., 2010; Weinstein et al., 2013). In addition, the average age of patients receiving a total knee replacement (TKR) is decreasing (Weinstein et al., 2013). It is becoming increasingly important to determine how to reduce wear and degradation of implanted joints in order to improve their long-term performance (Smith, Dieppe, Porter, & Blom, 2012). Most TKRs contain an ultra-high molecular weight polyethylene (UHMWPE) tibial component that articulates against a metallic alloy femoral component (Ritter, 2009). These pieces are often attached to the bone with bone cement but this method of fixation has the potential to create inflammatory debris and wear-induced periprosthetic osteolysis (Ritter, 2009).

It is essential that TKRs be made to last longer than they do now, especially given that the average human lifespan is projected to increase significantly over the next 100 years (Weinstein et al., 2013). In a 15-year survivorship study (Ranawat et al., 1993), implant survivorship was only 70.6% among patients who weighed more than 80kg. If implant wear can be decreased, implant longevity for people weighing over 80kg could potentially increase. Assuming humans continue getting TKRs around the age of 65 (Daniilidis & Tibesku, 2012), the knee replacement must last at least 20 years, ideally 30. To extend the lifespan of the knee replacement, the current focus has been on improving the UHMWPE spacer, which takes the place of the articular cartilage that allows for smooth movement of the femur and the tibia. It has been shown that polyethylene (PE) wear particles (Amstutz, Campbell, Kossovsky, & Clarke, 1992; Ritter, 2009; Teeter, Parikh, Taylor, Sprague, & Naudie, 2007) generated from the PE spacer cause osteolysis and possible loosening of the tibial plate (Willert, Bertram, & Buchhorn, 1990). The purpose of this study is to reduce this wear.

Third-body wear (the introduction of hard particles into the space between two articulating members) has been shown to increase both the roughness of the cobalt-chromium (CoCr) femoral component and PE wear (Davidson, 1993; Lawson, Catledge, & Vohra, 2005; Pierannunzii, Fischer, & D’Imporzano, 2008; Wang & Essner, 2001). A study by Wang and Essner (2001) showed that loose poly-methyl-methacrylate (PMMA) bone cement particles in the lubricant, in excess of 5g/L, adhere to CoCr femoral heads and lead to accelerated wear of the acetabular cups. In contrast, the attachment of PMMA particles to ceramic heads was much reduced, resulting in a UHMWPE wear rate that was independent of the concentration of the PMMA particles.

To minimize third-body and other mechanisms of wear from shortening the lifespan of a knee replacement, a nanostructured diamond (NSD) or amorphous carbon coating on the femoral component can be used (Amaral et al., 2007). The tribological benefits may be attributed to: (1) superior lubricating properties (more wettable, hence better able to maintain lubricant on the surface), (2) high hardness and (3) relative inertness of the material. These characteristics can provide a decrease in the coefficient of friction at the bearing surface, with less susceptibility to third-body wear and scratching, as well as less biological response to any debris generated by ceramic wear particles (Lawson et al., 2005; Zietz, Bergschmidt, Lange, Mittelmeier, & Bader, 2013). Recently, it has been shown that the lubricity of amorphous carbon arises from shear induced strain localization, which dominates the shearing process (Pierannunzii et al., 2008). This lubricity is characterized by covalent bond reorientation, phase transformation and structural ordering in a localized tribolayer region. A transition in movement from stick-slip friction to continuous slipping, with ultra-low friction, is observed due to gradual clustering and layering of graphitic sheets in the tribolayer. This enhanced lubricity potentially offered by a variety of carbon-containing coatings may reduce wear in total joint prostheses.

In a previous pin-on-disk study (Hill et al., 2008), we showed that a NSD-coated Ti-6Al-4V disk surface resulted in a factor of two less wear of the PE pin when compared to a non-coated CoCr disk. However, this study was not representative of the gait cycle experienced by the knee joint. In the present study, we compare the effects of wear of an NSD-coated titanium alloy hemi-cylinder to a non-coated CoCr control using a multi-axis wear simulator to more closely mimic the knee joint. We hypothesize that the NSD-coated hemi-cylinder will cause less wear compared with the non-coated hemi-cylinder due to more favorable lubricity and reduced friction conditions.

Methods

Wear testing was performed on an AMTI Force 5 machine (Advanced Mechanical Technology, Inc., Watertown, MA), which applied a cyclic vertical load ranging from 30N to 700N to the flat PE samples shown in Figure 1. The tests followed ISO (International Organization for Standardization) standard 14243-3 (ISO, 2014); however, the maximum axial force during the gait cycle was scaled down from 2600N (as stated in the standard) to 700N due to limitations of our load cell capacity. Flexion movement (rotation of the flexion arm around a horizontal axis), anterior-posterior movement, and rotation of the stage were incorporated into the cyclic waveform representative of the knee gait cycle, according to the ISO standard. Approximately 1.5 million cycles at 1Hz were performed on each PE sample using the uncoated CoCr and the NSD-coated hemi-cylinders. For the uncoated upper piece, the CoCr hemi-cylinder was attached to a rotating flexion arm using a thin layer of bone cement. For the NSD-coated piece, a titanium alloy hemi-cylinder of the same dimensions was used as the substrate for coating; this hemi-cylinder was attached to the flexion arm using two stainless steel bolts.


Figure 1. Experimental Setup. A) UHMWPE sample with wear from CoCr counterface, B) CoCr counterface, C) UHMWPE sample with wear from diamond coated counterface, D) diamond coated counterface, E) in situ experiment in bovine serum.

To determine the effect of the wear testing, four PE samples were measured: two controls (soaked for an equivalent time in bovine serum but not wear-tested) and two wear-test samples. The wear-test samples were imaged using atomic force microscopy (AFM) and weighed every 250 thousand cycles. All samples were kept completely submerged in a bovine serum mixture at 37°C for the duration of the 1.5 million cycles. The serum mixture (100mL of bovine serum, 300mL of deionized water, 0.2% w/v sodium azide (0.8g), and 20mM EDTA (2.98mL)) was replaced every 750 thousand cycles. To counteract evaporation, 100mL of deionized water was added to the machine reservoir every day. The control samples were kept in a sealed container without volume adjustment.

Weighing Samples

To obtain the weight of the wear-test samples, a cleaning procedure was followed based on ASTM (American Society for Testing and Materials) F732, “Standard Test Method for Wear Testing of Polymeric Materials Used in Total Joint Prostheses” (ASTM, 2011). The samples were retrieved from the machine and rinsed with water, and extraneous particles were removed with lens paper. Next, the samples were sonicated for 15 minutes in 100mL of tap water and 1mL of liquid cleaner. After rinsing in deionized water, the sonication procedure was repeated for an additional five minutes in 100mL of deionized water. Following sonication, the samples were soaked in a methanol bath for three minutes and then placed in a desiccator for 30 minutes. Finally, the samples were weighed and imaged to determine changes in mass and surface roughness.

Atomic Force Microscopy (AFM)

AFM imaging was performed to determine how the surface roughness of the PE samples changed over the 1.5 million cycles. Imaging was performed using close-contact mode at a scan rate of 0.45Hz with 256 points per line. Four AFM images were taken in various regions of each PE sample with scan areas of 10µm2 and 30µm2. Surface roughness values were obtained using Scanning Probe Image Processor (SPIP) 5.1.1 (Image Metrology A/S, Hørsholm, Denmark) and MS Excel. Optical microscope images of the wear samples were also taken at the start and end of the 1.5 million cycles. The wear-test and control samples were weighed at the same intervals to account for mass gain from serum absorption, and the overall change in roughness of the PE samples was recorded. AFM was also performed on the surface of the hemi-cylinders before and after wear.

Volumetric Wear

To calculate volumetric wear, the mass of the wear-test sample was corrected for serum absorption using the control sample measured at the same interval. The initial mass measurement (M0) was subtracted from each subsequent measurement (Mm) to obtain the mass change (Mc) for both the wear-test and control samples. The control sample's mass change, representing serum absorption, was then subtracted from the wear-test sample's mass change to give the true, absorption-corrected mass loss (Mt). The wear volume (mm3) was calculated from Mt using the density of UHMWPE (0.9363g/cm3).
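The correction and conversion can be sketched as follows (a minimal illustration of the soak-control approach; the function name, sign convention, and example masses are hypothetical, not taken from the study):

```python
# Correct the wear-test mass change for fluid absorption using the
# soak-control sample, then convert the true mass loss to volumetric
# wear. Mass loss is expressed here as a positive quantity.
DENSITY_UHMWPE = 0.9363e-3  # g/mm^3 (0.9363 g/cm^3)

def wear_volume_mm3(m0_test, mm_test, m0_ctrl, mm_ctrl):
    """Volumetric wear (mm^3) for one measurement interval.

    m0_*/mm_* are the initial and interval masses (g) of the
    wear-test and soak-control samples, respectively.
    """
    loss_test = m0_test - mm_test   # apparent mass loss of wear sample
    gain_ctrl = mm_ctrl - m0_ctrl   # mass gained by control (absorption)
    mt = loss_test + gain_ctrl      # true, absorption-corrected mass loss
    return mt / DENSITY_UHMWPE

# Example: 5 mg apparent loss and 1 mg absorption gain in the control
# imply 6 mg of true wear, or about 6.4 mm^3
volume = wear_volume_mm3(10.000, 9.995, 10.000, 10.001)
```

Adding back the control's absorption gain is equivalent to subtracting the control's signed mass change from the wear sample's, since serum uptake masks part of the true wear loss.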

Nanoindentation

Nanoindentation was performed on the PE samples using a Berkovich diamond tip (nominal radius 50nm) to a depth of 1µm. Indentation was done before and after wear testing to detect any changes in surface hardness or elastic modulus, potentially resulting from structural transformations.

X-ray Diffraction

X-ray diffraction (λ = 1.54154Å) was used on the PE sample surfaces to further examine potential effects of wear, such as phase transformations or texturing, from articulation against the NSD-coated and non-coated alloy.

Raman Spectroscopy

Laser Raman spectroscopy (λ = 514.5nm) was performed on the control and wear-test samples to evaluate possible structural transformations (such as disordering of carbon bonds), as measured in the wavenumber range from 1000 to 1800cm-1.

Results

In this study, we evaluated two samples: a control UHMWPE sample worn against CoCr (Sample 1) and a test UHMWPE sample worn against NSD-coated Ti-6Al-4V alloy (Sample 2). Nanoindentation performed on both samples before and after wear did not show a significant change in either hardness or Young’s modulus. Compared to the UHMWPE/CoCr couple, the NSD/UHMWPE couple produced higher volumetric wear. The volumetric wear of Sample 1 was 1.20mm3 after 250k cycles and reached 3.72mm3 at the end of the 1.5 million cycles (Fig. 2). By comparison, the wear of Sample 2 after 250k cycles was already 6.29mm3, almost twice the maximum wear volume of Sample 1. Note that the volumetric wear trends of the two samples differ: Sample 1's wear tapered off in the final measurement intervals, whereas Sample 2 showed a consistent amount of wear at each interval. A parabolic fit to Sample 1 gives an R2 value of 0.8476, while a linear fit to Sample 2 gives an R2 value of 0.9991. Overall, the volumetric wear of Sample 2 is higher and more consistent than that of Sample 1.


Figure 2. Volumetric Wear of Samples 1 and 2. Sample 1 (worn by CoCr): the starting wear is around 1.2mm3 and the ending wear is 3.7mm3; the R2 value of a parabolic fit is 0.8476, and the rate of wear decreases as the number of cycles increases. Sample 2 (worn by NSD-coated Ti alloy): the starting wear is around 6.3mm3 and the ending wear is 18mm3; the R2 value of a linear fit is 0.9991, and the rate of wear stays consistent as the number of cycles increases.
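The goodness-of-fit comparison can be reproduced with a short script. Only the 250k-cycle and 1.5-million-cycle wear volumes for Sample 2 come from the results above; the intermediate values are hypothetical interpolations used purely to illustrate the fitting procedure:

```python
import numpy as np

def r_squared(y, y_fit):
    """Coefficient of determination for a fitted curve."""
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# Cycle counts (millions) at each 250k-cycle measurement interval.
cycles = np.array([0.25, 0.50, 0.75, 1.00, 1.25, 1.50])
# Hypothetical near-linear wear volumes (mm^3) for Sample 2; only the
# endpoints (6.3 and 18.0) are taken from the reported results.
sample2 = np.array([6.3, 8.7, 11.0, 13.4, 15.7, 18.0])

coeffs = np.polyfit(cycles, sample2, 1)          # degree-1 (linear) fit
r2 = r_squared(sample2, np.polyval(coeffs, cycles))
```

A near-linear series like this yields an R2 very close to 1, consistent with the constant wear rate reported for Sample 2; a tapering series (like Sample 1's) fit by a parabola gives a lower R2 when the interval-to-interval scatter is larger.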

The initial average roughness of Sample 1 (PE worn by CoCr) was lower than that of Sample 2 (PE worn by the NSD-coated alloy) (Fig. 3). For an AFM scan area of 30µm2, Sample 1's initial roughness was 177.21 ± 42.23nm, while Sample 2 had an initial roughness of 260.64 ± 69.74nm. The final roughness values of Samples 1 and 2 were 27.32 ± 5.67nm and 11.13 ± 3.42nm, respectively; the overall roughness of Sample 1 decreased by 85% while that of Sample 2 decreased by 96%. For an AFM scan area of 10µm2, Sample 1 showed an 80% decrease in overall roughness, which is 8% lower than that of Sample 2. The roughness of both PE samples dropped by more than 90% within the first 500 thousand cycles (as measured from an AFM scan area of 30µm2). The outlier for Sample 1 at approximately 1 million cycles had an average value of 128.16nm with a relatively large standard deviation obtained from four separate measurements, the lowest of which was 51nm.
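As a quick check, the percentage decreases quoted above follow directly from the initial and final roughness values (a minimal sketch using the 30µm2 measurements reported here):

```python
def percent_decrease(initial_nm, final_nm):
    """Percentage drop from an initial to a final roughness value."""
    return 100.0 * (initial_nm - final_nm) / initial_nm

s1 = percent_decrease(177.21, 27.32)   # Sample 1, reported as 85%
s2 = percent_decrease(260.64, 11.13)   # Sample 2, reported as 96%
```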

Figure 3. Average Roughness of PE Samples 1 and 2. The starting roughness for both samples at scan areas of 10µm2 and 30µm2 is higher than the final roughness. The Sample 1 roughness value at approximately 1 million cycles is larger than expected; however, the associated error is very large as well. Four measurements per data point yield error bars of less than 1 standard deviation (SD). Sample 1: data from 250k (10µm2), 500k (30µm2), and 750k (30µm2) cycles had a roughness range greater than 1 SD but less than 2 SDs. Sample 2: data from 250k (30µm2) cycles had a roughness range greater than 1 SD.

No significant change in surface roughness was detected for either the uncoated CoCr or the NSD-coated hemi-cylinder before and after wear (Fig. 4). However, it should be noted that all roughness values for the CoCr hemi-cylinder are below 8nm, while those of the NSD-coated alloy are about a factor of three larger. Finally, Raman spectroscopy revealed that no significant change in PE carbon bonding occurred after wear (Fig. 5).


Figure 4. Average Roughness of CoCr and Titanium Alloy Hemi-cylinders. For both the non-coated CoCr and the diamond-coated titanium alloy, the average roughness generally increases with the area being imaged; both the “before wear” and “after wear” data show this trend. For the non-coated CoCr hemi-cylinder, the “before wear” error bars (10µm2 and 80µm2) ranged greater than 1 SD.



Figure 5. Raman Spectroscopy of PE Samples 1 and 2. Raman Spectroscopy peaks remained unchanged for both PE samples.

Discussion

With the number of people receiving total knee replacements growing, as well as an overall increase in lifespan, the need for a longer-lasting total knee replacement is becoming more urgent. Our results indicated that the wear inflicted on a PE sample by an NSD-coated titanium hemi-cylinder was greater than that from a non-coated CoCr hemi-cylinder. The UHMWPE/NSD couple did, however, leave a smoother final surface on the PE sample.

The volumetric wear and AFM roughness data both showed that more wear occurred for the UHMWPE/NSD couple than for the control UHMWPE/CoCr couple. Given that the starting surface roughness of the NSD coating was three to four times higher than that of the CoCr surface, this suggests that excessive abrasive wear occurred for the UHMWPE/NSD wear couple. Our result is the opposite of that from a previous pin-on-disk study (Hill et al., 2008), which found that less wear occurred for the UHMWPE/NSD couple than for a UHMWPE/CoCr couple. In that study, however, the NSD coating was smoother by a factor of five and the loads and displacements were much less clinically relevant. To test the potential benefits of an NSD-coated counterface against UHMWPE for a total joint replacement, future efforts will need to be directed toward decreasing the NSD surface roughness to a value at least as small as that of the starting bare alloy. Otherwise, abrasive wear may dominate the wear mechanisms.

Figure 2 shows that, if extrapolated, the curved trend in the volumetric wear of Sample 1 can be expected to continue increasing at a decreasing rate. The volumetric wear of Sample 2 shows no such trend; instead, a nearly linear trend is observed, with a high R2 value of 0.9991, and the wear would most likely continue to increase linearly beyond the 1.5 million cycles tested.

Based on the results of this study, the NSD-coated alloy, with its relatively high surface roughness compared to the CoCr alloy, would cause unacceptable wear of the PE inserted in an artificial joint, leading to a high probability of early implant failure. As shown in Figure 2, the volumetric wear of the control PE (Sample 1) appears to drop off near the end of the 1.5 million cycles. This trend is not seen with the test PE (Sample 2). Instead, the wear rate is consistent for the entire 1.5 million cycles. The average roughness for PE Sample 2 (Figure 3) shows that the roughness rapidly decreases within the first 500 thousand cycles and remains nearly unchanged after that. Therefore, we cannot assume that the mass loss is proportional to a decrease in surface roughness. Instead, it is more likely that abrasive wear caused by the NSD surface initially removed asperities from the UHMWPE (whose initial roughness was several hundred nanometers) and then continued to wear this surface at a constant rate. Since the NSD surface is a factor of three to four rougher than the CoCr, one would expect abrasive wear to be higher for this wear couple. For the UHMWPE/CoCr couple, the apparent plateau in wear may be explained by improved lubricity at the metal/polymer interface as the polymer surface roughness drops. Though the roughness changed greatly throughout the wear testing, both Raman spectroscopy and x-ray diffraction (XRD) showed no significant changes between the control and test PE samples.

An outlier in measured surface roughness for Sample 1 was observed at the 4th measurement interval (near 1 million cycles). At this point in the wear test, the roughness was not uniform across the contact surface; one of the less-smooth areas could have been chosen for these measurements rather than the typically smoother locations used at the other intervals. The four average roughness values obtained for this measurement had a very large range, and it is not clear why these measurements produced such a large spread. Because the alloy surfaces did not have the same roughness at the start, this difference could also have affected how much polyethylene was worn during the 1.5 million cycles. To maintain consistency, both alloy surfaces should be either polished or chosen to have the same roughness. Despite these limitations, the overall wear of polyethylene by the non-coated alloy was nearly a factor of five less than that by the NSD-coated alloy.

The NSD-coated hemi-cylinder produced nearly five times more PE wear than the non-coated CoCr hemi-cylinder, while the average PE roughness decreased much faster for Sample 2 (against the NSD counterface) than for Sample 1 (against the non-coated CoCr counterface). This suggests a more aggressive abrasive wear-in period at the beginning due to the higher surface roughness of the NSD coating. To determine the significance of the difference in wear rates, more PE samples should be tested; this will help ensure reproducibility as well as reduce random errors.

The surface of the NSD-coated hemi-cylinder itself showed no appreciable change in average roughness before and after wear. For both the control and test PE samples, no change in mechanical properties (hardness or elastic modulus as measured by nanoindentation) or in carbon bond structure (as measured by Raman spectroscopy) was detected. In the future, both XRD and nanoindentation should also be performed on the sample worn by the NSD-coated hemi-cylinder; these techniques could help detect changes in PE crystallinity as well as possible alignment of the PE fibers. Although these data suggest that a UHMWPE/NSD couple is inferior, the influence of starting surface roughness must be considered as a limiting factor. All counterface surfaces should have similarly low initial roughness values, achievable through polishing, to ensure reproducibility. We hope to achieve a similar surface roughness for both counterfaces by improving the NSD coating process: the coating needs reduced roughness and improved consistency over relatively large surface areas. Reducing the NSD coating's surface roughness should therefore be a primary goal for a better comparison to the control wear couple.

Although this study evaluated a UHMWPE/NSD couple, an alternative couple that could yield more promising results is NSD/NSD. In such a pairing, the NSD coating may be expected to minimize abrasion while acting as a barrier that prevents leaching of potentially toxic heavy metal ions from the substrate. This coating may be better suited to artificial hip joints, which already employ hard-on-hard bearings. For a ceramic-on-ceramic hip replacement, an NSD coating could perhaps further increase the longevity of implants (Bhatt & Goswami, 2008; Hill et al., 2008; Ranawat et al., 1993).

Acknowledgements

The authors greatly appreciate the support provided by the UAB Research Experiences for Undergraduates (REU) program funded by the National Science Foundation Grant DMR 1460392.

References 

Amaral, M., Dias, A., Gomes, P., Lopes, M., Silva, R., Santos, J., & Fernandes, M. (2007). Nanocrystalline Diamond: In Vitro Biocompatibility Assessment by MG63 and Human Bone Marrow Cells Cultures. Journal of Biomedical Materials Research, 87A(1), 91-99. http://dx.doi.org/10.1002/jbm.a.31742

Amstutz, H., Campbell, P., Kossovsky, N., & Clarke, I. (1992). Mechanism and Clinical Significance of Wear Debris-Induced Osteolysis. Clinical Orthopaedics and Related Research, 276, 7-18. http://journals.lww.com/corr/Abstract/1992/03000/Mechanism_and_Clinical_Significance_of_Wear.3.aspx

ASTM F732-00. Standard test method for wear testing of polymeric materials used in total joint prostheses. (2011). ASTM International, West Conshohocken, PA. www.astm.org

Bhatt, H., & Goswami, T. (2008). Implant wear mechanisms—basic approach. Biomedical Materials, 3(4), 042001. http://dx.doi.org/10.1088/1748-6041/3/4/042001

Daniilidis, K., & Tibesku, C. O. (2012). Frontal plane alignment after total knee arthroplasty using patient-specific instruments. International Orthopaedics, 37(1), 45-50. http://dx.doi.org/10.1007/s00264-012-1732-1

Davidson, J. (1993). Characteristics of Metal and Ceramic Total Hip Bearing Surfaces and Their Effect on Long-Term Ultra High Molecular Weight Polyethylene Wear. Clinical Orthopaedics and Related Research, 294, 361-378. Retrieved from http://journals.lww.com/corr/Abstract/1993/09000/Characteristics_of_Metal_and_Ceramic_Total_Hip.53.aspx

Hill, M. R., Catledge, S. A., Konovalov, V., Clem, W. C., Chowdhury, S. A., Etheridge, B. S., Stanishevsky, A., Lemons, J. E., Vohra, Y. K., & Eberhardt, A. W. (2008). Preliminary tribological evaluation of nanostructured diamond coatings against ultra-high molecular weight polyethylene. Journal of Biomedical Materials Research, 85B, 140-148. http://dx.doi.org/10.1002/jbm.b.30926

ISO 14243-3. Implants for surgery – wear of total knee-joint prostheses. (2014). International Organization for Standardization (ISO). www.iso.org

Lawson, T., Catledge, S., & Vohra, Y. (2005). Nanostructured Diamond Coated CoCrMo Alloys for Use in Biomedical Implants. Key Engineering Materials (KEM), 284-286, 1015-1018. http://dx.doi.org/10.4028/www.scientific.net/KEM.284-286.1015

Pierannunzii, L., Fischer, F., & D’Imporzano, M. (2008). Retroacetabular osteolytic lesions behind well-fixed prosthetic cups: Pilot study of bearings-retaining surgery. Journal of Orthopaedics and Traumatology, 9(4), 225-231. http://dx.doi.org/10.1007/s10195-008-0031-5

Ranawat, C. S., Flynn, W. F., Saddler, S., Hansraj, K. K., & Maynard, M. J. (1993). Long-term results of the total condylar knee arthroplasty. Clinical Orthopaedics and Related Research, 286, 94-102. http://dx.doi.org/10.1097/00003086-199301000-00015

Ritter, M. A. (2009). The anatomical graduated component total knee replacement: a long-term evaluation with 20-year survival analysis. The Bone & Joint Journal (British), 91-B(6), 745-749. http://dx.doi.org/10.1302/0301-620X.91B6.21854

Salomon, J. A., Wang, H., Freeman, M. K., Vos, T., Flaxman, A. D., Lopez, A. D., & Murray, C. J. (2010). Healthy life expectancy for 187 countries, 1990–2010: a systematic analysis for the Global Burden of Disease Study. The Lancet, 380(9859), 2144-2162. http://dx.doi.org/10.1016/S0140-6736(12)61690-0

Smith, A. J., Dieppe, P., Porter, M., & Blom, A. W. (2012). Risk of cancer in first seven years after metal-on-metal hip replacement compared with other bearings and general population: linkage study between the National Joint Registry of England and Wales and hospital episode statistics. British Medical Journal, 344(e2383). http://dx.doi.org/10.1136/bmj.e2383

Teeter, M., Parikh, A., Taylor, M., Sprague, J., & Naudie, D. (2007). Wear and Creep Behavior of Total Knee Implants Undergoing Wear Testing. The Journal of Arthroplasty, 15(1), 53-64. http://dx.doi.org/10.1016/j.arth.2014.08.001

Wang, A., & Essner, A. (2001). Three-body wear of UHMWPE acetabular cups by PMMA particles against CoCr, alumina and zirconia heads in a hip joint simulator. Wear, 250(1-12), 212-216. http://dx.doi.org/10.1016/S0043-1648(01)00643-3

Weinstein, A. M., Rome, B. N., Reichmann, W. M., Collins, J. E., Burbine, S. A., Thornhill, T. S., Wright, J., Katz, J. N., & Losina, E. (2013). Estimating the burden of total knee replacement in the United States. The Journal of Bone and Joint Surgery (American), 95(5), 385-392. http://dx.doi.org/10.2106/JBJS.L.00206

Willert, H., Bertram, H., & Buchhorn, H. G. (1990). Osteolysis in Alloarthroplasty of the Hip. Clinical Orthopaedics and Related Research, 258, 96-107. http://journals.lww.com/corr/Abstract/1990/09000/Osteolysis_in_Alloarthroplasty_of_the_Hip__The.14.aspx

Zietz, C., Bergschmidt, P., Lange, R., Mittelmeier, W., & Bader, R. (2013). Third-body Abrasive Wear of Tibial Polyethylene Inserts Combined with Metallic and Ceramic Femoral Components in a Knee Simulator Study. The International Journal of Artificial Organs, 36(1), 47-55. http://dx.doi.org/10.5301/ijao.5000189


AIRE Deficiency Exposes Inefficiencies of Peripheral Tolerance Leading to Variable APECED Phenotypes

doi:10.22186/jyi.31.3.15-20


Abstract

Autoimmune polyendocrinopathy-candidiasis-ectodermal dystrophy (APECED) is a rare, recessive disease caused by mutations in the autoimmune regulator (AIRE) gene. A loss of function at the AIRE locus is widely known to induce autoimmune activation against host tissues due to a lack of central tolerance during thymic T cell development. Failure to delete autoreactive T cell clones allows their release into the periphery, where they may proliferate and initiate an autoimmune response. While APECED is a monogenic disorder, disruption of AIRE function can have diverse implications: similar mutations in AIRE can lead to a myriad of phenotypes and symptoms. By investigating the multiple ways AIRE function can be compromised, recent research has uncovered the mechanisms explaining how AIRE is expressed in medullary thymic epithelial cells (mTECs), how AIRE transactivates tissue-specific antigens (TSAs), and how those TSAs are presented to T cells by both mTECs and bone marrow-derived antigen-presenting cells. However, the stochastic nature of APECED symptoms remains unexplained. Therefore, new approaches to APECED therapy should investigate the interplay of deterministic and random factors inherent in the relationship between central and peripheral tolerance.

Introduction

T cells provide capable, targeted defense against foreign antigens through their receptor specificity. The vast repertoire of T cell receptors allows the immune system to mount a response against most foreign invaders. Generation of receptor diversity is accomplished mainly through gene rearrangement at the alpha and beta chain loci.

Positive selection in the thymic cortex expands T cell clones whose receptors bind major histocompatibility complex (MHC)/self-peptide complexes with at least moderate affinity (De Martino et al., 2013). However, cells that pass positive selection may still have a strong affinity for self-peptides presented on MHC molecules. To prevent these autoreactive T cells from escaping the thymus into the periphery, T cell clones positively selected in the thymic cortex undergo negative selection in the thymic medulla. During negative selection, T cells are presented with medullary thymic epithelial cell (mTEC)-expressed tissue-specific antigens (TSAs) in the medulla (Derbinski, Schulte, Kyewski, & Klein, 2001; Kyewski & Derbinski, 2004). T cells that show strong affinity for these self-peptide/MHC complexes are deleted by activation-induced apoptosis. The deletion of autoreactive T cell clones through thymic-expressed TSAs is known as central tolerance.

The discrepancy between antigens expressed and presented by cortical thymic epithelial cells (cTECs) versus mTECs has been termed the alternate peptide hypothesis. This hypothesis can partially explain how autoreactive T cells survive positive selection in the cortex but fail to pass negative selection in the medulla (Marrack, McCormack, & Kappler, 1989). In order to express TSAs, mTECs must transactivate genes that are not normally expressed in the thymus through a process called promiscuous gene expression (PGE; De Martino et al., 2013; Kyewski & Derbinski, 2004; Laan & Peterson, 2013; Metzger & Anderson, 2011; Tykocinski, Sinemus, & Kyewski, 2008). PGE is dependent upon the transcription of DNA in chromatin states often associated with inhibited expression (Abramson, Giraud, Benoist, & Mathis, 2010; Tykocinski et al., 2010; Ucar & Rattay, 2015; Žumer, Saksela, & Peterlin, 2013). The autoimmune regulator (AIRE) protein expressed in mTECs is a transcription factor that facilitates this process.

Loss of AIRE function limits TSA tolerance, leading to organ-specific autoimmunity and autoantibody production (Kisand & Peterson, 2015; Laan & Peterson, 2013; Metzger & Anderson, 2011). Autoimmune polyendocrinopathy candidiasis ectodermal dystrophy (APECED) is the monogenic disorder caused by mutations at the AIRE locus. However, APECED may be considered a syndrome because symptoms can also stem from indirect disruptions of AIRE function (De Martino et al., 2013). While APECED cases may feature some similar symptoms such as mucocutaneous candidiasis, Addison’s disease, and hypoparathyroidism, AIRE’s role in maintaining central tolerance to most TSAs makes APECED patients susceptible to further autoimmune responses against a number of host tissues including the eyes, liver, pancreas, kidney, and sex organs (Kisand & Peterson, 2015; Kyewski & Derbinski, 2004). Therefore, even APECED patients with similar mutations at the AIRE locus may have dissimilar symptoms due to variation in specific self-antigen tolerance (De Martino et al., 2013).

Because disruption of AIRE function can occur in numerous ways, this review will discuss how disruptions in AIRE expression, PGE promotion, and TSA presentation can all instigate autoimmunity. Furthermore, this review will explain how the interplay between central and peripheral tolerance contributes to the variation seen in APECED phenotypes and symptoms.   

mTEC Development, Epigenetic Profile, and miRNA Govern AIRE Expression

Expression of AIRE is vital to expression of self-antigens in the thymus. AIRE is predominantly expressed in mTECs, although other cell types in the periphery and thymus have been shown to express AIRE at low levels (Derbinski et al., 2001; Metzger & Anderson, 2011). While mTEC lineage cells are primarily responsible for PGE, only 1-5% of mTECs express TSAs at a given time (Gallegos & Bevan, 2004). It is likely that factors unlinked to AIRE expression levels, such as changes in signaling states within the medullary microenvironment, alter PGE in AIRE+ mTECs. Nonetheless, because AIRE plays a direct role in PGE, disruption of AIRE expression leads to autoimmune phenotypes.

Problems with AIRE expression in mTECs may arise from impediments to mature mTEC development. Immature mTECs begin at an MHC II^low, CD80^low, AIRE- stage and mature to an MHC II^high, CD80^high, AIRE- stage and then an MHC II^high, CD80^high, AIRE+ stage, at which point AIRE-dependent and AIRE-independent antigens can be expressed (Metzger & Anderson, 2011). While this three-step maturation process represents the development needed for mTECs to activate PGE, mTECs have recently been shown to lose their AIRE+ phenotype during a fourth, and final, maturation stage. This loss of AIRE expression is coupled with a loss of TSA expression in these mature mTECs (Laan & Peterson, 2013; Yano et al., 2008). Therefore, while the paucity of TSA expression by the total mTEC population likely hinges on many factors, one possible contributor may be the finite timeframe in which AIRE is expressed by maturing mTECs.

While AIRE propels mTEC maturation, disruptions to early mTEC development stunt AIRE expression. AIRE-deficient mice produce malformed thymi, which illustrates the important role AIRE plays in thymic formation and mTEC development (Yano et al., 2008). Conversely, failure of immature mTECs to pass proper developmental stages can have a substantial impact on AIRE expression and PGE in the thymus. For example, Rossi et al. (2007) show that RANK signaling from CD4+, CD3- cells facilitates mTEC development and promotes AIRE+ phenotypes. Absence of RANK signaling was shown to incite autoimmunity (Rossi et al., 2007). Later experiments in vivo uncovered that RANK signaling regulates AIRE function by promoting its accumulation within chromatin-associated nuclear bodies (Ferguson et al., 2008).

Failure to express AIRE may also stem from improper epigenetic markers at the AIRE locus. One epigenetic marker that influences gene expression is DNA methylation. Hypermethylation of DNA can occur in contiguous regions, such as CpG islands, and is associated with low expression rates. Bisulfite sequencing of CpG islands near the AIRE promoter revealed hypomethylation surrounding the AIRE promoter in AIRE+ mTECs. However, these hypomethylation markers were also found in AIRE- immature (MHC II^low) mTECs and cTECs, illustrating that AIRE promoter methylation likely has little effect on AIRE expression (Kont et al., 2011; Ucar & Rattay, 2015).

Alterations in the packing and chemical modification of chromatin can also dictate levels of gene expression. DNA is packed into nucleosomes, which contain DNA looped around octamers of histone proteins, similar to beads on a string. Tightly packed chromatin (heterochromatin) can sterically hinder transcriptional machinery from accessing promoters and other DNA sequences, making heterochromatin states unreceptive to gene expression. Conversely, loosely packed chromatin (euchromatin) is permissive of transcriptional machinery and gene expression. Furthermore, methylation of lysine residues on individual histone proteins can also promote or repress transcription. Histone profiling at the AIRE promoter in AIRE+ mTECs showed increased amounts of transcriptionally active histone marks (H3K4me3) and lower amounts of repressive histone marks (H3K27me3) than other cell types (Kont et al., 2011). These epigenetic patterns illustrate that AIRE expression is correlated with histone modifications at the promoter region. Failure to properly mark specific histone residues at the AIRE promoter may cause epigenetic silencing of AIRE, leading to decreased PGE and autoimmune phenotypes.

Expression of AIRE may be further regulated by miRNA interactions. Research by Ucar, Tykocinski, Dooley, Liston, and Kyewski (2013) revealed that miRNAs are tightly regulated in developing mTECs. In addition, mice lacking Dicer function showed loss of AIRE expression and reduced PGE, which demonstrates that miRNA regulates AIRE expression and function (Ucar et al., 2013). Therefore, loss of AIRE function may stem from an inability of miRNA to regulate AIRE activity.

Disruptions of AIRE or Other Promiscuous Gene Expression Mediators Yield Autoimmunity

AIRE contributes to central tolerance by enabling the expression of self-antigens within mTECs through PGE. Promoting TSA expression is a complex process, and AIRE is able to facilitate PGE through its unique protein domains, which allow for subcellular localization and interaction with other proteins that assist in the transcription and processing of TSAs (Abramson et al., 2010; De Martino et al., 2013; Gallo et al., 2013; Ramsey, Bukrinsky, & Peltonen, 2002). Taken together, mutations in the AIRE locus compromise the function of AIRE protein domains and lead to nonfunctional PGE.

In order to facilitate the transcription of TSAs not canonically expressed in the thymus, AIRE must localize to genes that are epigenetically repressed. AIRE protein domains allow it to access repressive chromatin states and transactivate TSA expression. For example, a dominant missense mutation in the SAND domain inhibited PGE in heterozygous mice by impeding localization of AIRE proteins encoded by both alleles to nuclear bodies. This mutation was sufficient to prompt an autoimmune phenotype (Su et al., 2008). Mutations in the CARD domain limited AIRE homodimerization and nuclear localization in vitro (Ferguson et al., 2008; Metzger & Anderson, 2011). Mutations eliminating the AIRE C-terminus barred TSA expression by preventing AIRE from interacting with positive transcription elongation factor b (P-TEFb; Žumer, Plemenitaš, Saksela, & Peterlin, 2011). Synthetic mutations in the PHD domain revealed that the BHC80 region of AIRE’s PHD1 domain is vital for localization to nucleosomes. The PHD1 domain is a protein-binding zinc finger that can bind hypomethylated H3K4, a traditionally repressive histone mark, in order to allow transcription within regions of heterochromatin (Anderson & Su, 2016). While AIRE binding of hypomethylated histone H3 tails was necessary for PGE, overexpression of H3K4-demethylase did not increase PGE, indicating that AIRE targets other epigenetic modifications as well (Koh, Kingston, Benoist, & Mathis, 2010). This hypothesis was supported by Waterfield et al. (2014), who used a screening approach to demonstrate that AIRE interacts with MBD1 via its SAND domain. MBD1 is able to bind methylated CpG dinucleotides, which allows AIRE to localize to genes located within hypermethylated CpG islands (Waterfield et al., 2014).

Subcellular localization of AIRE to epigenetically-repressed sites via its protein domains is necessary for AIRE to facilitate the transcription of TSA genes. However, further protein-protein interactions also contribute to TSA transactivation. Because of AIRE’s integral role in facilitating a process that breaks conventional guidelines of gene regulation, it may be assumed that AIRE acts as a “pioneer protein,” which recruits RNA Polymerase II to TSA loci amidst a jumble of heterochromatin and other repressive epigenetic marks. However, Giraud et al. (2012) showed that the absence of AIRE did not inhibit expression of the first exon in AIRE-targeted genes. This illustrates that AIRE is not necessary for RNA Polymerase II to access epigenetically-repressed loci. Instead, RNA Polymerase II can be recruited to these sites by DNA-Dependent Protein Kinase (DNA-PK) in response to double-stranded breaks caused by Topoisomerase II activity. RNA Polymerase II is then able to begin transcription of the first exon, but elongation is halted by negative elongation factors. AIRE also interacts with DNA-PK, which allows it to co-localize with RNA Polymerase II. After co-localization, AIRE’s interaction with P-TEFb prompts RNA Polymerase II phosphorylation and transcriptional elongation (Žumer et al., 2013). Therefore, instead of initiating transcription at TSA loci, AIRE promotes TSA transcription by unleashing RNA Polymerase II to transcribe downstream exons (Giraud et al., 2012). AIRE localization to double-stranded break repair sites via DNA-PK provides a viable explanation of how AIRE accesses epigenetically-repressed TSA loci. However, as illustrated above, mutations to multiple AIRE protein domains have also been shown to inhibit subcellular localization and provoke autoimmune phenotypes. Further research will need to define whether these various methods of TSA localization work in tandem or in isolation to induce PGE.

AIRE also regulates TSA output via post-transcriptional mRNA splicing (Kyewski & Derbinski, 2004; Žumer et al., 2011). mTECs exhibit the greatest number of alternatively spliced isoforms of any cell type examined (Keane, Ceredig, & Seoighe, 2015). AIRE is thought to recruit splicing machinery in multiple ways. For example, the splice factor snRNP is known to localize to nuclear bodies (Sleeman & Lamond, 1999), where AIRE is also recruited via its SAND domain (Ramsey et al., 2002). Furthermore, Žumer et al. (2011) showed that the snRNP subunit U5 was recruited by AIRE to the 3’ end of TSA transcripts. Therefore, AIRE utilizes co-localization with RNA Polymerase II to promote mRNA splicing (Žumer et al., 2011). AIRE is thought to promote mRNA splicing in order to tolerize autoreactive T cells specific to particular TSA isoforms, thus increasing the breadth of clonal deletion in the thymus (Keane et al., 2015; Kyewski & Derbinski, 2004). Therefore, disruption of mRNA splicing mechanics may limit the breadth of PGE, leading to autoimmunity against specific self-peptide isoforms.

mTECs and Thymic Dendritic Cells Present Tissue-Specific Antigens to Induce Tolerance

While TSA expression in mTECs is necessary for negative selection, central tolerance can be accomplished only if those TSAs are presented to T cells via MHC molecules. Therefore, failure to regulate TSA presentation on thymic cell types may undermine AIRE function and cause autoimmune phenotypes. Although mTECs have the proper surface molecules to initiate activation-induced apoptosis in both CD4+ and CD8+ T cells (Laan & Peterson, 2013), mTECs share these presenting responsibilities with thymic dendritic cells. These dendritic cells can be recruited to the thymus through XC-chemokine ligand 1 (XCL1), a protein that is expressed by AIRE+ mTECs (Anderson & Su, 2016). It should be noted that thymic dendritic cells do not express AIRE and do not perform PGE (Derbinski et al., 2001). Therefore, mTECs serve as TSA reservoirs, and can selectively pass off PGE products to thymic dendritic cells for presentation (Gallegos & Bevan, 2004; Hubert et al., 2011; Metzger & Anderson, 2011).

By regulating thymic and bone marrow expression of ovalbumin (OVA) peptide and MHC I/II, respectively, in mouse models, researchers have investigated whether mTECs are self-sufficient at inducing deletion of autoreactive CD4+ and/or CD8+ T cells through TSA presentation. TSAs produced by mTECs are intracellular proteins, and should therefore be canonically presented by MHC I to CD8+ T cells; mTEC presentation to CD4+ T cells would require cross-presentation of intracellular TSAs on MHC II. Gallegos and Bevan hypothesized that because mTECs were insufficient at antigen presentation, thymic dendritic cells were responsible for presentation to CD4+ and CD8+ T cells. Their results indicated that mTECs self-sufficiently induced CD8+ T cell tolerance to mOVA, but bone marrow-derived thymic dendritic cells were necessary for tolerance of mOVA-specific CD4+ T cells (Gallegos & Bevan, 2004). However, recent evidence has qualified those findings, asserting that mTECs are responsible for some TSA presentation via MHC II, but induction of CD4+ OVA tolerance is greatly diminished in mice with MHC II-deficient bone marrow (Hubert et al., 2011).

While inducing thymic expression of OVA through knock-in experiments spotlights the presentation responsibilities shared between mTECs and thymic dendritic cells for one non-self peptide, how presentation of numerous, specific TSAs is delineated between mTECs and thymic dendritic cells for comprehensive tolerance induction remains unknown. Research by Zhang et al. (2003) showed that soluble hen egg lysozyme (HEL) expression in the thymus produced more efficient negative selection of CD4+ thymocytes than membrane-bound HEL, suggesting mTEC secretion of peptides to thymic dendritic cells is important for tolerance induction. However, because autoreactive T cells are prone to interact with membrane-bound molecules on the surface of tissues, uncovering how shared presentation responsibilities ensure full tolerance to all self-peptides is an important step toward understanding negative selection mechanics.

Peripheral Tolerance Drives Variability in APECED Symptomatology

The disparities found in APECED symptomatology stem from the limited power of peripheral tolerance. Mechanisms of peripheral tolerance inactivate autoreactive lymphocytes that have escaped central tolerance during T lymphocyte development. For example, immature dendritic cells in the periphery are responsible for induction of tolerance to self-antigens under steady-state conditions (Hawiger et al., 2001; Mueller, 2010). Dendritic cells in both lymph nodes and the spleen can process, load, and present self-antigens from the periphery to T cells. Thus, expression of certain antigens in the periphery is sufficient to induce tolerance of those antigens (Derbinski et al., 2001). Additionally, certain dendritic cells express limited amounts of AIRE. These extra-thymic AIRE-expressing cells (eTACs) may provide additional tolerance in basal conditions by presenting AIRE-dependent self-antigens in the periphery (Metzger & Anderson, 2011; Mueller, 2010). eTACs lack the costimulatory molecules CD80/86, which may induce anergy in T cells that recognize eTAC-presented peptides (Metzger & Anderson, 2011). However, because eTACs depend on AIRE, their tolerogenic contribution is likely minimal in APECED patients.

Peripheral tolerance is also enforced by regulatory T cells (CD4+, FOXP3+, CD25+), which induce anergy in helper and cytotoxic T cells through direct interaction, release of anti-inflammatory signals, and consumption of cytokines that potentiate T cell activation and proliferation. T cells in the thymus may be pushed to the thymic regulatory T cell lineage if they bind MHC/self-peptide complexes with strong affinity during negative selection (Jordan et al., 2001). Induced regulatory T cells, by contrast, undergo lineage commitment in the periphery through receptor activation and epigenetic change at the FOXP3 locus (Ohkura et al., 2012). Because AIRE deficiency hinders negative selection through dysfunctional PGE, APECED patients possess limited regulatory T cell populations, likely due to the inability to facilitate thymic regulatory T cell lineage commitment (Kekäläinen et al., 2007; Perry et al., 2014). However, induced regulatory T cells may play a role in muffling the autoimmune response in some tissues.

Variability in APECED phenotypes arises from the limited ability of peripheral tolerance mechanisms to silence autoimmunity in host tissues. The stochastic nature of T cell receptor gene rearrangement produces a diverse pool of potential autoreactive T cells in the periphery of AIRE-deficient individuals (Kisand & Peterson, 2015). Peripheral tolerance serves to filter out those autoreactive T cell responses, but because the peripheral filter is imperfect, the list of specific autoreactive T cell clones left unconstrained is unpredictable (Figure 1). For example, the self-peptides available for dendritic cells to take up and present may depend on random circumstance, leaving the peripheral tolerance of specific tissues up to chance. Furthermore, variability in the activation and recruitment of specific induced regulatory T cell clones further confounds which autoreactive helper and cytotoxic T cells will cause host tissue damage. Still other factors, such as the amount of costimulatory molecules and activation-inducing cytokines present in a given tissue, play further roles (Klein & Kyewski, 2000). Therefore, while dysfunctional PGE in the thymus is sufficient to promote a myriad of autoreactive T cells in the periphery of APECED patients, the variable phenotypes associated with the disorder result from the stochastic mechanisms of peripheral tolerance used to neutralize autoreactive activity.

Figure 1

Figure 1. The relationship between central and peripheral tolerance determines the profile of autoreactive T cells in the periphery. Positive selection expands T cell clones that garner a signal from MHC/self-peptide complexes. Negative selection filters out autoreactive T cell clones that bind with high affinity to MHC/tissue-specific antigen complexes. Inability to perform negative selection permits autoreactive T cell clones into the periphery. Peripheral tolerance suppresses a limited number of autoreactive T cell responses and, in cases of APECED, dictates the specific autoimmune symptoms of the patient.
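The two-stage filtering described above can be made concrete with a toy simulation. This is a minimal illustrative sketch, not a biological model: the affinity thresholds, suppression probability, clone count, and tissue list are all arbitrary assumptions chosen only to show why genetically identical AIRE-deficient individuals can end up with different sets of unconstrained autoreactive clones.

```python
import random

def simulate_tolerance(n_clones=200, aire_functional=True, seed=None):
    """Toy two-stage model of T cell tolerance (illustrative numbers only).

    Each clone receives a random affinity for a random self-tissue.
    Positive selection keeps clones with at least moderate affinity.
    Negative selection deletes high-affinity clones, but only when
    AIRE-driven PGE exposes the relevant tissue-specific antigens.
    Peripheral tolerance then suppresses each escaped autoreactive
    clone with high probability, so the surviving set is stochastic.
    Returns the set of tissues left vulnerable to autoimmune attack.
    """
    rng = random.Random(seed)
    tissues = ["eye", "liver", "pancreas", "kidney", "gonad"]
    escaped = []
    for _ in range(n_clones):
        tissue = rng.choice(tissues)
        affinity = rng.random()  # affinity for a self-peptide/MHC complex
        if affinity < 0.5:
            continue  # fails positive selection (signal too weak)
        if affinity > 0.8:
            if aire_functional:
                continue  # deleted by negative selection (PGE intact)
            escaped.append(tissue)  # autoreactive clone reaches the periphery
    # peripheral tolerance suppresses each escaped clone 90% of the time
    return {t for t in escaped if rng.random() > 0.9}

# Two "APECED patients" with identical genetics but different stochastic
# peripheral filtering, versus an AIRE-sufficient control:
print(simulate_tolerance(aire_functional=False, seed=1))
print(simulate_tolerance(aire_functional=False, seed=2))
print(simulate_tolerance(aire_functional=True, seed=1))  # set(): no escapees
```

Running the AIRE-deficient condition with different seeds typically yields different tissue sets, mirroring the unpredictable symptom profiles of APECED, while the AIRE-sufficient condition always returns an empty set because every high-affinity clone is deleted centrally.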

Conclusions and Future Directions

Because of AIRE’s central role in facilitating PGE, lack of central tolerance is intrinsic to every APECED phenotype (Figure 1). However, current treatment options do not remedy issues with central tolerance. Instead, treatments of APECED focus on maintaining tissue function and suppressing immune system responses through anti-inflammatory drugs. These treatments are often ineffective in limiting the autoimmune responses (Kisand & Peterson, 2015). While successful restoration of central tolerance in APECED patients would cure their symptoms, the complexity of the mechanisms involved in AIRE expression, PGE, and TSA presentation poses significant obstacles to targeting central tolerance therapeutically. Therefore, treatments for APECED patients could instead utilize the suppressive mechanisms of peripheral tolerance.

Peripheral tolerance is an effective suppressor of autoimmune responses. Despite the diversity of autoreactive T cells in the periphery of APECED patients, typical patients experience autoimmune responses to only a limited number of tissues (Figure 1; Kisand & Peterson, 2015). This is because peripheral tolerance is responsible for suppressing the activation, proliferation, and activity of autoreactive T cells. As such, an autoimmune response to any self-antigen can be thought of as a failure of peripheral tolerance to protect that antigen from immune targeting.

Identifying the autoimmune responses in each APECED patient inherently identifies the limits of peripheral tolerance in that individual. Therefore, new therapeutic efforts for APECED could address breaches in peripheral tolerance in a symptom-specific manner: patients would receive therapy that would induce peripheral tolerance to the tissues under attack. This treatment might be accomplished by introducing the self-antigens of interest to secondary lymphoid organs, where immature dendritic cells may tolerize peripheral T cells specific to those antigens; a 2012 study showed that injecting mice with microparticles decorated with a specific antigen induced long-term tolerance of T cells specific to that antigen (Getts et al., 2012). Additionally, transplanting tissue-specific regulatory T cells into the periphery may promote anergy to a given tissue. Many regulatory T cell-based therapeutic studies are currently in clinical trials, and future studies may utilize specific MHC/peptide combinations to isolate and expand antigen-specific regulatory T cells (Khor, 2016). While these methods of inducing tissue-specific peripheral tolerance are far from developed, they provide the potential to overcome the variability associated with both the causes and symptoms of APECED.

References

Abramson, J., Giraud, M., Benoist, C., & Mathis, D. (2010). Aire’s Partners in the Molecular Control of Immunological Tolerance. Cell, 140(1), 123–135. doi:10.1016/j.cell.2009.12.030

Anderson, M. S., & Su, M. A. (2016). AIRE expands: new roles in immune tolerance and beyond. Nature Reviews Immunology, 16(4), 247–258. doi:10.1038/nri.2016.9

De Martino, L., Capalbo, D., Improda, N., D’Elia, F., Di Mase, R., D’Assante, R., . . . Salerno, M. (2013). APECED: A Paradigm of Complex Interactions between Genetic Background and Susceptibility Factors. Frontiers in Immunology, 4. doi:10.3389/fimmu.2013.00331

Derbinski, J., Schulte, A., Kyewski, B., & Klein, L. (2001). Promiscuous gene expression in medullary thymic epithelial cells mirrors the peripheral self. Nature Immunology, 2(11), 1032.

Ferguson, B. J., Alexander, C., Rossi, S. W., Liiv, I., Rebane, A., Worth, C. L., . . .Rich, T. (2008). AIRE’s CARD Revealed, a New Structure for Central Tolerance Provokes Transcriptional Plasticity. Journal of Biological Chemistry, 283(3), 1723–1731. doi:10.1074/jbc.M707211200

Gallegos, A. M., & Bevan, M. J. (2004). Central Tolerance to Tissue-specific Antigens Mediated by Direct and Indirect Antigen Presentation. The Journal of Experimental Medicine, 200(8), 1039–1049. doi:10.1084/jem.20041457

Gallo, V., Giardino, G., Capalbo, D., Palamaro, L., Romano, R., Santamaria, F., . . . Pignata, C. (2013). Alterations of the autoimmune regulator transcription factor and failure of central tolerance: APECED as a model. Expert Review of Clinical Immunology, 9(1), 43–51. doi:10.1586/eci.12.88

Getts, D. R., Martin, A. J., McCarthy, D. P., Terry, R. L., Hunter, Z. N., Yap, W. T., . . . Miller, S. D. (2012). Microparticles bearing encephalitogenic peptides induce T-cell tolerance and ameliorate experimental autoimmune encephalomyelitis. Nature Biotechnology, 30(12), 1217–1224. doi:10.1038/nbt.2434

Giraud, M., Yoshida, H., Abramson, J., Rahl, P. B., Young, R. A., Mathis, D., & Benoist, C. (2012). Aire unleashes stalled RNA polymerase to induce ectopic gene expression in thymic epithelial cells. Proceedings of the National Academy of Sciences, 109(2), 535–540. doi:10.1073/pnas.1119351109

Hawiger, D., Inaba, K., Dorsett, Y., Guo, M., Mahnke, K., Rivera, M., . . . Nussenzweig, M. C. (2001). Dendritic Cells Induce Peripheral T Cell Unresponsiveness under Steady State Conditions in Vivo. The Journal of Experimental Medicine, 194(6), 769–780. doi:10.1084/jem.194.6.769

Hubert, F.-X., Kinkel, S. A., Davey, G. M., Phipson, B., Mueller, S. N., Liston, A., . . . Heath, W. R. (2011). Aire regulates the transfer of antigen from mTECs to dendritic cells for induction of thymic tolerance. Blood, 118(9), 2462–2472. doi:10.1182/blood-2010-06-286393

Jordan, M. S., Boesteanu, A., Reed, A. J., Petrone, A. L., Holenbeck, A. E., Lerman, M. A., . . . Caton, A. J. (2001). Thymic selection of CD4+CD25+ regulatory T cells induced by an agonist self-peptide. Nature Immunology, 2(4), 301.

Keane, P., Ceredig, R., & Seoighe, C. (2015). Promiscuous mRNA splicing under the control of AIRE in medullary thymic epithelial cells. Bioinformatics, 31(7), 986–990. doi:10.1093/bioinformatics/btu785

Kekäläinen, E., Tuovinen, H., Joensuu, J., Gylling, M., Franssila, R., Pöntynen, N., . . . Arstila, T. P. (2007). A Defect of Regulatory T Cells in Patients with Autoimmune Polyendocrinopathy-Candidiasis-Ectodermal Dystrophy. The Journal of Immunology, 178(2), 1208–1215. doi:10.4049/jimmunol.178.2.1208

Khor, B. (2016). Regulatory T cells: Central concepts from ontogeny to therapy. Transfusion Medicine Reviews, 0(0). doi:10.1016/j.tmrv.2016.07.003

Kisand, K., & Peterson, P. (2015). Autoimmune Polyendocrinopathy Candidiasis Ectodermal Dystrophy. Journal of Clinical Immunology, 35(5), 463–478. doi:10.1007/s10875-015-0176-y

Klein, L., & Kyewski, B. (2000). Promiscuous expression of tissue antigens in the thymus: a key to T-cell tolerance and autoimmunity? Journal of Molecular Medicine, 78(9), 483–494.

Koh, A. S., Kingston, R. E., Benoist, C., & Mathis, D. (2010). Global relevance of Aire binding to hypomethylated lysine-4 of histone-3. Proceedings of the National Academy of Sciences, 107(29), 13016–13021. doi:10.1073/pnas.1004436107

Kont, V., Murumägi, A., Tykocinski, L.-O., Kinkel, S. A., Webster, K. E., Kisand, K., . . . Peterson, P. (2011). DNA methylation signatures of the AIRE promoter in thymic epithelial cells, thymomas and normal tissues. Molecular Immunology, 49(3), 518–526.

Kyewski, B., & Derbinski, J. (2004). Self-representation in the thymus: an extended view. Nature Reviews Immunology, 4(9), 688–698. doi:10.1038/nri1436

Laan, M., & Peterson, P. (2013). The Many Faces of Aire in Central Tolerance. Frontiers in Immunology, 4. doi:10.3389/fimmu.2013.00326

Marrack, P., McCormack, J., & Kappler, J. (1989). Presentation of antigen, foreign major histocompatibility complex proteins and self by thymus cortical epithelium. Nature, 338(6215), 503–505. doi:10.1038/338503a0

Metzger, T. C., & Anderson, M. S. (2011). Control of central and peripheral tolerance by Aire. Immunological Reviews, 241(1), 89–103. doi:10.1111/j.1600-065X.2011.01008.x

Mueller, D. L. (2010). Mechanisms maintaining peripheral tolerance. Nature Immunology, 11(1), 21–27. doi:10.1038/ni.1817

Ohkura, N., Hamaguchi, M., Morikawa, H., Sugimura, K., Tanaka, A., Ito, Y., . . . Sakaguchi, S. (2012). T Cell Receptor Stimulation-Induced Epigenetic Changes and Foxp3 Expression Are Independent and Complementary Events Required for Treg Cell Development. Immunity, 37(5), 785–799. doi:10.1016/j.immuni.2012.09.010

Perry, J. S. A., Lio, C.-W. J., Kau, A. L., Nutsch, K., Yang, Z., Gordon, J. I., . . . Hsieh, C.-S. (2014). Distinct contributions of Aire and antigen presenting cell subsets to the generation of self-tolerance in the thymus. Immunity, 41(3), 414–426. doi:10.1016/j.immuni.2014.08.007

Ramsey, C., Bukrinsky, A., & Peltonen, L. (2002). Systematic mutagenesis of the functional domains of AIRE reveals their role in intracellular targeting. Human Molecular Genetics, 11(26), 3299–3308. doi:10.1093/hmg/11.26.3299

Rossi, S. W., Kim, M.-Y., Leibbrandt, A., Parnell, S. M., Jenkinson, W. E., Glanville, S. H., . . . Anderson, G. (2007). RANK signals from CD4+3− inducer cells regulate development of Aire-expressing epithelial cells in the thymic medulla. The Journal of Experimental Medicine, 204(6), 1267–1272. doi:10.1084/jem.20062497

Sleeman, J. E., & Lamond, A. I. (1999). Newly assembled snRNPs associate with coiled bodies before speckles, suggesting a nuclear snRNP maturation pathway. Current Biology, 9(19), 1065–1074. doi:10.1016/S0960-9822(99)80475-8

Su, M. A., Giang, K., Žumer, K., Jiang, H., Oven, I., Rinn, J. L., . . . Anderson, M. S. (2008). Mechanisms of an autoimmunity syndrome in mice caused by a dominant mutation in Aire. The Journal of Clinical Investigation, 118(5), 1712–1726. doi:10.1172/JCI34523

Tykocinski, L.-O., Sinemus, A., & Kyewski, B. (2008). The Thymus Medulla Slowly Yields Its Secrets. Annals of the New York Academy of Sciences, 1143(1), 105–122. doi:10.1196/annals.1443.018

Tykocinski, L.-O., Sinemus, A., Rezavandy, E., Weiland, Y., Baddeley, D., Cremer, C., . . . Kyewski, B. (2010). Epigenetic regulation of promiscuous gene expression in thymic medullary epithelial cells. Proceedings of the National Academy of Sciences, 107(45), 19426–19431. doi:10.1073/pnas.1009265107

Ucar, O., & Rattay, K. (2015). Promiscuous Gene Expression in the Thymus: A Matter of Epigenetics, miRNA, and More? Frontiers in Immunology, 6. doi:10.3389/fimmu.2015.00093

Ucar, O., Tykocinski, L.-O., Dooley, J., Liston, A., & Kyewski, B. (2013). An evolutionarily conserved mutual interdependence between Aire and microRNAs in promiscuous gene expression. European Journal of Immunology, 43(7), 1769–1778. doi:10.1002/eji.201343343

Waterfield, M., Khan, I. S., Cortez, J. T., Fan, U., Metzger, T., Greer, A., . . . Anderson, M. S. (2014). The transcriptional regulator Aire coopts the repressive ATF7ip-MBD1 complex for the induction of immunotolerance. Nature Immunology, 15(3), 258–265. doi:10.1038/ni.2820

Yano, M., Kuroda, N., Han, H., Meguro-Horike, M., Nishikawa, Y., Kiyonari, H., . . . Matsumoto, M. (2008). Aire controls the differentiation program of thymic epithelial cells in the medulla for the establishment of self-tolerance. The Journal of Experimental Medicine, 205(12), 2827–2838. doi:10.1084/jem.20080046

Zhang, M., Vacchio, M. S., Vistica, B. P., Lesage, S., Egwuagu, C. E., Yu, C.-R., . . . Gery, I. (2003). T Cell Tolerance to a Neo-Self Antigen Expressed by Thymic Epithelial Cells: The Soluble Form is More Effective Than the Membrane-Bound Form. The Journal of Immunology, 170(8), 3954-3962. doi:10.4049/jimmunol.170.8.3954

Žumer, K., Plemenitaš, A., Saksela, K., & Peterlin, B. M. (2011). Patient mutation in AIRE disrupts P-TEFb binding and target gene transcription. Nucleic Acids Research, 39(18), 7908–7919. doi:10.1093/nar/gkr527

Žumer, K., Saksela, K., & Peterlin, B. M. (2013). The Mechanism of Tissue-Restricted Antigen Gene Expression by AIRE. The Journal of Immunology, 190(6), 2479–2482. doi:10.4049/jimmunol.1203210

Access Full PDF: AIRE Deficiency Exposes Inefficiencies of Peripheral Tolerance Leading to Variable APECED Phenotypes

Exploring Carry-Over Effects to Elucidate Attention Bias Modification’s Mixed Results

doi:10.22186/jyi.31.3.9-14
 

Abstract

Attention bias modification (ABM) has been shown to decrease self-reported symptom severity for those with social anxiety disorder (SAD). ABM may also decrease the attention bias toward threat present in SAD. Currently, the most prominent form of ABM is a modified dot-probe paradigm that uses two affective/emotional faces to measure or train attention bias. Previous studies report mixed results regarding the ability of ABM to alter attention bias. Carry-over effects from trial to trial may help determine the origin of these variations, as the presence of carry-over effects may alter the component of attention measured in each dot-probe trial. In this study, 85 subjects were randomly assigned to three ABM training conditions: attend positive, attend threat, and a control training. Attention bias scores were measured before and after ABM. No carry-over effects were found for any group or ABM condition. In addition, attention bias scores did not differ between groups or training conditions at initial assessment, and the ABM training did not alter attention bias scores. Although our findings do not support the role of carry-over effects, it is possible that carry-over effects may help elucidate the role of attention bias both in the dot-probe paradigm and in SAD. Thus, carry-over effects should be considered in dot-probe data analysis.

Introduction

Social Anxiety Disorder (SAD) is characterized by excessive worry and self-consciousness surrounding social situations due to significant discomfort, negative internal evaluations, or anticipated embarrassment (American Psychiatric Association, 2013). SAD is a debilitating disorder which leads to significant impairment in work and social situations. Annually, about 7% of the population meets DSM-5 criteria for SAD and it accounts for 10% to 20% of people with anxiety disorders in outpatient settings (American Psychiatric Association, 2013; Ruscio et al., 2008). Not surprisingly, treatment for this disorder is in high demand.

In an effort to reduce anxiety symptoms, attention bias modification (ABM) has been explored as a potential computer-based treatment for SAD designed to alter how people process and pay attention to social information (Amir et al., 2008; Heeren, Lievens, & Philippot, 2011; Klumpp & Amir, 2010). ABM is a modified dot-probe paradigm using affective/emotional stimuli such as facial expressions or words. During the modified dot-probe task, two stimuli appear on the screen for less than 1000 milliseconds, after which a target (letter or symbol) replaces one of the stimuli. The participant is asked to respond to the target as quickly and accurately as possible. The trial is named for the type of stimulus being replaced by the target (i.e., if a threat stimulus is replaced, the trial is a threat trial).

In the literature, three ABM conditions have been tested: attend positive (positive stimulus is more often replaced by a target), attend threat (negative stimulus is more often replaced by a target), and a no-training control (stimulus types are replaced by a target with equal probability; Frewen, Dozois, Joanisse, & Neufeld, 2008; Klumpp & Amir, 2010). An attention bias, the tendency to attend to a certain type of information over other types of information, can be determined by calculating the difference in reaction times between threat trials and positive trials (Amir et al., 2009; Izetelny, 2006). This calculation is most commonly carried out using data from an assessment task, which is often a dot-probe task similar to the control training (Boettcher et al., 2013; White, Suway, Pine, Bar-Haim, & Fox, 2011).

Individuals with SAD often demonstrate an attention bias to threat in these tasks (Bogels & Mansell, 2004). ABM has been developed in an effort to reduce this attention bias. In the dot-probe paradigm, attention bias manifests itself in reaction times. For example, if a SAD subject has an attention bias towards threat, the subject is, on average, quicker to respond to threat trials than to positive trials. This occurs because the subject’s attention is immediately drawn to the threatening stimuli. When the stimulus is replaced with an arrow, the subject will then respond more quickly.

However, previous findings on the subject are mixed, with some studies reporting no bias among SAD subjects (Boettcher et al., 2013; Bradley et al., 1997) or even a bias away from threat (Pittig, Pawlikowski, Craske, & Alpers, 2014; Harrison & Turpin, 2003). This complicates the literature on ABM to date, as individual attention bias may interact with ABM tasks to produce inconsistent results.

A question in the literature that has not yet been adequately addressed is how ABM modifies attention bias. Studies have shown that it is possible to modify attention bias towards the direction of the training (Li, Tan, Qian, & Liu, 2008; Amir et al., 2009; White et al., 2011). In contrast, Heeren et al. (2011) and Boettcher et al. (2013) report that neither type of ABM, either towards or away from threat, was able to modify attention bias. With all of these conflicting studies, it is difficult to identify a clear mechanism underlying ABM effects.

One way to further explore the underlying mechanism of ABM is to investigate carry-over effects. A carry-over effect is the tendency of one stimulus to influence the evaluation of the next stimulus (Nonyane & Theobald, 2007). Two important carry-over effects are assimilation and contrast. Assimilation occurs when the previous stimulus is congruent with the current stimulus (e.g., a threat trial follows another threat trial). Contrast occurs when the trials are incongruent (e.g., a threat trial follows a positive trial).

The majority of carry-over effects are studied in the emotional Stroop task due to its multiple formats (Holle, Neely, & Heimberg, 1997; Waters, Sayette, Franken, & Schwartz, 2005). Similar to the modified dot-probe task, the emotional Stroop task measures attention through reaction time. One type of Stroop task, blocked, involves presenting the same type of stimuli in a series or block and represents assimilation carry-over effects. The blocked emotional Stroop task has been linked to attention maintenance (maintaining focused attention) in SAD and healthy controls through assimilation effects (Holle et al., 1997; Jones-Chester, Monsell, & Cooper, 1998; Waters & Feyerabend, 2000; Waters et al., 2005). The second type of Stroop task, mixed, involves presenting different stimulus types in a random order and represents contrast carry-over effects. The mixed emotional Stroop task has been linked to orienting of attention (i.e., initially focusing one’s attention) in SAD and healthy controls through contrast effects (Holle et al., 1997; Jones-Chester et al., 1998; Waters & Feyerabend, 2000; Waters et al., 2005). This task may generalize to social experiences, in that a blocked format Stroop task represents a cue-rich environment such as a party, while a randomized format Stroop task represents a mix of social and non-social cues more similar to a single interpersonal interaction (Waters et al., 2005).

Although the dot-probe task uses a mixed format, assimilation and contrast effects may be measured. This is possible because carry-over effects may be simplified to a single previous trial rather than multiple previous trials (Aguirre, 2007). Thus, assimilation effects in the ABM paradigm occur on congruent trials, where the stimulus replaced by the target is the same emotion as the stimulus replaced during the previous trial. Contrast effects occur during incongruent trials, where the stimulus replaced by the target is different than that replaced in the previous trial. These effects have not previously been studied in the ABM paradigm. In addition, behavioral biases from carry-over effects are shown to be sensitive to task context, so a generalization between task and stimulus may not always be made (Aguirre, 2007). Accordingly, carry-over effects must be studied not only in the Stroop task but also in ABM.

The purpose of this study was to investigate why ABM alters attention bias in some cases but not in others. This research will help determine the role of attention bias in the dot-probe paradigm and ABM training. It is possible that the modified dot-probe can be linked to the two components of attention in the same way as the emotional Stroop task, since both tasks contain the same carry-over effects. Klumpp and Amir (2010) and O’Toole and Dennis (2012) were among the first to introduce and test the idea that attention switching/disengaging from threat is the effective component of ABM. This same disengagement from threat stimuli in those with SAD is key to good performance (faster and more accurate responses) on the Stroop task as well (Cisler, Bacon, & Williams, 2009). Although blocked and mixed formats only exist in the Stroop task, dot-probe tasks simulate these with the congruent and incongruent trials explained above. Therefore, we examined carry-over effects during dot-probe assessments, before and after ABM, to determine whether carry-over effects were linked to attention bias scores. If carry-over effects are linked to attention bias scores, the preceding stimulus will affect the current stimulus, and reaction times will differ for congruent and incongruent trial sequences ending in the same trial type (threat or positive).

Methods

Participants

Participants (N=85) were recruited using flyers and online advertisements posted at a large university medical center and the surrounding community. Members of the surrounding community and patients seeking treatment at the university-affiliated outpatient anxiety clinic responded to the advertisements to express their interest in participating. The study protocol was approved by the Institutional Review Board at the University of Michigan Medical School. Participants with SAD and healthy controls (HC) were recruited through telephone and in-person diagnostic screenings with the Mini International Neuropsychiatric Interview (MINI; Sheehan et al., 1998), the Liebowitz Social Anxiety Scale (LSAS; Liebowitz, 1987), the Social Interaction Anxiety Scale (SIAS; Brown, Turovsky, Heimberg, Juster, Brown, & Barlow, 1997), and the Beck Depression Inventory (BDI-II; Beck, Steer, Ball, & Ranieri, 1996). A licensed clinical psychologist performed all diagnostic assessments. All SAD participants had a current and primary SAD diagnosis based on the diagnostic assessment. HC were free of current or past diagnosis of Axis I or II disorders. All participants were at least 18 years of age and provided written informed consent.

Participants with the following were excluded from the study: a) clinically significant medical or neurologic condition; b) primary psychiatric disorder other than SAD; c) life history of schizophrenia or bipolar disorder; d) current major depressive disorder; e) alcohol/drug abuse or dependence in the past year; f) current suicidal ideation; g) presence of an organic mental syndrome, mental retardation, or pervasive developmental disorder; h) unwilling/unable to sign the informed consent document; and i) current psychiatric medication use other than selective serotonin reuptake inhibitor (SSRI). SSRI medication at a stable dose for three months was permitted in the SAD group. Medications for minor conditions that would not directly affect measures related to the study (birth control, supplements, etc.) were also permitted. Beta blockers, stimulants, opioids, and other medications with known effects on sympathetic/parasympathetic functioning and/or attentional processing were not permitted. Current unipolar major depression or drug/alcohol abuse/dependence was an exclusion criterion for this study, but a past history of these conditions was not. Since these conditions are frequently present in SAD participants, exclusion of such participants would yield a biased and unrepresentative sample of the SAD population. 

Procedure

After written informed consent was obtained, each participant was randomly assigned to one of three conditions (attend positive, attend threat, or control). Participants completed all tasks and self-report measures in the lab with researcher supervision over two visits. During the first visit, participants were screened for participation criteria and completed all self-report paper measures (LSAS, SIAS, BDI). During the second visit, participants completed three dot-probe tasks, including pre-assessment, ABM training, and post-assessment. Although not included in these analyses, self-report paper measures were repeated following ABM training.

Attention Bias Assessment

The attention bias assessment task occurred before and after ABM to assess attention bias. The assessment task was a modified dot-probe task consisting of 94 trials. Participants were presented with a fixation cross for 500ms followed by two affective faces, one positive (happy) and one negative (angry), for 500ms. After the faces disappeared, one face was replaced by an arrow. Participants were instructed to respond to the direction of the arrow (up or down), by pressing the arrow keys on a standard keyboard, as “quickly and accurately as possible”. The next trial began after a response was made. During this task, the positive and negative faces were replaced by the arrow with equal probability (50%). Reaction time and accuracy, used to determine the location of the participant’s attention, were recorded on each trial. Incorrect trials and trials where the reaction time was greater than three standard deviations from the mean of each participant were excluded.
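
The exclusion rule above can be sketched in a few lines. This is an illustrative reconstruction only; the trial record layout and the function name `clean_trials` are assumptions, not the authors' actual analysis code:

```python
# Illustrative sketch of the trial-exclusion rule: drop incorrect trials,
# then drop trials whose reaction time (RT) falls more than three standard
# deviations from that participant's mean RT. Data layout is hypothetical.
from statistics import mean, stdev

def clean_trials(trials):
    """trials: one participant's trials, dicts with 'rt' (ms) and 'correct' (bool)."""
    correct = [t for t in trials if t["correct"]]
    rts = [t["rt"] for t in correct]
    m, sd = mean(rts), stdev(rts)
    # Keep only trials within 3 SD of this participant's own mean RT.
    return [t for t in correct if abs(t["rt"] - m) <= 3 * sd]
```

Note that the cutoff is computed per participant, so a slow responder is not penalized relative to the group.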

Attention Bias Modification (ABM)

Prior to training, the participants were told researchers were testing a computer-based attention training program that could alter the way people process and pay attention to social information, which may help reduce SAD symptoms. ABM followed the same format as the assessment task. The only differences between the assessment and ABM were the face replacement ratio and the length. We employed three ABM training conditions: attend positive (arrow replaced the positive face 80% of the time), attend threat (arrow replaced the negative face 80% of the time), and control (both face types were replaced with equal probability). Each training session consisted of 744 trials. Note that the control ABM training was identical to the assessment task, excluding the number of trials, in order to control for the impact of viewing faces and engaging in the task without manipulating attentional direction. All faces used in both ABM and the assessments were from the Pictures of Facial Affect (Ekman & Friesen, 1976), a widely used standardized face set.
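
As a concrete illustration, the three replacement ratios above could be used to generate a training schedule as follows. The function name, parameters, and data layout are hypothetical, not taken from the study's software:

```python
# Hypothetical sketch of generating an ABM trial schedule: each entry
# records which face the arrow will replace on that trial, drawn with a
# probability set by the training condition (80/20 or 50/50).
import random

# Probability that the arrow replaces the POSITIVE face in each condition.
REPLACE_POSITIVE = {"attend_positive": 0.8, "attend_threat": 0.2, "control": 0.5}

def make_schedule(condition, n_trials=744, seed=0):
    rng = random.Random(seed)  # seeded for a reproducible example
    return ["positive" if rng.random() < REPLACE_POSITIVE[condition] else "threat"
            for _ in range(n_trials)]
```

With this setup, the control schedule is simply the assessment task's 50/50 contingency extended to 744 trials, which matches the design described above.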

Data Analysis

All analyses were conducted with International Business Machines Corporation’s (IBM) Statistical Package for the Social Sciences (SPSS v. 21). The threshold for significance was set at an alpha level of 0.05. Attention bias was calculated by subtracting the average reaction time on positive trials (arrow replaced the positive face) from the average reaction time on threat trials (arrow replaced the threat face), so that a negative score indicates a bias towards threat. To examine carry-over effects, each trial was categorized based on not only the current trial but also the trial that preceded it. As seen in Figure 1, a threat-to-positive trial occurred when the current trial was a positive trial preceded by a threat trial.
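
The two derived measures described above (the attention bias score and the carry-over trial labels) can be sketched as follows, following the paper's sign convention that a negative score indicates a bias towards threat. The data layout and function names are assumptions for illustration:

```python
# Bias score: mean threat-trial RT minus mean positive-trial RT, so a
# negative score indicates a bias towards threat (faster threat responses).
def bias_score(trials):
    """trials: list of (trial_type, rt_ms), trial_type 'threat' or 'positive'."""
    threat = [rt for kind, rt in trials if kind == "threat"]
    positive = [rt for kind, rt in trials if kind == "positive"]
    return sum(threat) / len(threat) - sum(positive) / len(positive)

# Carry-over labels: categorize each trial by the trial that preceded it,
# e.g. "threat to positive" when a positive trial follows a threat trial.
def carry_over_labels(trials):
    return [f"{trials[i - 1][0]} to {trials[i][0]}"
            for i in range(1, len(trials))]
```

Note that the first trial of a block has no predecessor and therefore receives no carry-over label.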

A 2 (time; pre-ABM, post-ABM) X 4 (trial type; positive to positive, threat to threat, positive to threat, threat to positive) X 2 (group; HC, SAD) X 3 (ABM condition; toward threat, toward positive, control) mixed ANOVA with time and trial type as within-subjects variables and group and condition as between-subjects variables was run with reaction time as the dependent variable. This analysis examined how the congruence of the arrow replacement on the previous trial affected reaction time on the next trial for different face types in HC and SAD.

To examine initial differences in HC and SAD groups, a t-test comparing mean pre-ABM bias scores in SAD versus HC was conducted. Finally, to determine if attention bias changed as a function of group or condition, we conducted a 2 (time; pre-ABM, post-ABM) X 2 (group; HC, SAD) X 3 (ABM condition; attend threat, attend positive, control) mixed ANOVA with time as a within-subject variable, group and condition as between-subject variables, and attention bias score as the dependent variable.


Results

Self-Report Measures

Participants were similar across condition and group in age [F(2,79) = 0.859, p = .428] and gender [F(2,78) = 0.205, p = .815]. Participants were also similar within group across ABM conditions on all self-report measures: SIAS [F(2,79) = 0.282, p = .755], LSAS [F(2,79) = 2.105, p = .129], and BDI [F(2,79) = 0.142, p = .868]. There were significant differences between the HC and SAD groups on SIAS [F(1,79) = 356.4, p < .001], LSAS [F(1,79) = 365.5, p < .001], and BDI [F(1,79) = 49.76, p < .001], with SAD subjects reporting significantly more symptoms on all measures than HC. Demographics and self-report measures are summarized in Table 1.


Table 1. Group Demographics. Mean demographics and standard deviations (in parentheses) at the baseline assessment time point. A score of >38 on the SIAS and >55 on the LSAS indicate clinically significant SAD.

 

Attention Biases to Affective Faces

Mean biases and standard deviations for all groups and conditions are shown in Table 2. The average pre-ABM attention bias was 0.856ms (standard deviation: 32.7ms) for HC and -6.05ms (standard deviation: 32.4ms) for SAD. A negative bias indicates a bias towards threat, with a score of zero indicating no bias in either direction. One-sample t-tests confirmed that SAD and HC subjects’ bias scores did not differ significantly from zero, suggesting no attention bias prior to training in either group (p > .05). Attention bias did not change as a function of group or condition: no main effects or interactions were significant (p > .05). A one-sample t-test confirmed that SAD and HC subjects’ bias scores did not differ significantly from zero after ABM, suggesting no attentional bias prior to or after ABM, regardless of condition (p > .05).


Table 2. ABM Attention Bias Scores. Pre- and Post- ABM attention bias scores and standard deviations (in parentheses) for all groups and conditions are shown. Mean biases are in milliseconds and a negative bias indicates a bias towards threat.

The Effect of Carry-Over on Reaction Times

There was a significant main effect of time on reaction times [F(1,77) = 33.41, p < .001] but no effect of trial type [F(3,75) = 0.378, p = .798]. There was also no interaction between time, trial type, and either group [F(3,75) = 0.876, p = .457] or condition [F(6,150) = 1.058, p = .391]. This demonstrates that participants’ responses became faster from the pre- to the post-ABM assessment independent of trial type, group, or condition. No other main effects or interactions were significant (p > .05).

Discussion

The purpose of this study was to determine why ABM alters attention bias in some cases but not in others, with specific focus on carry-over effects, in an effort to better understand inconsistencies in the ABM literature. To our knowledge, this is the first study examining carry-over effects in a modified dot-probe paradigm. Our findings did not replicate previous reports that ABM alters attention bias (Li et al., 2008; Amir et al., 2009; White et al., 2011). However, there are numerous possible interpretations of these findings, including: 1) congruent and incongruent trials are measuring different components of attention; 2) two types of SAD biases exist, which impact response to training; and 3) trial order randomization may have impacted carry-over effects. Because of these possibilities, future studies should consider analyzing congruent and incongruent trials separately, as well as matching individual participants to different training protocols based on pre-training bias scores. 

The majority of previous literature reports that participants with SAD have an attention bias towards threat, compared to positive faces, and HC show little to no attention bias either towards or away from threat faces (Bogels & Mansell, 2004). This study did not support these findings. The results of this study are consistent with the findings of Boettcher et al. (2013), who also report that SAD subjects’ bias scores did not differ significantly from zero and that neither type of ABM, either towards or away from threat, was able to modify attention bias. Despite the lack of significant ABM effects, we did obtain the expected effect of time, such that participants’ responses became faster from pre- to post- training. If carry-over effects were detected, we would have expected to see differences in the average reaction time for trial types ending in the same trial (e.g., threat to positive and positive to positive would differ). Since these differences were not present in our data, carry-over effects were not detected, indicating that carry-over effects were not present in this dot-probe paradigm. However, there may have been other contributing or conflicting factors.

One possible factor influencing our findings is that congruent and incongruent trials measure different components of attention. Carry-over effects in the emotional Stroop task allude to these two components of attention: orienting (to initially focus one’s attention) and maintenance (to maintain that focus; Waters et al., 2005). Similar to Stroop, it is possible that in the dot-probe task congruent trials measure maintenance of attention, while incongruent trials measure orienting attention. Dot-probe task data from subjects who display a bias in one direction should be analyzed in order to determine if different components of attention are being measured. This would also help determine how an initial attention bias in SAD affects the outcome of ABM and how initial bias can be utilized to improve current treatment. Future studies should aim to better understand how different components of attention impact performance on the dot-probe task. 

A bias must exist in order to use carry-over effects to elucidate whether dot-probe measures two components or one component of attention. Because subjects in this study did not initially show a bias, their bias was not shifted as a result of ABM (O’Toole & Dennis, 2012). Thus, we were not able to further explore relationships between the different components of attention and trial types on the dot-probe task.

It is also possible that two types of SAD biases exist. There is some evidence in the literature that SAD subjects can be either “attenders” or “avoiders” of threat (O’Toole & Dennis, 2012). This means that some subjects with SAD have a bias toward threat while others have a bias away from threat. Most studies (Amir et al., 2008; White et al., 2011; Heeren et al., 2011) assume that all SAD subjects have the same bias. This would explain why some studies show a bias while others do not, resulting in significant variability within groups and making it difficult to detect effects of ABM. In our study, we did not have enough participants to properly investigate this theory, but it is possible that individual differences in attention bias interact with training type, resulting in certain people benefitting more from specific types of training. In order to optimize training effects, it is important to consider interactions between initial bias and training condition. Heeren et al. (2011) and Klumpp and Amir (2010) state that the process of disengaging attention may be the mechanism underlying ABM’s effectiveness. Therefore, to facilitate the most disengagement, it may be important to match initial bias with training condition. An attender (someone with a bias towards threat) may benefit most from the attention towards positive condition which requires them to disengage from the threat face, whereas an avoider (someone with a bias away from threat) may benefit most from training towards threat. Separating these two groups and training them differently may be the key to providing the ideal treatment for SAD.

The absence of carry-over effects could also be explained by the trial randomization procedure used in the dot-probe paradigm: randomizing trial order is intended to balance out the influence of preceding trials, and may therefore have washed out any systematic carry-over effects.

Limitations

When interpreting these findings, there are a number of limitations to consider. First, this was a pilot study aiming to investigate effects of ABM on SAD symptoms. Therefore, the null findings may be due to lack of power. Statistical power analysis suggests that our sample size of approximately 14 participants per cell yields low to moderate power (65%) to detect differences between groups and ABM conditions. Another limitation stems from the intervention length. Although participants spent about 40 minutes completing 744 ABM trials, only a single session was completed. To properly simulate a future treatment, participants should engage in multiple sessions over a longer period of time. It is possible that additional sessions are necessary in order to produce alterations in attention bias (Li et al., 2008). Finally, a possible motivation limitation exists. Patients receiving a treatment, compared to paid participants, may be more motivated and vigilant during training and thus may produce more robust results. In order to ameliorate some of these limitations, future studies should aim to further investigate the role carry-over effects play in ABM in larger samples with additional assessment tools.

Conclusion

SAD is a debilitating disorder which leads to significant impairment in work and social situations. As a result of ongoing research in this field, computer-based ABM is a possible alternative to current SAD treatments. Although further research is needed to make this type of ABM effective, a great deal is being learned about attention and SAD through this paradigm. Ample evidence suggests that attention bias plays an integral role in SAD and its treatment (Amir et al., 2009; Frewen, Dozois, Joanisse, & Neufeld, 2008). Consequently, we can begin manipulating treatment to best serve patients. In an effort to further refine computer-based ABM protocols, this study examined carry-over effects as a possible explanation for inconsistent findings in the existing literature. Based on the findings of this and other studies, future studies should consider analyzing congruent and incongruent trials separately as well as grouping participants based on pre-training attention bias. Since ABM is more cost-effective and easier to distribute than any current therapy for SAD, it has the potential to relieve many SAD patients of their symptoms.

Acknowledgements

All of the data collection and analysis took place in Dr. Israel Liberzon’s Human Neuroimaging Lab at the University of Michigan. I would like to thank Elizabeth Duval, Ph.D. for allowing me to work on her study and use the results in my analysis, and for reviewing and editing this paper. Her time and effort in teaching me about ABM and how to analyze and discuss the data is greatly appreciated.

I would also like to thank those who assisted in this research and subsequent paper: Daniel Sheridan Ph.D., RN, FAAN for reviewing and editing this paper; Sonalee Joshi for reviewing and editing this paper, data collection, and scoring; and Kelsey Krus for data collection and scoring. This research and my time at the University of Michigan Psychiatry Department would not be possible without contributions from the Undergraduate Research Opportunities Program (UROP) and Biomedical & Life Sciences Summer Fellowship donors.

References

Aguirre, G. K. (2007). Continuous carry-over designs for fMRI. NeuroImage, 35(4), 1480–1494. doi:10.1016/j.neuroimage.2007.02.005

Amir, N., Weber, G., Beard, C., Bomyea, J., & Taylor, C. T. (2008). The effect of a single-session attention modification program on response to a public-speaking challenge in socially anxious individuals. Journal of Abnormal Psychology, 117(4), 860-868. doi:10.1037/a0013445

Amir, N., Beard, C., Taylor, C. T., Klumpp, H., Elias, J., Burns, M., & Chen, X. (2009). Attention training in individuals with generalized social phobia: A randomized controlled trial. Journal of Consulting and Clinical Psychology, 77(5), 961–973. doi:10.1037/a0016685

American Psychiatric Association. (2013). Diagnostic and Statistical Manual of Mental Disorders, (5th ed.). Washington, DC: American Psychiatric Association.

Beck, A.T., Steer, R.A., Ball, R., & Ranieri, W.F. (1996). Comparison of Beck Depression Inventories -IA and -II in psychiatric outpatients. Journal of Personality Assessment, 67(3) 588-597. doi:10.1207/s15327752jpa6703_13

Boettcher, J., Leek, L., Matson, L., Holmes, E. A., Browning, M., MacLeod, C., … Carlbring, P. (2013). Internet-based attention bias modification for social anxiety: A randomised controlled comparison of training towards negative and training towards positive cues. PLoS ONE, 8(9). doi:10.1371/journal.pone.0071760

Bogels, S. M., & Mansell, W. (2004). Attention processes in the maintenance and treatment of social phobia: Hypervigilance, avoidance and self focused attention. Clinical Psychology Review, 24, 827-856. doi:10.1016/j.cpr.2004.06.005

Bradley, B. P., Mogg, K., Millar, N., Bonham-Carter, C., Fergusson, E., Jenkins, J., & Parr, M. (1997). Attentional biases for emotional faces. Cognition & Emotion, 11(1), 25–42. doi:10.1080/026999397380014

Brown, E. J., Turovsky, J., Heimberg, R. G., Juster, H. R., Brown, T. A., & Barlow, D. H. (1997). Validation of the social interaction anxiety scale and the social phobia scale across anxiety disorders. Psychological Assessment, 9(1), 21-27. doi:10.1037/1040-3590.9.1.21

Carlbring, P., Apelstrand, M., Sehlin, H., Amir, N., Rousseau, A., Hofmann, S. G., & Andersson, G. (2012). Internet-delivered attention bias modification training in individuals with social anxiety disorder – a double blind randomized controlled trial. BMC Psychiatry, 12(1), 66. doi:10.1186/1471-244X-12-66

Cisler, J. M., Bacon, A. K., & Williams, N. L. (2009). Phenomenological characteristics of attentional biases towards threat: A critical review. Cognitive Therapy and Research, 33(2), 221–234. doi:10.1007/s10608-007-9161-y

Ekman, P., & Friesen, W. V. (1976). Pictures of Facial Affect. Consulting Psychologists Press, Palo Alto, CA.

Frewen, P. A., Dozois, D. J. A., Joanisse, M. F., & Neufeld, R. W. J. (2008). Selective attention to threat versus reward: Meta-analysis and neural-network modeling of the dot-probe task. Clinical Psychology Review 28, 307–337. doi:10.1016/j.cpr.2007.05.006

Harrison, L. K., & Turpin, G. (2003). Implicit memory bias and trait anxiety: A psychophysiological analysis. Biological Psychology, 62(2), 97–114. doi:10.1016/S0301-0511(02)00129-1.

Heeren, A., Lievens, L., & Philippot, P. (2011). How does attention training work in social phobia: Disengagement from threat or re-engagement to non-threat? Journal of Anxiety Disorders, 25(8), 1108–1115. doi:10.1016/j.janxdis.2011.08.001

Holle, C., Neely, J., & Heimberg, R. (1997). Blocked versus random presentation and semantic relatedness of stimulus words on response. Cognitive Therapy and Research, 21(6), 681–697. doi:10.1023/A:1021860324879

Izetelny, A. (2006). Attentional biases for negative and positive attachment words associated with individual differences in adult attachment and defensiveness: Effects of stimulus content, intensity, and valence. Carbondale, IL: ProQuest.

Jones-Chester, M. H., Monsell, S., & Cooper, P. J. (1998). The disordered-salient Stroop effect as a measure of psychopathology in eating disorders. International Journal of Eating Disorders, 24, 65-82. doi:10.1002/(SICI)1098-108X(199807)24:1<65::AID-EAT6>3.0.CO;2-Z

Klumpp, H., & Amir, N. (2010). Preliminary study of attention training to threat and neutral faces on anxious reactivity to a social stressor in social anxiety. Cognitive Therapy and Research, 34(3), 263–271. doi:10.1007/s10608-009-9251-0

Li, S., Tan, J., Qian, M., & Liu, X. (2008). Continual training of attentional bias in social anxiety. Behaviour Research and Therapy, 46(8), 905–912. doi:10.1016/j.brat.2008.04.005

Liebowitz, M. R. (1987). Social phobia. Modern Problems in Pharmacopsychiatry, 22, 141-173.

Nonyane, B. A., & Theobald, C. M. (2007). Design sequences for sensory studies: achieving balance for carry-over and position effects. The British Journal of Mathematical and Statistical Psychology, 60(Pt 2), 339–349. doi:10.1348/000711006X114568

O’Toole, L., & Dennis, T. A. (2012). Attention training and the threat bias: An ERP study. Brain and Cognition, 78(1), 63–73. doi:10.1016/j.bandc.2011.10.007

Pittig, A., Pawlikowski, M., Craske, M. G., & Alpers, G. W. (2014). Avoidant decision making in social anxiety: The interaction of angry faces and emotional responses. Frontiers in Psychology, 5(September), 1050. doi:10.3389/fpsyg.2014.01050

Ruscio, A. M., Brown, T. A., Chiu, W. T., Sareen, J., Stein, M. B., & Kessler, R. C. (2008). Social fears and social phobia in the USA: Results from the National Comorbidity Survey Replication. Psychological Medicine, 38, 15-28. doi:10.1017/S0033291707001699

Sheehan, D., Lecrubier, Y., Harnett-Sheehan, K., Amoriam, P., Janavs, J., Weiller, E., et al. (1998). The mini international neuropsychiatric interview (MINI): The development and validation of a structured diagnostic psychiatric interview for DSM-IV and ICD-10. The Journal of Clinical Psychiatry, 59 Suppl 20, 22-33;quiz 34-57.

Waters, A. J., & Feyerabend, C. (2000). Determinants and effects of attentional bias in smokers. Psychology Of Addictive Behaviors, 14(2), 111-120. doi:10.1037/0893-164X.14.2.111

Waters, A. J., Sayette, M. a., Franken, I. H. a, & Schwartz, J. E. (2005). Generalizability of carry-over effects in the emotional Stroop task. Behaviour Research and Therapy, 43(6), 715–732. doi:10.1016/j.brat.2004.06.003

White, L. K., Suway, J. G., Pine, D. S., Bar-Haim, Y., & Fox, N. a. (2011). Cascading effects: The influence of attention bias to threat on the interpretation of ambiguous information. Behaviour Research and Therapy, 49(4), 244–251. doi:10.1016/j.brat.2011.01.004

Access Full PDF: Exploring Carry-Over Effects to Elucidate Attention Bias Modification’s Mixed Results


Mutational Analysis of Transcriptional Initiation in Bacteria

doi:10.22186/jyi.31.3.1-8


Abstract

Gene expression in bacteria begins with transcription, the process by which the base sequence of DNA is copied into RNA. Promoters contain the DNA structural and chemical information for the initiation of transcription by RNA polymerase. Many bacterial promoters have conserved DNA sequences for this purpose, called the -35 region and the -10 region. We used a new system for promoter research called pClone Red to carry out mutational analysis of the -35 region of a high-strength promoter. We constructed a library of 4,096 mutant promoters by varying the six bases in the -35 region of the Ptac promoter. We isolated and sequenced 81 functional promoters from the library and measured the strength of transcription initiation for each of them using a red fluorescent protein reporter gene. We developed a weighted consensus method to determine a pClone -35 consensus sequence formula from the 81 promoters. Consensus sequence variants functioned with a strength similar to that of the published natural consensus for the -35 region and functioned in two other promoters with different sequence contexts. Our results contribute to a basic understanding of transcriptional initiation in bacteria, and our set of 81 promoters with varying strengths provides new tools for applied research in synthetic biology and biotechnology.

Introduction

Gene expression is the process by which gene information is used to direct the function of cells. It is regulated in all cells because not all genes are required all the time or under all circumstances. For example, human brain cells need to express certain genes that are not needed in muscle cells, and vice versa (Gurdon & Melton, 2008). In a similar sense, bacteria must express different genes depending on temperature, pH, osmotic pressure, or the availability of food (Beales, 2004). Knowledge of gene regulation is important for understanding the differentiation and function of eukaryotic cells, the development of tissues in multicellular organisms, and the relationships of bacteria to their environments throughout the biosphere. It helps us to understand genetic diseases, diseases impacted by genetics, and cellular disorders such as cancer (López-Bigas & Ouzounis, 2004). Gene regulation research can also be used to explore the contribution of bacterial microbiomes to human health and disease (Cho & Blaser, 2012). In the context of synthetic biology, research results on the regulation of gene expression are being applied to engineer bacterial cells that produce pharmaceuticals, attack cancer cells, neutralize environmental pollutants, and synthesize biofuels (Khalil & Collins, 2010).

Gene expression begins with transcription, the process by which DNA information, in the form of the base sequence of a gene, is converted into RNA base sequence information. For genes that encode proteins, the RNA product of transcription is used during translation to encode the sequence of amino acids in a protein. As the first step in gene expression, transcription is an important control point for gene regulation. Initiation of transcription in bacteria involves binding of an enzyme called RNA polymerase to a sequence of DNA called a transcriptional promoter. As illustrated in Figure 1, a common form of bacterial promoter includes two conserved sequence elements: a -35 region that is recognized during transcriptional initiation by RNA polymerase and its associated sigma factor, and a -10 region that is involved in DNA melting (Ross, Aiyar, Salomon, & Gourse, 1998). The consensus sequence for the -35 region of E. coli promoters has been widely reported to be TTGACA (Harley & Reynolds, 1987; Lisser & Margalit, 1993). The consensus sequence of the -10 region is TATAAT (Waterman, Arratia, & Galas, 1984). The RNA polymerase attaches to one of the two DNA strands, referred to as the template strand, and begins to use it to make RNA. The RNA polymerase slides along the template strand for the entire length of the gene, reading it in the 3’ to 5’ direction. Transcription ceases when the RNA polymerase encounters a transcriptional terminator. In bacteria, the resulting RNA transcript is used for translation as soon as it is available.
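The idea of a consensus match can be made concrete with a short sketch (ours, purely illustrative, not part of the study's methods): score a -35 hexamer by the number of positions at which it matches the published consensus.

```python
# Illustrative sketch: position-by-position matching of a -35 hexamer
# against the published E. coli sigma 70 consensus (not code from the study).

MINUS35_CONSENSUS = "TTGACA"  # Harley & Reynolds, 1987
MINUS10_CONSENSUS = "TATAAT"  # Waterman, Arratia, & Galas, 1984

def consensus_matches(hexamer: str, consensus: str) -> int:
    """Count the positions at which a hexamer matches a consensus."""
    assert len(hexamer) == len(consensus)
    return sum(a == b for a, b in zip(hexamer.upper(), consensus))

# The Ptac -35 region is identical to the consensus:
print(consensus_matches("TTGACA", MINUS35_CONSENSUS))  # 6
# One of the mutant -35 variants described later in this study:
print(consensus_matches("GATAAA", MINUS35_CONSENSUS))  # 2
```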


Figure 1. Initiation of bacterial transcription. The promoter shown is Ptac. The arrow indicates mutation of each of the six bases (shown as Ns) in the Ptac -35 region, producing a library of 4,096 sequence variants.

Mutational analysis of the -35 region of naturally occurring promoters showed that the -35 region is involved in the initial binding of the RNA polymerase to the promoter, and that it is an important contributor to the overall strength of a bacterial promoter. An in vitro study of the effect of mutations in the -35 region on the binding affinity of RNA polymerase and the rate of open promoter complex formation demonstrated that the consensus sequence of TTGACA exhibited the highest values for both parameters (Kobayashi, Ishihara, & Nagata, 1990). An in vivo mutational analysis revealed that the strength of a promoter was halved by deletions that replace the -35 region with unrelated sequences (Bingham, Busby, Ponnambalam, & Webster, 1986).

We chose to investigate the effect of mutations in the -35 region by 1) building a library of all 4,096 variants of the six-base -35 region, 2) randomly selecting promoters from the library, 3) measuring the strengths of the promoters, 4) determining the DNA sequence of the promoters, and 5) compiling a -35 region consensus sequence from the promoters. For our study, we chose the Ptac bacterial promoter, which is a high-strength hybrid promoter derived from the Ptrp and Plac promoters (DeBoer, 1983; Figure 1). The high strength of Ptac allowed mutations to generate a wide range of lower strengths. The sequence of the Ptac promoter is shown in Figure 1. The sequence of its -35 region is identical to the -35 region consensus of TTGACA. The method by which we introduced mutations into Ptac involves a system recently developed by our research group (Campbell et al., 2014). Figure 2 illustrates the structure of our pClone Red plasmid for cloning and measuring bacterial promoters. The pClone Red plasmid has a backward-facing promoter that causes expression of Green Fluorescent Protein (GFP). A new promoter is cloned into pClone Red by Golden Gate Assembly, a cloning method that employs DNA ligase and type IIS restriction endonucleases such as BsaI to conduct restriction digestion and ligation at the same time for seamless connection of DNA parts (Engler, Gruetzner, Marillonnet, Weber, & Werner, 2011). A new promoter cloned into pClone Red faces forward and expresses Red Fluorescent Protein (RFP). The amount of RFP expressed depends on the strength of the new promoter.


Figure 2. Cloning mutant promoters with pClone Red. Mutant promoters in the form of annealed promoter oligonucleotides can be cloned into pClone Red with Golden Gate Assembly (GGA) using BsaI and DNA ligase. In the resulting GGA products, the strength of the mutant promoters can be measured by RFP production.

We used pClone Red to produce a library of mutant promoters by varying each of the six bases of the -35 region of the Ptac promoter. We collected 81 different clones with varying intensities of the RFP reporter and quantified the strength of transcription initiation for each one. Our results add to the understanding of transcriptional initiation in bacteria. They suggest that promoter sequence variation in the -35 region affects sigma factor binding. Our consensus sequence provides an alternative, testable hypothesis about the expected strengths of bacterial promoters. The 81 promoter variants have various strengths that can be used to build complicated genetic circuits that require specific ratios of gene expression. These genetic circuits have many applications in energy, medicine, bioremediation, and biotechnology.

Materials and Methods

We introduced mutations into the Ptac -35 region by designing DNA oligonucleotides (Integrated DNA Technologies, Inc., Coralville, Iowa). We used six Ns in the place of the TTGACA sequence that comprises the -35 region of the Ptac promoter. The sequence of the top strand oligonucleotide was

(5’-CGACGAGCTGNNNNNNATTAATCATCGGCTCGTATAATGTGTGGA-3’)

and the sequence of the bottom strand oligonucleotide was

(5’-CCGCTCCACACATTATACGAGCCGATGATTAATNNNNNNCAGCTC-3’).

For use in Golden Gate Assembly (GGA), the top strand oligonucleotide starts with a 5’ CGAC sticky-end and the bottom strand begins with a 5’ CCGC sticky-end. Prior to GGA, we annealed the top and bottom strand oligonucleotides at 5µM in the annealing reaction. We prepared a 20µL annealing reaction using 16µL of H2O, 2µL of 10X annealing buffer (1M NaCl, 100mM Tris-HCl pH 7.4), 1µL top strand promoter oligonucleotide, and 1µL bottom strand promoter oligonucleotide. We boiled the annealing reaction in 400mL of water for 4min and allowed it to cool slowly overnight in the same water bath.
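As a sanity check on this oligonucleotide design, the two strands should be exact reverse complements once the 4-base 5’ sticky ends are set aside, with N pairing opposite N. A small sketch (ours, not part of the published protocol):

```python
# Sketch: confirm the top and bottom promoter oligonucleotides form a duplex
# with 4-base 5' overhangs (CGAC and CCGC) for Golden Gate Assembly.
# N is treated as complementary to N. Not code from the study.

COMPLEMENT = str.maketrans("ACGTN", "TGCAN")

def revcomp(seq: str) -> str:
    """Reverse complement of a DNA sequence (N pairs with N)."""
    return seq.translate(COMPLEMENT)[::-1]

top    = "CGACGAGCTGNNNNNNATTAATCATCGGCTCGTATAATGTGTGGA"
bottom = "CCGCTCCACACATTATACGAGCCGATGATTAATNNNNNNCAGCTC"

# Beyond each 4-base 5' overhang, the strands are reverse complements:
assert revcomp(top[4:]) == bottom[4:]
print("overhangs:", top[:4], bottom[:4])
```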

The annealed oligonucleotides were diluted with water to the same concentration as the pClone Red destination vector (Campbell et al., 2014; http://parts.igem.org/Part:BBa_J119137 in the Registry of Standard Biological Parts) to provide a 1:1 molar ratio of promoter insert to pClone vector in the GGA reaction. For GGA, we used a vector concentration of 40nM (60ng/µL of pClone Red) and diluted the annealed and cooled oligonucleotides to 40nM. We diluted the 100µM oligonucleotides to 5µM (5000nM) in the annealing reaction and further diluted the annealed oligos 125-fold (5000nM/40nM) by adding 124µL of H2O to 1µL of the cooled oligonucleotides. We performed GGA reactions in a total volume of 10µL containing 6µL H2O, 1µL 10X T4 DNA Ligase Buffer (supplied by New England BioLabs, Ipswich, Massachusetts as 300mM Tris-HCl pH 7.8 at 25°C, 100mM MgCl2, 100mM DTT, and 10mM ATP), 1µL of 40nM pClone Red destination vector, 1µL of 40nM annealed promoter oligonucleotides, 0.5µL BsaI-HF (high-fidelity BsaI; New England BioLabs, Ipswich, Massachusetts; 10 units), and 0.5µL T4 DNA Ligase (New England BioLabs, Ipswich, Massachusetts; 50 Weiss units). We placed the GGA reactions in a thermal cycler set for 20 cycles of 1min at 37°C followed by 1min at 16°C. The 37°C temperature favors digestion by BsaI-HF while 16°C favors ligation by T4 DNA ligase. The GGA reactions ended with a 15min incubation at 37°C to cleave the remaining BsaI sites. We transformed the completed GGA reactions into Z-competent E. coli JM109 cells (Zymo Research, Irvine, California) by adding competent cells to the completed GGA reaction, placing the mixture on ice for 5min, and pipetting it directly onto an LB agar plate with 50µg/mL ampicillin. We incubated the plates overnight at 37°C.
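The dilution arithmetic in this step can be traced in a few lines (the values come from the text above; the helper function is ours, purely for illustration):

```python
# Sketch of the oligonucleotide dilution arithmetic described above.

def dilute(c_stock_nM: float, v_stock_uL: float, v_total_uL: float) -> float:
    """Concentration after diluting v_stock of stock into v_total total volume."""
    return c_stock_nM * v_stock_uL / v_total_uL

# 1 uL of each 100 uM (100,000 nM) oligo stock in a 20 uL annealing reaction:
annealed_nM = dilute(100_000, 1, 20)      # 5,000 nM (5 uM)
# 1 uL of annealed oligos into 125 uL total (1 uL + 124 uL H2O), a
# 125-fold dilution to match the 40 nM pClone Red vector:
working_nM = dilute(annealed_nM, 1, 125)  # 40 nM
print(annealed_nM, working_nM)
```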

Library Exploration

We photographed pClone mutant promoter library plates to document the colors of the colonies and chose those that had a wide range of RFP expression due to different promoter strengths. We transferred 81 colonies to LB + ampicillin liquid media with a sterile toothpick for overnight growth. We made a glycerol stock for each of them using 300µL sterile glycerol and 700µL of overnight bacterial culture. We determined the strength of the promoters by quantifying the RFP output of clones with a fluorometer (BioTek, Winooski, Vermont) set for 585nm excitation and 615nm emission. We measured cell density with absorbance at 600nm and expressed RFP fluorescence as a ratio with cell density values in the denominator. We isolated plasmid DNA from 8mL cultures of each clone grown overnight in LB with ampicillin using the miniprep kit from Zymo Research, Irvine, California. We sent miniprep aliquots for Sanger DNA sequencing to the Iowa State University DNA Facility, Ames, Iowa.

Consensus Sequence Determination and Testing

Analysis of the sequencing data led to the development of a consensus sequence, called the pClone -35 Consensus. The consensus sequence formula of GA(T/C)A(A/T)A includes the four possible sequence variants GATAAA, GATATA, GACAAA, and GACATA. To test their functions, we incorporated them into the sequence of Ptac and cloned them into pClone Red. We sought to test the effect of promoter sequences surrounding the -35 region by embedding the four consensus sequence variants into two new promoter contexts that are very different from Ptac and from each other. As a starting point for designing the two new promoters, we used a consensus sequence developed from a mutational analysis of the sequences before and after the -35 region, reported as

(5’- WTWNWNNNNAWWWNNTTCTTGACATNTTNNNNNNNWTRTGRTATAATDNNNNNRTWW -3’)

(De Mey, Lequeux, Maertens, Soetaert, & Vandamme, 2007a). Our first set of rules to specify the bases was W=A, N=G, R=G, and D=G, while avoiding runs of 4 or more Gs or Cs. This resulted in Psimp1, whose sequence is

(5’- CGACGAGGCCGAAAAGGTTCTTGACATGTTGGCGGCGATGTGGTATAATGGCGGCGTAA -3’).

The second set of rules we applied was W=T, N=C, R=A, and D=C resulting in Psimp2, with the sequence

(5’- CGACCTCCGCATTTCCTTCTTGACATCTTCCGCCGCTTATGATATAATCCGCCCATTT- 3’).

We cloned the four pClone -35 consensus sequence variants GATAAA, GATATA, GACAAA, and GACATA, along with the original Ptac consensus of TTGACA, into Psimp1 and Psimp2 to produce a collection of ten new promoter variants. We quantified the strengths of the promoter variants by using RFP expression and compared the mean values for each of them with an unpaired t-test using a threshold for statistical significance of p = .05.
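Expanding the degenerate consensus formula into its concrete variants is a small combinatorial step; a sketch (ours, for illustration):

```python
# Sketch: expand the degenerate pClone -35 consensus GA(T/C)A(A/T)A into
# its four concrete sequence variants.
from itertools import product

FORMULA = ["G", "A", "TC", "A", "AT", "A"]  # allowed bases at each position

variants = ["".join(bases) for bases in product(*FORMULA)]
print(variants)  # ['GATAAA', 'GATATA', 'GACAAA', 'GACATA']
```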

Results

Mutant Promoter Library Construction and Exploration

Our primary research goal was to explore the effects of mutations in the -35 region of a bacterial promoter. We mutated each of the six bases in this region, producing a library of 4⁶, or 4,096, mutant promoters. Figure 3 shows a picture of the plated library of clones on a UV light box. Unsuccessful GGA resulted in colonies that expressed the GFP gene, driven by the original backward-facing promoter of the pClone Red plasmid. Successful GGA produced colonies that do not show GFP expression but have varying levels of RFP expression. As expected, mutations in the -35 region of the Ptac promoter yielded diverse promoter strengths, causing the varying RFP brightness of E. coli colonies on the plate.


Figure 3. Library of mutant promoters. The -35 region of the Ptac promoter (TTGACA) was replaced with the sequence NNNNNN(N=A,T,C,G) to produce a library of 4,096 mutant promoters in pClone Red. Green colonies indicate failed GGA. Non-green colonies have variable RFP levels as determined by promoter strength.

Our collection of 81 promoters with their -35 region sequences is listed in Figure 4, along with their strengths, expressed as a ratio of the measured RFP production of each clone to that of Ptac. The 81 promoters covered a wide range of strengths, from 128% of the strength of Ptac to 0.9%. The mean RFP expression ratio of the 81 clones was 19.7 +/- 28.3, and the median RFP expression ratio was 7.3. Of the 81 clones, three have a promoter strength higher than Ptac, 12 have a strength at least 50% of Ptac, and 44 have a promoter strength less than 10% of Ptac.
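Tallies of this kind can be reproduced from any list of strengths; the sketch below runs on a small hypothetical subset (not the published 81 values, which are listed in Figure 4):

```python
# Sketch of the summary statistics computed over promoter strengths
# (percent of Ptac). The list here is hypothetical, not the study's data.
from statistics import mean, median

strengths = [128.0, 118.9, 62.0, 33.5, 7.3, 4.1, 0.9]  # hypothetical subset

print("mean:", round(mean(strengths), 1), "median:", median(strengths))
print("stronger than Ptac:", sum(s > 100 for s in strengths))
print("at least 50% of Ptac:", sum(s >= 50 for s in strengths))
print("below 10% of Ptac:", sum(s < 10 for s in strengths))
```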


Figure 4. Collection of 81 mutant promoters. Eighty-one mutant promoter clones were selected from the pClone Red library. For each mutant promoter clone, the sequence of the -35 region is listed, along with its strength, expressed as the ratio of RFP expression compared to Ptac.

Consensus Sequence Determination

Our examination of the sequences of the -35 regions from the 81 promoters listed in Figure 4 revealed patterns of bases required for high promoter strength. We captured these patterns by determining a consensus sequence. Consensus sequences are usually determined by analyzing the top 5-20% of the promoters (Estrem, Gaal, Gourse, & Ross, 1998). We used this method to determine a consensus sequence from the 12 mutant promoters that had a strength of 50% or higher compared to Ptac. The resulting Preliminary pClone -35 Consensus is shown in Figure 5B. It differs considerably from the published -35 consensus shown in Figure 5A, which was derived from 263 E. coli promoters (Harley & Reynolds, 1987). Specifically, the first, second, third, and fifth bases differ between the two consensus sequences.


Figure 5. Consensus sequences of the -35 region. (A) Published -35 consensus based on 263 naturally occurring E. coli promoters (Harley & Reynolds, 1987); (B) Preliminary pClone -35 consensus determined from the top 12 mutant promoters with strengths of at least 50% of Ptac; (C) pClone -35 consensus determined from all 81 selected promoters using a weighted consensus method.

To use the information collected for all of the 81 promoter sequences, we developed a new method of consensus building. Our weighted consensus method allows each base in each position to make contributions to the consensus that are proportional to the strength of the promoter in which it is found. Our method assigns a score to each base in each position based on the RFP expression of the mutant promoter that contained it. For example, for mutant promoter number 65, which had a promoter strength of 128, the G in the first position received 128 points for that promoter. The T in the second position of promoter number 65 also received 128 points, as did the bases in the remaining four positions. For promoter number 50, which had a promoter strength of 118.9, the G in the first position received 118.9 points, as did the C in the second position and the other bases in their respective positions.

We totaled the points received by each of the four bases for all six positions and used the totals to produce a consensus sequence. For the first position, the totals for G, T, C, and A were 1008, 512, 84, and 76, respectively. The weighted consensus method resulted in the pClone -35 Consensus shown in Figure 5C. The pClone -35 Consensus sequence is similar to the Preliminary pClone -35 Consensus sequence but differs substantially from the Published -35 Consensus sequence.
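The weighted consensus method can be sketched as follows; the example hexamers and strengths here are hypothetical, not the study's clones (the real dataset is in Figure 4):

```python
# Sketch of the weighted consensus method: each base earns points equal to
# the strength of the promoter that carries it, and the highest-scoring base
# at each position enters the consensus. Example data are hypothetical.
from collections import defaultdict

def weighted_consensus(promoters):
    """promoters: list of (hexamer, strength) pairs -> consensus string."""
    length = len(promoters[0][0])
    scores = [defaultdict(float) for _ in range(length)]
    for seq, strength in promoters:
        for i, base in enumerate(seq):
            scores[i][base] += strength
    # At each position, keep the base with the highest point total.
    return "".join(max(pos, key=pos.get) for pos in scores)

example = [("GTCAAA", 128.0), ("GCTATA", 118.9), ("TTGACA", 20.0)]
print(weighted_consensus(example))  # GTCAAA
```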

Testing the pClone Consensus Sequence

Because our synthetic pClone -35 consensus sequence differed from the natural published -35 consensus sequence, we designed experiments to test it. We used the pClone -35 Consensus presented in Figure 5C to develop the consensus formula of GA(T/C)A(A/T)A. This consensus formula has the four consensus sequence variants of GATAAA, GATATA, GACAAA, and GACATA. We incorporated these four variants into the original Ptac promoter for testing. In order to test the effect of promoter sequences surrounding the -35 region, we also incorporated the four consensus sequence variants into two additional promoters designed on the basis of a structural blueprint derived from mutational analysis in E. coli of the sequences before and after the -35 region (De Mey et al., 2007a). As described in Materials and Methods, we used the consensus sequence from De Mey et al. (2007a) to design two new promoters called Psimp1 and Psimp2. Figure 6 shows the RFP results after cloning the original Ptac -35 region sequence of TTGACA and the four pClone -35 consensus sequence variants GATAAA, GATATA, GACAAA, and GACATA into Ptac, Psimp1, and Psimp2. In the context of Ptac, the four consensus sequence variants have promoter strengths relative to Ptac that range from 21.6 to 167.1. The mean strength of the four sequence variants in Ptac was 99.0 +/- 66.4. In the context of Psimp1, the four variants had a mean promoter strength of 85.8 +/- 63.9. An unpaired t-test showed that the difference between this mean and the mean for Ptac is not statistically significant (p = .785). The range of the strengths in Psimp1 was 28.6 to 174.6. In Psimp2, the mean promoter strength of the four variants was 20.0 +/- 8.5. This mean is lower than the mean for Ptac, but an unpaired t-test showed that the difference is not statistically significant (p = .056). The range for Psimp2 was from 10.9 to 27.6.
When we consider the -35 region consensus variants cloned into each promoter as populations, unpaired t-tests fail to reveal statistically significant differences among the three promoter contexts.
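The reported p-values are consistent with a pooled-variance unpaired t-test on n = 4 variants per group (df = 6). A sketch that recovers the t statistics from the reported means and standard deviations (our reconstruction, not the authors' code):

```python
# Sketch: unpaired (pooled-variance, equal-n) t statistics recomputed from
# the summary statistics reported in the text, assuming n = 4 variants per
# promoter context. Our reconstruction, for illustration only.
from math import sqrt

def pooled_t(m1, s1, m2, s2, n):
    """Student's t for two groups of equal size n, from means and SDs."""
    pooled_var = (s1**2 + s2**2) / 2
    return (m1 - m2) / sqrt(2 * pooled_var / n)

t1 = pooled_t(99.0, 66.4, 85.8, 63.9, 4)  # Ptac vs Psimp1
t2 = pooled_t(99.0, 66.4, 20.0, 8.5, 4)   # Ptac vs Psimp2
# At df = 6, t1 of about 0.29 corresponds to p of about .785, and t2 of
# about 2.36 to p of about .056, matching the values reported above.
print(round(t1, 2), round(t2, 2))
```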


Figure 6. Promoter strengths of pClone -35 consensus variants in three sequence contexts. The pClone Red clones shown are the original Ptac -35 region sequence (TTGACA) plus the four variants of the pClone -35 Consensus, GA(T/C)A(A/T)A, in the three contexts of Ptac, Psimp1, and Psimp2. The pictures were taken on a UV box to show RFP expression. The numbers below each picture are the promoter strengths, expressed as the ratio of RFP expression compared to Ptac.

The alternative approach of considering each consensus variant in the three promoter contexts captures important differences among them. For instance, GATAAA produces a strength of 21.6 and 26.9 in Ptac and Psimp2, respectively, but a higher strength of 87.2 in Psimp1. GACAAA produces a strength of 167.1 and 174.6 in Ptac and Psimp1, respectively, but only 10.9 in Psimp2. The order of the sequence variants was different for each of the promoters. For Ptac, we found that GACAAA > GACATA > GATATA > GATAAA. It was GACAAA > GATAAA > GATATA > GACATA for Psimp1, and GACATA > GATAAA > GATATA > GACAAA for Psimp2. These results suggest that bacterial promoter strength is a function of both the sequence of the -35 region and the sequence context surrounding it.

Discussion

Implications for Understanding the Initiation of Bacterial Transcription

Although the core RNA polymerase with its five subunits is responsible for the production of RNA during transcription in E. coli and most bacteria, initiation of transcription requires its association with a subunit called the sigma factor (Gross et al., 1998). The primary sigma factor, sigma 70, initiates transcription of genes needed during exponential growth. The sigma factor enables binding of the RNA polymerase to the -10 region and -35 region of promoters. A conserved domain of sigma factor proteins, region 4.2, is a helix-turn-helix structural motif that contains amino acids that make contact with bases and phosphates on both strands of the -35 region (Gruber & Gross, 2003). When the nascent RNA is five to ten nucleotides long, the sigma factor dissociates from the RNA polymerase. The core RNA polymerase continues with the production of an RNA transcript in the absence of sigma factor. Initiation frequency is responsive to the promoter sequence and can vary over three orders of magnitude (Record et al., 1996). The prevailing view of how promoter strength varies in bacterial genomes is that the strength of sigma factor binding to a particular promoter depends on how closely it matches the consensus sequence. For the -35 region, this would be interpreted in terms of matches to the consensus sequence of TTGACA for E. coli promoters. The variety of -35 region sequences is also likely to be affected by the repeated use of promoter elements during evolution of the E. coli genome. Our study can be viewed as having removed this evolutionary historical bias, revealing the true E. coli consensus sequence by which promoter strength is determined. Removing the historical constraints leads to an alternative testable hypothesis in which the extent to which a particular promoter matches the pClone -35 consensus of GA(T/C)A(A/T)A determines its strength.

E. coli has six sigma factors in addition to sigma 70 that recognize -35 region sequences. Alternative sigma factors enable coordinated transcription of genes needed for responses to heat shock, starvation, and nitrogen limitation (Gross et al., 1998). The seven different sigma subunits in E. coli are used in a global regulatory capacity. Each of the E. coli sigma factors binds to a slightly different -35 region sequence. For example, the consensus sequence is TTGACA for sigma 70, CCCTTGAA for sigma 32, CTGGNA for sigma 54, and CTAAA for sigma F (Gruber & Gross, 2003). The E. coli genome contains more copies of the TTGACA -35 region sequence, since it is used for housekeeping genes. This biases the frequency distribution of -35 region sequences in favor of TTGACA (Harley & Reynolds, 1987; Lisser & Margalit, 1993). Our experimental approach may have removed the bias for TTGACA and allowed increases in the frequency of binding sites for one or more of the other six sigma factors. The result of alternative sigma factor binding would be a change in the consensus sequence away from TTGACA and toward the sequences required for alternative sigma factor binding. The in vivo binding sites for several alternative sigma factors have not been determined. If the RFP production differences between our pClone -35 consensus and the published one are due to interactions between our mutant promoters and alternative sigma factors, they will provide insight into the binding sites for alternative sigma factors. Since our pClone -35 consensus is GA(T/C)A(A/T)A compared to the sigma 70 binding site of TTGACA, the first, second, third, and fifth positions may be interacting with alternative sigma factors.

Contributing a New Promoter Set to Databases

In synthetic biology, a collection of promoter sequences of varying strengths can be a very valuable asset. To share our collection of promoters with the worldwide community of synthetic biology researchers, we contributed three pages to the Registry of Standard Biological Parts, which contains the largest collection of synthetic biology DNA parts in existence, with over 7,000 DNA parts (MIT Working Group, 2005). We contributed the first page as part number J119375 (http://parts.igem.org/Part:BBa_J119375). The Psimp1 and Psimp2 promoters, with four consensus variants each, are presented in the Registry as part numbers J119376 (http://parts.igem.org/Part:BBa_J119376) and J119377 (http://parts.igem.org/Part:BBa_J119377), respectively. We also contributed information about our promoters to a new database called the Registry of Functional Promoters (RFP; http://gcat.davidson.edu/RFP/). RFP is a database that allows high school and undergraduate students to share information about promoters that they have cloned and studied using the pClone Red system. A third database to which we submitted information about the promoters is called the GCAT-alog (http://gcat.davidson.edu/GCATalog/). This database contains information about the freezer location of clones that synthetic biologists can access in order to share DNA parts and devices with each other.

Applications Using Promoters with Varying Strengths

The results of our study provided sequences and promoter strengths for a collection of 81 newly cloned promoters. An additional 19 promoters with changes in their -35 and -10 regions have been part of the Registry of Standard Biological Parts since 2004 (MIT Working Group, 2005). The sequence of each mutant promoter is presented along with relative strengths determined by RFP expression data (http://parts.igem.org/Part:BBa_J23100). There is also a collection of 14 promoters of varying strengths in a paper published in 2011 (Davis, Rubin, & Sauer, 2011). In synthetic biology, users can pick and choose promoters with different strengths for use in a wide range of applications. In the field of medicine, synthetic biology has been used to improve diagnosis and make medical treatments more effective with new drug discovery and drug delivery (Khalil & Collins, 2010). These applications rely on well-characterized promoter functionality. The tuning of synthetic biology devices with promoters of various strengths is fundamental to the growing field of biological engineering. Synthetic biology is also being used to develop microbial systems for biofuel production (Deng, Guo, Liu, Tao, & Zhang, 2015). The production of biofuels using microbial cells requires promoters with various strengths to control the transcription of genes that encode enzymes in biofuel metabolic pathways. Metabolic engineers have also repurposed naturally occurring metabolism in order to convert inexpensive starting materials into valuable metabolites (Keasling, 2010). Promoters of various strengths can be used to balance the desired orthogonal metabolic output with the metabolic cost to the bacteria (Eckdahl et al., 2015).

Future Prospects for Mutational Analysis of Bacterial Gene Expression

The pClone Red system facilitates further investigation of the initiation of transcription in bacteria. Our results showing the effects of -35 region sequence context indicate that we do not fully understand the effects of sequences upstream and downstream of the -35 region. Production of pClone Red libraries with variations in nearby sequences would enable the collection of promoter strength data and the development of consensus sequences for the upstream and downstream elements. Our approach of producing a library of mutant promoters by mutation of the -35 region of Ptac could also be adapted to the -10 region of Ptac or another promoter. Comparison of the resulting consensus sequence to the published consensus of TATAAT would add insight into the role of the -10 region in the initiation of transcription. In addition, the pClone Red system could be used to study regulated promoters, in contrast to constitutive promoters such as Ptac. It is also possible for mutations to be introduced into Ptac or other promoters that reduce or increase the distance between the -35 region and the -10 region.

While constitutive promoters initiate transcription at a constant level, regulated promoters respond to internal or external cues. The regulation usually occurs by the binding of a regulatory protein to a DNA sequence found in or near a transcriptional promoter. Repressor regulatory proteins turn off the initiation of transcription while activators turn it on. Each of the two types of regulatory proteins can exist in either of two structural forms, one that binds DNA and one that does not. Small molecules that bind to the regulatory proteins determine which structural form they adopt. The pClone Red system could be used to interrogate the sequence requirements for repressor or activator binding, or the effect of -35 region or -10 region sequences on regulation.

We have extended our pClone Red approach to the design and construction of systems for mutational analysis of other aspects of the regulation of bacterial gene expression. The tClone Red plasmid employs GGA to easily clone transcriptional terminators found at the ends of bacterial genes. Terminators function by the formation of base-paired stem and loop structures in the RNA that destabilize its interaction with the DNA template, causing RNA polymerase to terminate transcription (Cambray et al., 2013). Terminators are also found upstream of genes as part of riboswitches, which can control gene expression by adopting alternative RNA folding structures. Both terminators and riboswitches are useful in synthetic biology (Ceres, Garst, Marcano-Velázquez, & Batey, 2013). The tClone Red plasmid permits the study of structure/function relationships in terminators and riboswitches. We have also designed and built rClone Red, which allows the study of ribosome binding sequences, or RBSs. RBSs are found about 6 bases upstream of the start codon and control the initiation of translation by base pairing with the 3' end of the 16S ribosomal RNA of the small ribosomal subunit (Laursen, Sørensen, Mortensen, & Sperling-Petersen, 2005). In addition, riboswitches can use RBSs, adopting a structure that sequesters them or makes them available. Collections of RBSs with various strengths and riboswitches that are controlled by various metabolites are very useful to synthetic biologists (Khalil & Collins, 2010). The rClone Red plasmid allows the production of these collections as well as mutational analysis of both RBSs and riboswitches.
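The RBS-ribosome interaction that rClone Red probes can be checked computationally: an RBS initiates translation well when it is Watson-Crick complementary, in antiparallel orientation, to the anti-Shine-Dalgarno sequence of the 16S rRNA. The sketch below tests that complementarity using the canonical Shine-Dalgarno and anti-SD sequences, assumed here for illustration.

```python
# Watson-Crick pairing rules for RNA
PAIRS = {"A": "U", "U": "A", "G": "C", "C": "G"}

def pairs_antiparallel(rbs, anti_sd):
    """True if every base of the RBS pairs with the anti-SD sequence
    read in the opposite direction (RNA duplexes are antiparallel)."""
    return len(rbs) == len(anti_sd) and all(
        PAIRS[b] == p for b, p in zip(rbs, reversed(anti_sd)))

# Canonical Shine-Dalgarno sequence vs. the E. coli anti-SD (both 5'->3'):
print(pairs_antiparallel("AGGAGG", "CCUCCU"))  # -> True
```

Scoring the length of the longest such complementary run, rather than requiring a perfect match, would give a rough proxy for RBS strength across a mutant library.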

Acknowledgements

We would like to thank Dr. Jay Meyers of Saint Joseph Central High School for his support and Dr. A. Malcolm Campbell of Davidson College for his advice on the experimental approach and helpful comments on the manuscript. Support from National Science Foundation (http://www.nsf.gov/) RUI grant MCB-1329350 to Missouri Western State University is gratefully acknowledged.

References

Beales, N. (2004). Adaptation of Microorganisms to Cold Temperatures, Weak Acid Preservatives, Low pH, and Osmotic Stress: A Review. Comprehensive Reviews in Food Science and Food Safety, 3(1), 1–20. http://doi.org/10.1111/j.1541-4337.2004.tb00057.x

Cambray, G., Guimaraes, J. C., Mutalik, V. K., Lam, C., Mai, Q.-A., Thimmaiah, T., . . . & Endy, D. (2013). Measurement and Modeling of Intrinsic Transcription Terminators. Nucleic Acids Research, 41(9), 5139–5148. http://doi.org/10.1093/nar/gkt163

Campbell, A. M., Eckdahl, T., Cronk, B., Andresen, C., Frederick, P., Huckuntod, S., . . . & Yuan, J. (2014). pClone: Synthetic Biology Tool Makes Promoter Research Accessible to Beginning Biology Students. CBE Life Sciences Education, 13(2), 285–296. http://doi.org/10.1187/cbe.13-09-0189

Ceres, P., Garst, A. D., Marcano-Velázquez, J. G., & Batey, R. T. (2013). Modularity of select riboswitch expression platforms enables facile engineering of novel genetic regulatory devices. ACS Synthetic Biology, 2(8), 463–472. http://doi.org/10.1021/sb4000096

Cho, I., & Blaser, M. J. (2012). The human microbiome: at the interface of health and disease. Nature Reviews Genetics, 13(4), 260–270. http://doi.org/10.1038/nrg3182

Davis, J. H., Rubin, A. J., & Sauer, R. T. (2011). Design, construction and characterization of a set of insulated bacterial promoters. Nucleic Acids Research, 39(3), 1131–1141. http://doi.org/10.1093/nar/gkq810

De Boer, H. A., Comstock, L. J., & Vasser, M. (1983). The tac promoter: A functional hybrid derived from the trp and lac promoters. Proceedings of the National Academy of Sciences of the United States of America, 80(1), 21–25.

De Mey, M., Maertens, J., Lequeux, G. J., Soetaert, W. K., & Vandamme, E. J. (2007). Construction and model-based analysis of a promoter library for E. coli: an indispensable tool for metabolic engineering. BMC Biotechnology, 7, 34. http://doi.org/10.1186/1472-6750-7-34

Eckdahl, T. T., Campbell, A. M., Heyer, L. J., Poet, J. L., Blauch, D. N., Snyder, . . . Whitesides, E. T. (2015). Programmed evolution for optimization of orthogonal metabolic output in bacteria. PLoS One, 10(2), e0118322. http://doi.org/10.1371/journal.pone.0118322

Estrem, S. T., Gaal, T., Ross, W., & Gourse, R. L. (1998). Identification of an UP element consensus sequence for bacterial promoters. Proceedings of the National Academy of Sciences of the United States of America, 95(17), 9761–9766.

Gross, C. A., Chan, C., Dombroski, A., Gruber, T., Sharp, M., Tupy, J., & Young, B. (1998). The Functional and Regulatory Roles of Sigma Factors in Transcription. Cold Spring Harbor Symposia on Quantitative Biology, 63, 141–156. http://doi.org/10.1101/sqb.1998.63.141

Gruber, T. M., & Gross, C. A. (2003). Multiple Sigma Subunits and the Partitioning of Bacterial Transcription Space. Annual Review of Microbiology, 57(1), 441–466. http://doi.org/10.1146/annurev.micro.57.030502.090913

Gurdon, J. B., & Melton, D. A. (2008). Nuclear Reprogramming in Cells. Science, 322(5909), 1811–1815. http://doi.org/10.1126/science.1160810

Harley, C. B., & Reynolds, R. P. (1987). Analysis of E. coli promoter sequences. Nucleic Acids Research, 15(5), 2343–2361.

Keasling, J. D. (2010). Manufacturing molecules through metabolic engineering. Science, 330(6009), 1355–1358. http://doi.org/10.1126/science.1193990

Khalil, A. S., & Collins, J. J. (2010). Synthetic biology: applications come of age. Nature Reviews Genetics, 11(5), 367–379. http://doi.org/10.1038/nrg2775

Kobayashi, M., Nagata, K., & Ishihama, A. (1990). Promoter selectivity of Escherichia coli RNA polymerase: effect of base substitutions in the promoter −35 region on promoter strength. Nucleic Acids Research, 18(24), 7367–7372. http://doi.org/10.1093/nar/18.24.7367

Laursen, B. S., Sørensen, H. P., Mortensen, K. K., & Sperling-Petersen, H. U. (2005). Initiation of Protein Synthesis in Bacteria. Microbiology and Molecular Biology Reviews, 69(1), 101–123. http://doi.org/10.1128/MMBR.69.1.101-123.2005

Lisser, S., & Margalit, H. (1993). Compilation of E.coli mRNA promoter sequences. Nucleic Acids Research, 21(7), 1507–1516. http://doi.org/10.1093/nar/21.7.1507

López-Bigas, N., & Ouzounis, C. A. (2004). Genome-wide identification of genes likely to be involved in human genetic disease. Nucleic Acids Research, 32(10), 3108–3114. http://doi.org/10.1093/nar/gkh605

MIT Working Group. (2005). Registry of Standard Biological Parts. Retrieved September 2, 2015, from http://partsregistry.org/Main_Page

Ponnambalam, S., Webster, C., Bingham, A., & Busby, S. (1986). Transcription initiation at the Escherichia coli galactose operon promoters in the absence of the normal -35 region sequences. Journal of Biological Chemistry, 261(34), 16043–16048.

Ross, W., Aiyar, S. E., Salomon, J., & Gourse, R. L. (1998). Escherichia coli Promoters with UP Elements of Different Strengths: Modular Structure of Bacterial Promoters. Journal of Bacteriology, 180(20), 5375–5383.

Tao, H., Guo, D., Zhang, Y., Deng, Z., & Liu, T. (2015). Metabolic engineering of microbes for branched-chain biodiesel production with low-temperature property. Biotechnology for Biofuels, 8, 92. http://doi.org/10.1186/s13068-015-0270-7

Waterman, M. S., Arratia, R., & Galas, D. J. (1984). Pattern recognition in several sequences: Consensus and alignment. Bulletin of Mathematical Biology, 46(4), 515–527. http://doi.org/10.1007/BF02459500

Weber, E., Gruetzner, R., Werner, S., Engler, C., & Marillonnet, S. (2011). Assembly of designer TAL effectors by Golden Gate cloning. PLoS One, 6(5), e19722. http://doi.org/10.1371/journal.pone.0019722


A Meta-Analysis of the Effectiveness of Trauma-Focused Cognitive-Behavioral Therapy and Play Therapy for Child Victims of Abuse


Abstract

Because of their widespread use, trauma-focused cognitive-behavioral therapy (TF-CBT) and play therapy (PT) were selected as treatments to be included in a meta-analytic study examining their effectiveness in aiding victims of child abuse. Studies analyzing the effectiveness of TF-CBT or PT were found using Academic Search Premier, PsycARTICLES, Psychological and Behavioral Sciences Collection, PsycINFO, Google Scholar, and Digital Dissertation. Ten studies (N = 762) were combined into two separate meta-analyses, one for TF-CBT and one for PT. The studies were coded, and four general outcome categories were identified: externalizing, internalizing, sexual, and parent report. Results showed an overall effect size of d = .212 for TF-CBT and d = .095 for PT. TF-CBT was more effective than PT on all outcome measures except externalizing behavior (PT d = .255 and TF-CBT d = .187). These results indicate that both TF-CBT and PT can be beneficial for victims of child abuse, though TF-CBT usually has more positive outcomes. Future studies should use a wait-list control group and random assignment.

Introduction

Child maltreatment is a serious societal problem. In spite of this fact, researchers and practitioners do not agree on a single definition for child maltreatment, or even what the various forms of child maltreatment consist of (Barnett, Perrin, & Perrin, 2011; Dubowitz, Papas, Black, & Starr, 2002). Typically, the phrase “child maltreatment” is used to group the various forms of child abuse and neglect into one convenient term, with the most common forms of child abuse labels being physical, emotional/psychological, neglect, and sexual abuse. Each year millions of children are victims of this societal problem (Barnett et al., 2011; Trickett, Negriff, Ji, & Peckins, 2011).

While there are no agreed-upon definitions for each type of maltreatment, common themes characterize each. Child physical abuse (CPA) typically involves two standards: (a) the harm standard and (b) the endangerment standard. Under the harm standard, a child is a victim of CPA if the child has observable injuries that last at least 48 hours (Barnett et al., 2011). The endangerment standard requires victims to be deemed at substantial risk of injury or endangerment (Barnett et al., 2011). Emotional or psychological abuse damages the child by verbally or emotionally tearing the child down and objectifying the child. This can take many forms, such as spurning, isolating, exploiting, rejecting, denying affection, and corrupting the child (Barnett et al., 2011).

Sexual abuse is very common in America, with one in four females and one in seven males being victims of sexual abuse prior to turning 18 (Misurell, Springer, & Tryon, 2011). Child sexual abuse (CSA) may involve fondling, masturbating, penetrating, vaginal/anal intercourse, taking nude pictures of the child, or even giving pornographic materials to a minor. While we will use this as the definition for CSA, it should be noted that there is no universal definition among experts (Barnett et al., 2011).

Child neglect is the most commonly reported form of child maltreatment (Barnett et al., 2011; Dubowitz, et al., 2002). Neglect varies from the other forms of maltreatment because rather than committing an act against a child (as in abuse), an important need of the child is not being met. Dubowitz et al. (2002) identified three fundamental needs which must be met for children: (a) perceived support from mothers, (b) lack of exposure to family conflict, and (c) early affection from mothers. Treatment for child maltreatment is vital because of the many long-term negative effects for many of the victims (e.g., suicidal ideation, sexual acting out, attachment issues, aggression, and many others; Easton, Coohey, O’leary, Zhang, & Hua, 2011; Hetzel-Riggin, Brausch, & Montgomery, 2007; Nolan et al., 2002; Saywitz, Mannarino, Berliner, & Cohen, 2000; Scott, Burlingame, Starling, Porter, & Lilly, 2003).

Types of Therapy

Millions of children each year are victims of these various forms of child maltreatment. As a result, there have been many forms of treatment developed to help victims of child maltreatment overcome or cope with the negative outcomes of that maltreatment. The two most commonly used types of therapies are play therapy (PT) and cognitive behavioral therapy (CBT; Kaduson, 2011; Misurell et al., 2011; Scott et al., 2003).

Play therapy

Play therapy involves mastering traumatic events, a process known as abreaction. Abreaction requires a reliving and mastering of the traumatic event, resulting in catharsis, defined as a release of affect that brings about relief (Nash & Schaefer, 2011).

Mechanics of play therapy

PT tends to be a long therapy, lasting around 30 sessions (Scott et al., 2003). The usual age for a client is three to 12 years, but using PT to help adolescents and adults has been gaining traction in recent years (Nash & Schaefer, 2011). Therapists can practice PT in almost any setting. According to Nash and Schaefer (2011), the two most important aspects of a play space are predictability and consistency. This enables the children to focus on playing instead of exploring a new setting.

There are three main stages in PT: (a) rapport building, (b) working through, and (c) termination. Rapport building takes place during the first sessions. These sessions focus on building an appropriate therapist-client relationship by learning about the child, learning about his or her experiences, having the child learn about the playroom, and, depending on the therapist's theoretical orientation, teaching the child about the therapy (Nash & Schaefer, 2011). The next phase, working through, is the longest. This is the phase in which most of the therapeutic change occurs. In this phase, play themes become apparent and give clues about the child's inner world. How these themes are utilized in therapy will depend on the theoretical orientation of the therapist; for example, a Jungian therapist will relate the themes to archetypes and a Freudian therapist will relate them to cathected libidinal energies. The final stage, termination, is for the child and family to recognize the changes, make the changes endemic to their family culture, and organize a path for continued improvements (Nash & Schaefer, 2011).

Types of play therapy

PT can be divided into four broad categories based on the theoretical orientation of the therapist: psychodynamic, humanistic, systemic, and emerging models (Gil, 2011; Green, 2011; Kottman, 2011; Levy, 2011; Munns, 2011; Sweeney & Landreth, 2011). Each of these broad categories has many specific manifestations and practices. The different theoretical orientations may impact how therapy is conducted, what therapeutic tools are used, and how the therapist relates to the client (Gil, 2011; Ginsberg, 2011; Green, 2011; Kottman, 2011; Levy, 2011; Munns, 2011; Norton & Norton, 2011; Oaklander, 2011; O'Connor, 2011; Sweeney, 2011; Sweeney & Landreth, 2011; VanFleet, 2011).

It should be noted that many parents are trained to incorporate PT at home. Parents who have received the training to perform PT sessions in their own home have reported an increase in empathy and acceptance toward their children, as well as a reduction in parental stress levels (Nash & Schaefer, 2011). It has been noted that when parents use PT at home, children's behavior, emotional adjustment, and self-concept improve, while anxiety levels decrease (Nolan et al., 2002; Scott et al., 2003).

Cognitive-behavioral therapy

Cognitive-Behavioral Therapy (CBT) is the other most common form of psychotherapy used today, and is supported by a wealth of empirical evidence (Arnkoff & Glass, 1992; Grasso, Marquez, Joselow, & Webb, 2011; Saywitz et al., 2000). Because of the widespread use of CBT, many sub-forms of therapy have developed. Trauma-focused CBT (TF-CBT), which we examine in this meta-analysis, is one of the preferred methods of therapy for victims of trauma (Grasso et al., 2011).

CBT typically requires 12-16 sessions divided into three phases. In the first phase, the child and parent undergo gradual exposure to traumatic content, and the discussion becomes increasingly focused on the child's personal experience in reaction to the stimulus. In the second phase, the child begins to develop a detailed account of the traumatic event, called a "trauma narrative," that functions as a means of therapeutic exposure and helps to facilitate emotional and cognitive processing. The parent meets with the therapist separately until the trauma narrative is complete. If the child is comfortable and the parent ready, the therapist encourages the child to share the narrative with the parent in a conjoined session. In the third phase, treatment focuses on safety skills and future development (Grasso et al., 2011). Together, the sessions are designed to reduce the behavioral and emotional problems that follow childhood trauma exposure, to empower children and parents with a knowledge base essential for therapy, and to provide a skillset that includes education about trauma and posttraumatic stress, treatment rationale, relaxation techniques, emotion identification and regulation skills, and cognitive coping strategies.

History of CBT

CBT blends cognitive and behavioral therapy techniques. Its cognitive strand was fully developed by the 1950s with Albert Ellis's work on cognitive theory and the cognitive revolution, while its behavioral roots reach back through the behavior modification work of Pavlov, investigated further by Watson and Skinner. Built on behavior-therapy techniques that have existed for most of recorded history (e.g., putting a spider in alcohol to deter alcoholics from drinking, giving candy to reward positive behaviors, or spanking when a child hits someone; Glass & Arnkoff, 1992), CBT is used to treat a wide variety of psychopathologies.

Trauma-focused CBT (TF-CBT) is one of the most preferred subtypes of CBT for helping victims of trauma, and many feel that the trauma narrative it employs is helpful for victims of all forms of abuse (Classen et al., 2011; Misurell et al., 2011). TF-CBT adds a trauma narrative to the therapeutic repertoire of CBT. This narrative helps the victim process the trauma and work through the emotions that arise. Additionally, TF-CBT focuses on the trauma and gears all education toward it and similar traumas (Feather & Ronan, 2006).

Methods

To find articles for the meta-analysis, we searched Academic Search Premier, PsycARTICLES, Psychological and Behavioral Sciences Collection, PsycINFO, Google Scholar, and Digital Dissertation for "child," "abuse," "treatment," "outcomes," "play therapy," "trauma focused," "cognitive behavior therapy," "CBT," and "PT" in various combinations, using Boolean operators and truncation (e.g., AND, OR, *). Articles included in the meta-analysis needed to (a) be published after 1980, (b) have a maximum participant age of 18 for those undergoing therapy, (c) contain participants who were victims of physical or sexual abuse, (d) be quantitative and either report effect sizes or contain enough data for effect sizes to be calculated, and (e) report a treatment that was either PT or TF-CBT. All articles that met the inclusion criteria were used in the data analysis. We found 31 effect sizes in ten peer-reviewed articles and two doctoral dissertations that met the inclusion criteria; however, the doctoral dissertations (Reams, 1987; Zion, 1999) had samples identical to some of the peer-reviewed articles. Because they were based on the same studies, and in an effort to avoid duplicate data, we included only the information from the peer-reviewed articles.

Coding Procedures

Table 1 displays some of the important variables that were coded. PT was coded as a zero and TF-CBT was coded as a one. Sexual abuse was coded as a zero and physical abuse was coded as a one.

Table 1: Articles Used in Meta-Analyses and Study.  PT is coded as 0 and TF-CBT is coded as 1. All articles that met inclusion criteria were included in the meta-analysis.


We performed a meta-analysis of effect sizes to combine the various outcome measures used in the articles into five different outcome measures: global, internalizing, externalizing, sexual, and parent report. We decided on these measures because symptoms most often fall under these general categories according to the assessments the researchers gave participants (Carpentier, Silovsky, & Chaffin, 2006; Cohen, Deblinger, Mannarino, & Steer, 2004; Cohen & Mannarino, 1998; Cohen, Mannarino, & Knudsen, 2005; Deblinger, Mannarino, Cohen, Runyon, & Steer, 2011; Feather & Ronan, 2006; Grasso, Marquez, Joselow, & Webb, 2011; Reams & Friedrich, 1994; Reyes & Asbrand, 2005; Scott, Burlingame et al., 2003). A meta-analysis is a procedure in which a set of studies is combined into one larger study through statistical procedures (Cooper, 2010; Lipsey & Wilson, 2001). This permits the calculation of an overall effect size using all of the studies' data, resulting in a more convenient and more objective synopsis of the body of research. In its classical form, researchers conducting a meta-analysis calculate an average effect size weighted by the sample size of each study. This gives greater influence to studies with larger sample sizes, because a larger sample tends to be more generalizable. A researcher conducting a meta-analysis typically tests the previous studies for heterogeneity in their effect sizes in order to determine if there is a systematic relationship between study characteristics (e.g., age of sample, type of intervention, sample size) and the strength of each study's effect size.

When a study offered both a pre-/post-test comparison and another therapy as a comparison group, we included the other-therapy comparison in the meta-analysis rather than the pre-/post-test comparison. When several outcome measures assessing the same type of outcome were used in the same study, their effect sizes were averaged into one effect size for that measure. For example, three effect sizes were calculated from data provided by Reyes and Asbrand (2005) and then averaged into a single sexual-outcome measure. This was done for all overlapping measures in each study prior to any meta-analytic calculations, so that no study would carry disproportionate weight in the final analysis. Separate meta-analyses were conducted for PT and TF-CBT because of their theoretical and practical differences.
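The two steps just described, averaging overlapping measures within each study and then weighting each study's effect size by its sample size, can be sketched as follows. The study values below are hypothetical illustrations, not the data of the articles analyzed here.

```python
def pooled_effect(studies):
    """studies: list of (list of within-study effect sizes, sample size).
    Overlapping effect sizes are first averaged within each study, then
    the study-level values are weighted by sample size."""
    per_study = [(sum(ds) / len(ds), n) for ds, n in studies]
    total_n = sum(n for _, n in per_study)
    return sum(d * n for d, n in per_study) / total_n

# Hypothetical input: one study with three overlapping effect sizes for
# the same outcome type, and two studies with one effect size each.
studies = [([0.30, 0.24, 0.36], 40), ([0.15], 120), ([0.25], 60)]
print(round(pooled_effect(studies), 3))  # -> 0.205
```

Note that the within-study averaging happens before any weighting, mirroring the order of operations described above.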

Results

Ten articles were found to meet the inclusion criteria and therefore were included in the meta-analysis. These consisted of three articles about PT, six about TF-CBT, and one that contained both. Authors of these studies reported 31 effect sizes: 12 effect sizes were for PT outcomes and 23 effect sizes were for TF-CBT outcomes. There were 11 effect sizes for internalizing outcomes (four PT and seven TF-CBT), 11 for externalizing outcomes (four PT and seven TF-CBT), seven for sexual outcomes (two PT and five TF-CBT), and six for parent-reported measures (two PT and four TF-CBT). Table 2 displays the average effect sizes.

Table 2: Effect Sizes Included in this Meta-Analysis. Internalizing measures gain insight into what was going on internally with the clients. External measures give insight into external behaviors. Sexual measures give insight into any sexual dysfunction or sexual activities that are age inappropriate. Parent reports are measures given to parents to understand what they are seeing with their child. Average effect size was found through meta-analytic techniques.


We found that TF-CBT had a global average of d = .212 (95% CI = [0.175, .250]), an internalizing average of d = .229 (95% CI = [0.152, .306]), an externalizing average of d = .187 (95% CI = [0.118, .257]), a sexual average of d = .162 (95% CI = [0.272, .442]), and a parent report average of d = .357 (95% CI = [0.272, .442]). PT had a global average of d = .095 (95% CI = [0.032, .157]), an internalizing average of d = .096 (95% CI = [-0.013, .206]), an externalizing average of d = .255 (95% CI = [0.144, .366]), a sexual average of d = .042 (95% CI = [-0.098, .182]), and a parent report average of d = -0.15 (95% CI = [-0.304, .003]). These average effect sizes are reported in Table 3. The table also shows that seven of the ten overall effect sizes were statistically significant (p < .05). All three effect sizes that were not statistically significant were for PT outcomes.
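Confidence intervals like those reported above follow from the standard large-sample variance approximation for Cohen's d between two independent groups. The group sizes in the sketch below are hypothetical, chosen only to illustrate the calculation; they are not the sample sizes of the studies analyzed here.

```python
import math

def ci95_for_d(d, n1, n2):
    """Approximate 95% CI for Cohen's d from two independent groups
    (large-sample normal approximation to the variance of d)."""
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d - 1.96 * se, d + 1.96 * se

# TF-CBT global d from the text, with hypothetical group sizes:
lo, hi = ci95_for_d(0.212, 100, 100)
```

An interval that excludes zero, as for the TF-CBT global average above, corresponds to statistical significance at p < .05.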

Table 3: Mean Effect Sizes of the Impact of Cognitive Behavioral Therapy (CBT) and Play Therapy (PT). Effect sizes that are statistically significant (p < .05) are in bold. All of the statistically significant effect sizes indicate positive results for those measures.


Analysis of Homogeneity

In accordance with standard practice (Cooper, 2010; Lipsey & Wilson, 2001), we performed a test of homogeneity of effect sizes. The effect sizes for the TF-CBT analysis were all homogeneous, with Qglobal = 11.173 (df = 22), Qinternalizing = 1.01 (df = 6, p = .985), Qexternalizing = 2.476 (df = 6, p = .871), Qsexual = .362 (df = 4, p = .959), and Qparent = 3.746 (df = 3, p = .290). This shows that all of the effect sizes are statistically equal; we therefore deemed it unnecessary to examine whether any study characteristics had an impact on the effect sizes. Figures 1 and 2 provide a visual representation of this homogeneity by displaying the effect sizes and CIs for all outcome measures for both PT and TF-CBT.
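A Q statistic is the weighted sum of squared deviations of the study effect sizes from the pooled mean, and under the null hypothesis of homogeneity it follows a chi-square distribution with k − 1 degrees of freedom. The sketch below implements both pieces in pure Python; the chi-square survival function has a simple closed form when the degrees of freedom are even, as they are for the TF-CBT internalizing and externalizing tests reported above, and plugging in the reported Q values reproduces the reported p-values.

```python
import math

def cochran_q(effects, weights):
    """Cochran's Q: weighted sum of squared deviations from the pooled mean."""
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    return sum(w * (d - pooled) ** 2 for w, d in zip(weights, effects))

def chi2_sf_even_df(q, df):
    """P(X >= q) for a chi-square variable with even df:
    exp(-q/2) * sum_{j=0}^{df/2 - 1} (q/2)^j / j!"""
    half = q / 2.0
    return math.exp(-half) * sum(half ** j / math.factorial(j)
                                 for j in range(df // 2))

# Reported TF-CBT internalizing result: Q = 1.01 with df = 6.
print(round(chi2_sf_even_df(1.01, 6), 3))  # -> 0.985
```

A large p-value, as here, means the deviations among effect sizes are no bigger than sampling error alone would produce, which is the criterion for declaring the set homogeneous.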


Figure 1. Effect sizes and 95% CI’s for play therapy, all outcomes. All of the effect sizes are homogeneous indicating that the effect sizes were statistically comparable. Horizontal line represents mean effect size (d = 0.095).
Figure 2. Effect sizes and 95% CI’s for trauma focused cognitive-behavior therapy, all outcomes. All of the effect sizes are homogeneous indicating that the effect sizes were statistically comparable. Horizontal line represents mean effect size (d = 0.212).

The PT results were more heterogeneous, with Qglobal = 38.789 (df = 11, p < .001), Qinternalizing = 6.664 (df = 3, p = .083), Qexternalizing = 7.903 (df = 3, p = .048), Qsexual = 1.23 (df = 1, p = .267), and Qparent = 3.505 (df = 1, p = .061). These results indicate that the effect sizes are not statistically similar, but this is due entirely to the Carpentier et al. (2006) study. Carpentier and colleagues compared PT to TF-CBT and found that TF-CBT was more effective in helping the participants. This caused the effect sizes for PT to be negative, skewing the distribution of effect sizes in the PT results. When the Carpentier et al. (2006) article was eliminated from the results, all of the PT effect sizes were homogeneous. Because all heterogeneity was due to a single article, we did not find it necessary to investigate the causes of the heterogeneity further. Supplemental figures that display the effect sizes for each of the outcome variables for both TF-CBT and PT are available from the journal's online archive.

Discussion

In this study we used meta-analytic techniques to calculate and analyze the effect sizes of the results of studies pertaining to the use of TF-CBT and PT as treatments for child victims of abuse. We found that participants in both PT and TF-CBT groups had more positive outcomes at the end of therapy than the control or comparison groups (e.g., other-therapy comparisons, pretest/posttest, etc.). Additionally, we found that those who went through TF-CBT treatment improved more across all measures, except externalizing outcomes, than those who went through PT treatment. This is supported by the 2006 Carpentier et al. article, which showed that TF-CBT was more effective than PT, producing more positive outcomes for participants.

We were not surprised that clients in PT treatments showed more improvement in externalizing outcomes than clients in TF-CBT treatments. This is because PT relies on play as the mode for what is discussed in therapy (Nash & Schaefer, 2011). External behaviors may be more accessible to the therapist conducting PT, because the PT therapist is able to see maladaptive and abnormal behaviors in a play setting. For example, difficulty sharing, violent behaviors, or withdrawal may manifest as a child plays and may be easier for a therapist to target. Another factor that may be involved in this improvement could be that the play therapist sets limits on the child in response to externalizing behaviors. According to Nash and Schaefer (2011), these behaviors include unsafe behaviors, physical aggression, and socially unacceptable behaviors, all of which were classified as externalizing behaviors in this meta-analysis.

Homogeneity of Effect Sizes

A surprising finding was the homogeneity of the effect sizes, especially because in many research studies pretest/posttest designs tend to show larger effect sizes than studies with control groups (Kosters, Burlingame, Nachtigall, & Strauss, 2006). The null hypothesis of homogeneous effect sizes was retained for the TF-CBT studies, indicating that the differences among effect sizes could be due solely to sampling error. There was heterogeneity among the effect sizes for PT, but this was due to a single study (Carpentier et al., 2006). Because heterogeneity was largely not present, it was not necessary to conduct analyses of potential moderator variables. This finding is unusual because heterogeneity among effect sizes is often found in meta-analyses (Cooper, 2010).

The homogeneous effect sizes in this meta-analysis have three possible explanations. First, sampling error may indeed explain the variation among the effect sizes, with both therapies being effective across different age groups, populations, genders, and ethnicities. Second, the homogeneity may be due to our decision to average effect sizes within studies if they measured the same type of outcome (e.g., externalizing outcomes, sexual outcomes). This may reduce the variability of effect sizes and make the null hypothesis of homogeneity harder to reject. Third, a methodological artifact may arise from how the original studies were designed. Most, if not all, of the studies in this meta-analysis were designed and conducted by researchers who are practitioners or proponents of the therapies under investigation (Cohen & Mannarino, 2008). This may lead the original authors to inadvertently design studies that are simply more likely to show benefits of TF-CBT and PT. This would be a form of expectancy effect (Neuman, 2012).

Limitations

As with all meta-analyses, the validity of this study is only as strong as the studies included for analysis; “we may not like all of the ingredients that go into the sausage, but the [meta-analytic] chef can only work with the ingredients provided by the literature” (Thompson & Vacha-Haase, 2000). The total number of articles included in the meta-analysis is small due to the focused nature of this study. Many articles are available about PT, TF-CBT, and child abuse victims, but few empirical articles are available about the effectiveness of these therapies with child survivors of abuse. The small number of studies was further reduced because TF-CBT and PT were analyzed in separate meta-analyses.

Future Research

Because studies performed with a wait-list control group were completely absent from the literature, we were missing an ingredient in this meta-analytic “sausage.” The original authors of the articles in this meta-analysis always compared the TF-CBT or PT groups to either the same clients’ baseline scores of the outcomes or another group in a different therapy. This precludes the possibility of studying the absolute effectiveness of either therapy. Because sometimes, as the old proverb says, “time heals all wounds,” it is possible that the outcomes observed in this meta-analysis are not any more positive than what would be observed if the clients were on a wait-list. Conducting a true experiment (an experiment that requires a treatment and random assignment) with a pre- and posttest and a wait-list control group would provide the most rigorous results. However, not providing therapy to those who are suffering may be considered unethical for many therapists because early treatment is associated with greater positive outcomes for the client (Hetzel-Riggin et al., 2007).

Child victims of any form of trauma deserve the best treatments available to them, but the research on these therapies is not yet rigorous enough. TF-CBT and PT are the most widely used therapies for aiding this group of clients (Kaduson, 2011; Misurell et al., 2011; Scott et al., 2003), but we were surprised by the lack of empirically based studies supporting treatments for child victims of abuse. Additionally, few studies reported the average age at which the abuse occurred, the length of the abuse, its severity, or even the victim’s relationship to the abuser. All of these variables may influence the client’s recovery. If researchers reported these variables, we might be able to examine, for example, whether the length of abuse impacts the effectiveness of a treatment in aiding the victim’s recovery. Logically, one would hypothesize that the more severe the abuse, the more severe the damage to the victim, and therefore the greater the impact on the treatment path; but if this information is not reported, then such relationships cannot be discovered.

Conclusion

Based on the results of this meta-analysis, the idea that neither TF-CBT nor PT, when administered correctly, harms clients is strongly supported. Both forms of therapy produce more favorable results than the comparison groups. Because PT seems to have a larger effect on externalizing problems, it may be of more use with children who have severe externalizing behavioral problems. However, if the child is having additional problems, then TF-CBT may be more effective. This is important information for those who regularly work with abused children.

We recommend several changes to how research on this sensitive population is conducted and reported in the future. First, researchers should collect and report data about child clients’ trauma etiology and any past history of abuse, including its length and the relationship to the abuser. Second, when ethically possible, researchers should conduct true experiments with a wait-list control group. Finally, given the relatively small combined sample size in this meta-analysis (n = 762), we urge other researchers to consider replication studies (Makel & Plucker, 2014; Warne, 2014). Replication would allow researchers to mitigate any methodological artifacts in previous studies and to demonstrate the impact of treatment across several groups.

Acknowledgements

This research was funded by a grant awarded by Utah Valley University’s Student Undergraduate Research Fellowship (SURF). This article is based on the first author’s senior honors thesis. We acknowledge the feedback and support of Matthew Draper, Steven Clark, and Kate McPherson.

References

References marked with an asterisk indicate studies included in the meta-analysis.

Arnkoff, D. B., & Glass, C. R. (1992). Cognitive therapy and psychotherapy integration. In D. K. Freedheim (Ed.), History of psychotherapy (pp. 657-694). Washington, DC: American Psychological Association.

Barnett, O. W., Miller-Perrin, C. L., & Perrin, R. D. (2011). Family violence across the lifespan an introduction (3rd ed.). Thousand Oaks, CA: Sage.

*Carpentier, M. Y., Silovsky, J. F., & Chaffin, M. (2006). Randomized trial of treatment for children with sexual behavior problems: Ten-year follow-up. Journal of Consulting and Clinical Psychology, 74(3), 482-488.

Classen, C. C., Cavanaugh, C. E., Kaupp, J. W., Aggarwal, R., Palesh, O. G., Koopman, C., Kraemer, H. C., & Spiegel, D. (2011). A comparison of trauma-focused and present-focused group therapy for survivors of childhood sexual abuse: A randomized controlled trial. Psychological Trauma: Theory, Research, Practice, and Policy, 3(1), 84-93.

*Cohen, J. A., Deblinger, E., Mannarino, A. P., & Steer, R. A. (2004). A multisite, randomized, controlled trial for children with sexual abuse-related PTSD symptoms. Journal of the American Academy of Child & Adolescent Psychiatry, 43(4), 393-402.

Cohen, J. A., & Mannarino, A. P. (2008). Trauma-focused cognitive behavioral therapy for children and parents. Child and Adolescent Mental Health, 13(4), 158-162.

*Cohen, J.A., & Mannarino, A. P. (1998). Interventions for sexually abused children: Initial treatment outcome findings. Child Maltreatment, 3(1), 17-26.

*Cohen, J. A., Mannarino, A. P., & Knudsen, K. (2005). Treating sexually abused children: 1 year follow-up controlled trial. Child Abuse & Neglect, 29(2), 135-145.

Cooper, H. (2010). Research synthesis and meta-analysis. Thousand Oaks, CA: Sage.

*Deblinger, E., Mannarino, A. P., Cohen, J. A., Runyon, M. K., & Steer, R. A. (2011). Trauma-focused cognitive behavioral therapy for children: Impact of trauma narrative and treatment length. Depression and Anxiety, 28(1), 67-75.

Dubowitz, H., Newton, R. R., Litrownik, A. J., Lewis, T., Briggs, E. C., Thompson, R., English, D., Lee, L., & Feerick, M. M. (2005). Examination of a concept model of child neglect. Child Maltreatment, 10(2), 173-189.

Dubowitz, H., Papas, M. A., Black, M. M., & Starr, R. H. (2002). Child neglect: Outcomes in high-risk urban preschoolers. Pediatrics, 109(6), 1100-1107.

Easton, S. D., Coohey, C., O’leary, P., Zhang, Y., & Hua, L. (2011). The effect of sexual abuse on psychosexual functioning during adulthood. The Journal of Family Violence, 26(1), 41-50.

*Feather, J. S., & Ronan, K. R. (2006). Trauma-focused cognitive behavioural therapy for abused children with posttraumatic stress disorder: A pilot study. New Zealand Journal of Psychology, 35(3), 132-145.

Gil, E. (2011). Family play therapy: Igniting creative energy, valuing metaphors, and making changes from the inside out. In C. Schaefer (Ed.), Foundations of play therapy (pp. 207-225). Hoboken, NJ: John Wiley & Sons.

Ginsberg, B. G. (2011). Congruence in nondirective play and filial therapy: Response to Ryan and Courtney. International Journal of Play Therapy, 20(3), 109-123.

Glass, C. R., & Arnkoff, D. B. (1992). Behavior therapy. In D. K. Freedheim (Ed.), History of psychotherapy (pp. 587-656). Washington, DC: American Psychological Association.

*Grasso, D. J., Marquez, Y., Joselow, B., & Webb, C. (2011). Evidence-based case study: Trauma-focused cognitive behavioral therapy of a child with posttraumatic stress disorder. Psychotherapy, 48(2), 188-197.

Green, E. J. (2008). Reenvisioning Jungian analytical play therapy with child sexual assault survivors. International Journal of Play Therapy, 17(2), 102-121.

Green, E. J. (2011). Jungian analytical play therapy. In C. Schaefer (Ed.), Foundations of play therapy (pp. 61-85). Hoboken, NJ: John Wiley & Sons.

Hetzel-Riggin, M. D., Brausch, A. M., & Montgomery, B. S. (2007). A meta-analytic investigation of therapy modality outcomes for sexually abused children and adolescents: An exploratory study. Child Abuse & Neglect, 31(2), 125-141.

Kaduson, H. G. (2011). Release play therapy. In C. Schaefer (Ed.), Foundations of play therapy (pp. 105-126). Hoboken, NJ: John Wiley & Sons.

Kosters, M., Burlingame, G. M., Nachtigall, C., & Strauss, B. (2006). A meta-analytic review of the effectiveness of inpatient group psychotherapy. Group Dynamics: Theory, Research, and Practice, 10(2), 146-163.

Levy, A. J. (2011). Psychoanalytic approaches to play therapy. In C. Schaefer (Ed.), Foundations of play therapy (pp. 43-59). Hoboken, NJ: John Wiley & Sons.

Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. Thousand Oaks, CA: Sage.

Makel, M. C., & Plucker, J. A. (2014). Facts are more important than novelty: Replication in the education sciences. Educational Researcher, 43(6), 304-316.

Misurell, J. R., Springer, C., & Tryon, W. W. (2011). Game-based cognitive-behavioral therapy (GB-CBT) group program for children who have experienced sexual abuse: A preliminary investigation. Journal of Child Sexual Abuse, 20(1), 14-36.

Munns, E. (2011). Theraplay: Attachment-enhancing play therapy. In C. Schaefer (Ed.), Foundations of play therapy (pp. 275-296). Hoboken, NJ: John Wiley & Sons.

Nash, J. B., & Schaefer, C. E. (2011). Play therapy: Basic concepts and practices. In C. Schaefer (Ed.), Foundations of play therapy (pp. 3-13). Hoboken, NJ: John Wiley & Sons.

Neuman, W. L. (2012). Basics of social research: Qualitative and quantitative approaches (3rd ed.). Upper Saddle River, NJ: Pearson.

Nolan, M., Carr, A., Fitzpatrick, C., O’Flaherty, A., Keary, K., Turner, R., O’Shea, D., Smyth, P., & Tobin, G. (2002). A comparison of two programmes for victims of child sexual abuse: A treatment outcome study. Child Abuse Review, 11(2), 103-123.

Norton, C. C., & Norton, B. E. (2011). Experiential play therapy. In C. Schaefer (Ed.), Foundations of play therapy (pp. 187-203). Hoboken, NJ: John Wiley & Sons.

Oaklander, V. (2011). Gestalt play therapy. In C. Schaefer (Ed.), Foundations of play therapy (pp. 171-186). Hoboken, NJ: John Wiley & Sons.

O’Connor, K. O. (2011). Ecosystemic play therapy. In C. Schaefer (Ed.), Foundations of play therapy (pp. 253-272). Hoboken, NJ: John Wiley & Sons.

Reams, R. (1987). The efficacy of time-limited play therapy with maltreated preschoolers (Unpublished doctoral dissertation). University of Washington, Seattle, WA.

*Reams, R., & Friedrich, W. (1994). The efficacy of time-limited play therapy with maltreated preschoolers. Journal of Clinical Psychology, 50(6), 889-899.

*Reyes, C. J., & Asbrand, J. P. (2005). A longitudinal study assessing trauma symptoms in sexually abused children engaged in play therapy. International Journal of Play Therapy, 14(2), 25-47.

Saywitz, K. J., Mannarino, A. P., Berliner, L., & Cohen, J. A. (2000). Treatment for sexually abused children and adolescents. American Psychologist, 55(9), 1040-1049.

*Scott, T. A., Burlingame, G., Starling, M., Porter, C., & Lilly, J. P. (2003). Effects of individual client-centered play therapy on sexually abused children’s mood, self-concept, and social competence. International Journal of Play Therapy, 12(1), 7-30.

Sweeney, D. S. (2011). Group play therapy. In C. Schaefer (Ed.), Foundations of play therapy (pp. 227-252). Hoboken, NJ: John Wiley & Sons.

Sweeney, D. S., & Landreth, G. L. (2011). Child-centered play therapy. In C. Schaefer (Ed.), Foundations of play therapy (pp. 129-152). Hoboken, NJ: John Wiley & Sons.

Swenson, C. C., Schaeffer, C. M., Henggeler, S. W., Faldowski, R., & Mayhew, A. M. (2010). Multisystemic therapy for child abuse and neglect: A randomized effectiveness trial. Journal of Family Psychology, 24(4), 497-507.

Tighe, A., Pistrang, N., Baruch, G., Casdagli, L., & Butler, S. (2012). Multisystemic therapy for young offenders: Families’ experiences of therapeutic processes and outcomes. Journal of Family Psychology, 26(2), 187-197.

Thompson, B., & Vacha-Haase, T. (2000). Psychometrics is datametrics: The test is not reliable. Educational and Psychological Measurement, 60(2), 174-195.

Trickett, P. K., Negriff, S., Ji, J., & Peckins, M. (2011). Child maltreatment and adolescent development. Journal of Research on Adolescence, 21(1), 3-20.

United States Department of Health and Human Services. (2006). Child maltreatment 2004: Reports from the states to the National Child Abuse and Neglect Data Systems—National statistics on child abuse and neglect.

VanFleet, R. (2011). Filial therapy: Strengthening family relationships with the power of play. In C. Schaefer (Ed.), Foundations of play therapy (pp. 153-169). Hoboken, NJ: John Wiley & Sons.

Warne, R. T. (2014). Two additional suggested reforms to encourage replication studies in educational research. Educational Researcher, 43(9), 465.

Zion, T. (1999). Effects of individual client-centered play therapy on sexually-abused children’s mood, self-concept, and social competence (Unpublished doctoral dissertation). Brigham Young University, Provo, UT.

Access Full PDF: A Meta-Analysis of the Effectiveness of Trauma-Focused Cognitive-Behavioral Therapy and Play Therapy for Child Victims of Abuse

Bohr Revisited: Model and Spectral Lines of Helium

Abstract | Introduction | Helium Plus (He+) | Orbital Energy of Helium |Spectral Equation for Helium’s S Orbitals | Ionization Energy of Helium | Analysis | Results | Discussion |Acknowledgements | References |PDF

Abstract

Quantum and atomic physics can often be abstract and difficult for students to learn. Derived here is a simple model for helium and two-electron systems that may make some concepts, such as screening, easier for students to understand. We extend Bohr’s derivation, assuming a natural electron configuration, to explain some of the spectral lines of helium (He). We assume that the two electrons remain 180° apart in a circular orbit due to their mutual repulsion. The spectral lines calculated, namely the specific wavelengths of light emitted by elements when they are energized, correspond to an assumed scenario in which the two electrons make orbital transitions in unison. In this model we calculate an energy prediction for the un-ionized helium atom. We then predict the first and second ionization energies. The error between experimental and predicted ionization energies is about five percent, indicating that our model is reasonably accurate within the aforementioned parameters.

Introduction

In 1913, Niels Bohr introduced the Nobel Prize-winning quantized version of the Rutherford model of hydrogen. The Bohr model supposed that electrons orbit the nucleus in rigid orbitals or paths. As energy was added to the atom, the electrons made transitions between orbitals. Energy would be released from the atom as light, and the wavelength of that light differed depending on the orbitals involved in the electron’s quantum transition. Bohr showed that this model gave quantitatively correct results by deriving the Rydberg Formula, which gives the wavelength of light emitted from the atom depending on the orbital transitions made by the electron (Bohr, 1913). Though the Bohr model of the atom accurately predicts the quantum nature of single-electron atoms or ions, it is of little use in explaining more complex multi-electron atoms. Consequently, the Bohr model was superseded by modern quantum mechanics, where electrons in atoms are described using complicated wave functions. Here, we re-examine the helium (He) atom through the lens of the Bohr model in a way that may help students more easily understand multi-electron atoms and ions without resorting to complex wave functions. Effects such as screening, the reduction of attraction to the nucleus via mutual repulsion, can then be accounted for simply when the electrons are treated as particles. Subsequently, we explore the model’s use in conjunction with a scenario in which the two electrons transition between orbitals in unison, giving roughly quantitative predictions.

Helium Plus (He+)

For hydrogen-like ions, Bohr demonstrated that the Rydberg Formula for the wavelength of emitted light, namely

$$\frac{1}{\lambda} = R_H Z^2 \left(\frac{1}{n_1^2} - \frac{1}{n_2^2}\right), \qquad (1)$$

where $R_H$ is the Rydberg constant, $Z$ is the atomic number, and $\lambda$ is the wavelength of light emitted, could be explained by the energy differences between the quantized electron orbitals indexed by the quantum numbers $n_1$ and $n_2$. Since the Bohr model applies to hydrogen-like atoms, i.e., single-electron atoms, for the case of He+ we have $Z = 2$ and $R_H Z^2 = 4.38949264 \times 10^7\ \mathrm{m^{-1}}$. We can use this equation to calculate the ionization potential of He+, the amount of energy required to remove its remaining electron from orbit, by combining [Eq. (1)] with the equation for the energy of a photon,

$$E = \frac{hc}{\lambda}, \qquad (2)$$

where $h$ is Planck’s constant and $c$ is the speed of light. We consider the case where $n_2 = \infty$, with $n_2$ being the orbital infinitely far from the nucleus and its energy approximately equal to zero, so that the electron is ionized completely. Substituting [Eq. (1)] into [Eq. (2)] under this condition gives

$$E = \frac{h c R_H Z^2}{n_1^2}. \qquad (3)$$

If the ion is originally in its ground state, $n_1 = 1$, the result is well known and agrees reasonably well with experimental measurements, with an error of 0.02% (Table 3; Charlotte E. M., 1949).
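As a quick numerical check of [Eq. (3)], the ground-state ionization energy of He+ can be evaluated directly from the value of $R_H Z^2$ quoted above (a sketch using standard CODATA values for $h$ and $c$):

```python
# Ionization energy of He+ from Eq. (3): E = h*c*R_H*Z^2 / n1^2, with n1 = 1.
h = 6.62607015e-34       # Planck's constant (J*s)
c = 2.99792458e8         # speed of light (m/s)
eV = 1.602176634e-19     # joules per electron-volt
RHZ2 = 4.38949264e7      # R_H * Z^2 for Z = 2, from the text (1/m)

E = h * c * RHZ2 / eV    # n1 = 1 ionization energy in eV
print(f"He+ ionization energy: {E:.2f} eV")   # close to the measured 54.42 eV
```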

Orbital Energy of Helium

The purpose of this brief note is to show what happens if one assumes that the two electrons in neutral helium are diametrically opposite in a single circular orbit. First, due to screening, the net centripetal force on each of the two electrons is equal to the attractive force $2K_c e^2/r^2$ minus the repulsive force $K_c e^2/(2r)^2$, where $K_c$ is Coulomb’s constant, $e$ is the charge on an electron, and $r$ is the radius of the circular orbit. The factor of two in the attractive force is due to the two protons in the nucleus, and the factor of two in the repulsive force is due to the fact that the electrons are separated by the length of two radii. This gives the electric force,

$$F = \frac{2 K_c e^2}{r^2} - \frac{K_c e^2}{(2r)^2} = \frac{7}{4}\frac{K_c e^2}{r^2}. \qquad (4)$$

Kinetic energy then derives from the centripetal force,

$$F_c = \frac{m v^2}{r}, \qquad (5)$$

with $m$ being the mass of an electron and $v$ being its velocity. For each electron, kinetic energy may be expressed as $KE = \frac{1}{2}mv^2 = \frac{1}{2}F_c r$. The electric force [Eq. (4)] can then be substituted for $F_c$ in [Eq. (5)] to give the kinetic energy as

$$KE = \frac{7}{8}\frac{K_c e^2}{r} \qquad (6)$$

for each electron. The net kinetic energy in the two-electron system must then be $2 \times \frac{7}{8}\frac{K_c e^2}{r} = \frac{7}{4}\frac{K_c e^2}{r}$. Potential energy also results from the two attractive forces and the repulsive force, yielding $PE = -\frac{4 K_c e^2}{r} + \frac{K_c e^2}{2r} = -\frac{7}{2}\frac{K_c e^2}{r}$. The total energy of the atom is the sum of the potential and kinetic energies,

$$E = \frac{7}{4}\frac{K_c e^2}{r} - \frac{7}{2}\frac{K_c e^2}{r} = -\frac{7}{4}\frac{K_c e^2}{r}. \qquad (7)$$

The next step is to derive the radius of the electrons’ orbit using Bohr’s angular momentum postulate with the principal quantum number $n$, namely $mvr = n\hbar$. We solve first for $v$, and then substitute it into $\frac{7}{4}\frac{K_c e^2}{r^2} = \frac{m v^2}{r}$, which is the combination of [Eq. (4)] and [Eq. (5)], to yield the quantized relation $\frac{7}{4}\frac{K_c e^2}{r^2} = \frac{n^2 \hbar^2}{m r^3}$. The radius becomes

$$r = \frac{4 n^2 \hbar^2}{7 K_c e^2 m}. \qquad (8)$$

The radius [Eq. (8)] can then be substituted into the total energy [Eq. (7)] to give the quantized, two-electron energy,

$$E_n = -\frac{49}{16}\frac{K_c^2 e^4 m}{\hbar^2 n^2}, \qquad (9)$$

assuming the two electrons circulate together in a diametrically opposite manner.
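To see what [Eq. (9)] predicts numerically, a short sketch (using standard CODATA constants) evaluates the model’s ground-state energy and compares it with the experimental value, which is the negative of the sum of helium’s two ionization energies, about −79.0 eV:

```python
# Ground-state (n = 1) energy of helium from Eq. (9):
#   E = -(49/16) * Kc^2 * e^4 * m / hbar^2
Kc = 8.9875517923e9        # Coulomb constant (N*m^2/C^2)
e = 1.602176634e-19        # elementary charge (C)
m = 9.1093837015e-31       # electron mass (kg)
hbar = 1.054571817e-34     # reduced Planck constant (J*s)
eV = 1.602176634e-19       # joules per electron-volt

E1 = -(49.0 / 16.0) * Kc**2 * e**4 * m / hbar**2 / eV
print(f"model ground-state energy: {E1:.2f} eV")   # about -83.3 eV

E_exp = -79.0              # experimental value: -(24.59 + 54.42) eV
error = abs((E1 - E_exp) / E_exp) * 100
print(f"error vs. experiment: {error:.1f}%")       # roughly 5.5%
```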

Spectral Equation for Helium’s S Orbitals

To find the wavelength $\lambda$ of light emitted when the two electrons make a simultaneous transition, we can use [Eq. (2)] and [Eq. (9)], along with the difference in orbital energies, $\frac{hc}{\lambda} = E_{n_2} - E_{n_1}$, which, when expanded, comes to the form

$$\frac{1}{\lambda} = \frac{49}{16}\frac{K_c^2 e^4 m}{\hbar^2 h c}\left(\frac{1}{n_1^2} - \frac{1}{n_2^2}\right). \qquad (10)$$

The constant here has a numerical value of approximately $6.72 \times 10^7\ \mathrm{m^{-1}}$ in SI units. As shown in Table 1, [Eq. (10)] gives approximately 5.5% error when compared to experimental values. However, one can use the form described in [Eq. (10)] to fit a series of spectral lines that could appear in helium’s spectrum. This fit can be used to find a Rydberg-type constant, as in [Eq. (1)], that matches the experimental data more accurately, as demonstrated in Figure 1. We can think of this fit as correcting for an adjustment to the Coulombic forces involved, because the actual forces may vary slightly from those assumed.
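The constant in [Eq. (10)] can likewise be evaluated and used to generate a candidate line; the 7→5 simultaneous transition below is just one illustrative choice:

```python
# Spectral constant of Eq. (10): C = (49/16) * Kc^2 * e^4 * m / (hbar^2 * h * c),
# so that 1/lambda = C * (1/n1^2 - 1/n2^2).
import math

Kc = 8.9875517923e9        # Coulomb constant (N*m^2/C^2)
e = 1.602176634e-19        # elementary charge (C)
m = 9.1093837015e-31       # electron mass (kg)
hbar = 1.054571817e-34     # reduced Planck constant (J*s)
h = 2 * math.pi * hbar     # Planck constant (J*s)
c = 2.99792458e8           # speed of light (m/s)

C = (49.0 / 16.0) * Kc**2 * e**4 * m / (hbar**2 * h * c)
print(f"theoretical constant: {C:.4e} 1/m")   # about 6.72e7, vs. the fitted
                                              # 6.3849e7 of Figure 1

n1, n2 = 5, 7              # an example simultaneous transition, n = 7 -> n = 5
lam = 1.0 / (C * (1.0 / n1**2 - 1.0 / n2**2))
print(f"predicted wavelength: {lam * 1e9:.1f} nm")
```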

Ionization Energy of Helium

In this section we derive the equations that predict the ionization energy of neutral helium to helium plus (He+). The ground state energy from [Eq. (9)] is $E_1 = -\frac{49}{16}\frac{K_c^2 e^4 m}{\hbar^2}$. We can calculate the energy required to ionize the first electron by starting with the kinetic energy of one electron [Eq. (6)], $KE = \frac{7}{8}\frac{K_c e^2}{r}$. The potential energy attributable to one electron in the system, counting its attraction to the nucleus and half of the electron-electron repulsion, is $PE = -\frac{2 K_c e^2}{r} + \frac{K_c e^2}{4r} = -\frac{7}{4}\frac{K_c e^2}{r}$. The total energy of one electron is then $E = KE + PE = -\frac{7}{8}\frac{K_c e^2}{r}$. The radius from [Eq. (8)] is then substituted to give

$$E_n = -\frac{49}{32}\frac{K_c^2 e^4 m}{\hbar^2 n^2}. \qquad (11)$$

Analysis

We use [Eq. (10)], which follows this Bohr-like model’s assumptions, to calculate a series of lines in the visible portion of helium’s emission spectrum. The series results from various simultaneous two-electron transitions to the state with principal quantum number n = 5. Percent error is calculated relative to the experimental lines these could represent, shown in Table 1 (Richard & Peter, 2000). We then determine an effective constant consistent with the experimental results, which may be thought of as an adjustment for electron correlation and screening (Figure 1). Subsequently, we show the extent of agreement between the experimental data and [Eq. (10)] using the modified constant derived from Figure 1 (Table 2). The radius of helium in its ground state, i.e., its lowest attainable energy and orbit, calculated from [Eq. (8)] to be 30.24 pm, compares within about two percent with the accepted experimental value of 31 pm (Charlotte E. M., 1949).
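The ground-state radius quoted above follows directly from [Eq. (8)] with n = 1; a sketch of the comparison (the 31 pm reference is the experimental value cited in the text):

```python
# Ground-state radius of the model, Eq. (8) with n = 1:
#   r = 4 * hbar^2 / (7 * Kc * e^2 * m)
Kc = 8.9875517923e9        # Coulomb constant (N*m^2/C^2)
e = 1.602176634e-19        # elementary charge (C)
m = 9.1093837015e-31       # electron mass (kg)
hbar = 1.054571817e-34     # reduced Planck constant (J*s)

r = 4.0 * hbar**2 / (7.0 * Kc * e**2 * m)
print(f"model radius: {r * 1e12:.2f} pm")       # 30.24 pm, as in the text

r_exp = 31e-12             # experimental helium radius cited in the text
error = abs(r - r_exp) / r_exp * 100
print(f"error vs. experiment: {error:.1f}%")    # within a few percent
```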

Results

Table 1: Spectral Series of Visible Light. [Eq. (10)] is used to calculate a series of spectral lines in the visible spectrum that ends on the orbital n1=5 with percent errors calculated respectively (Richard & Peter, 2000).


 

Figure 1: Experimental Constant Adjustment. The adjusted Rydberg-like constant is calculated (63849000 m-1) using the least square method and plotted graphically with experimental data (Richard & Peter, 2000).


 

Table 2: Modified Spectral Series in Visible light. Using the adjusted constant, the series is recalculated and compared to the experimental data again (Richard & Peter, 2000).


 

Table 3: Helium Ionization Energies. The predicted values of ionization energies, from  [Eq. (11)], [Eq. (3)], and [Eq. (10)] respectively, are compared to the experimental results, yielding percent errors (Charlotte E. M., 1949).


Discussion

In this Bohr-like model of the atom, the two electrons in helium are assumed to be 180° from each other and to make quantum transitions concurrently. Though this quasi-classical model cannot explain all of the spectral lines, it does present one possibility for a quantum transition scenario of the S orbitals, i.e., the spherical electron orbitals in an atom. Another possible transition scenario is a model of excited helium where one electron is in the ground state and the other is excited, in which some shielding will take effect, changing the energy emitted as the excited electron transitions to a lower state. One of the main reasons the Bohr and Bohr-like models were abandoned was the discrepancy between experimental data and theoretical predictions. The predicted ionization energy of helium to He+ has a significant percent error when compared to measured results (Charlotte E. M., 1949). This may be because the Bohr model does not account for the quantum spin of the electrons. Deeney and O’Leary (2012) discuss in detail how, in the modern theory, electrons with aligned spins, i.e., the triplet state, would have the lowest attainable energy. However, the Pauli Exclusion Principle forbids this possibility, resulting in a state of higher energy with anti-parallel spins, which in turn yields lower ionization energies, as shown in Table 3. Although the Bohr model has since been superseded by the Schrödinger theory, it is hoped that this modified model can help students understand and conceptualize the quantum effect of screening in multi-electron atoms and ions.

Acknowledgements

The author would like to thank his mentors, Dennis Symalla and Dr. William Schwalm, for their indispensable ideas and support.

References

Bohr, N. (1913). On the constitution of atoms and molecules. Philosophical Magazine, 26, 1.

Charlotte E. M. (1949). Atomic Energy Levels, Vol. 1. Circular of the National Bureau of Standards.

Deeney, F. A., & O’Leary, J. P. (2012). The effects of the Pauli Exclusion Principle in determining the ionization energies of the helium atom and helium-like ions. European Journal of Physics, 33, 667-675.

Gasiorowicz, S. (2003). Quantum Physics (3rd ed.). New York, NY: Wiley.

Richard, P., & Peter, L. (2000). Optical Emission Lines of the Elements. England: Wiley & Sons Ltd.

Zettili, N. (2009). Quantum Mechanics: Concepts and Applications (2nd ed.). New York, NY: Wiley.

Access Full PDF: Bohr Revisited: Model and Spectral Lines of Helium

Gene Therapy: A Toolkit for Targeting Cancer

Abstract | Introduction | Tools for Gene Therapy | Gene Therapy to Target Cancer | Discussion | Conclusions | Acknowledgements | References | PDF

Abstract

Cancer causes one in seven deaths worldwide, making it one of the most important issues in the world of biotechnology today. Current cancer therapies, including chemotherapy and radiotherapy, have severe side effects and often prove ineffective at completely eradicating malignant cells. Therefore, a more selective method of targeting tumour cells must be designed. Gene therapy holds great potential to selectively target cancer cells, allowing the treatment to effectively destroy the cancer while leaving healthy tissues intact. In order to develop a gene therapy treatment, two main obstacles must be overcome: a therapeutic agent must be developed to facilitate genetic changes, and a delivery method must be optimized to insert the therapeutic agent into target cells. Recent advancements in both the design of the therapeutic agent and the delivery method allow changes in both the genome and in gene expression to be performed in the target cells with a high degree of accuracy and efficiency. This review highlights several evolving technologies currently being developed for gene therapy, as well as strategies that could be employed using these technologies to treat cancer. Although not currently in widespread use, gene therapy is extremely promising as a treatment for cancer.

Introduction

Cancer is estimated to cause about one in seven deaths worldwide (American Cancer Society, 2016). This disease is characterized by the transformation of healthy tissue into malignant and invasive tumour tissue that rapidly divides and takes up resources needed by other cells (Hanahan & Weinberg, 2011). Although effective treatments for some forms of cancer are available, the losses are astounding; in 2012 there were over 14 million new cases of cancer and 8.2 million cancer-related deaths (Siegel, Miller, & Jemal, 2015).

There are hallmarks common to cancer tissue which can be used to differentiate malignant cells from healthy ones. These include dysregulation of the cell cycle, of cell growth and differentiation, and of apoptosis (Hanahan & Weinberg, 2011). Beyond its earliest stages, cancer can spread between different parts of the body in a process known as metastasis (Hanahan & Weinberg, 2011). Metastasis adds a layer of complexity to treatment because metastatic cancers are both mobile and heterogeneous. One of the greatest challenges in treating cancer is the extreme heterogeneity of the disease: different types of tumours often display drastically different genetic and phenotypic characteristics. This heterogeneity ultimately makes cancers highly resistant to almost all forms of treatment (Hanahan & Weinberg, 2011).

The goal of cancer therapy is to specifically eliminate cancerous tissue while minimally impacting healthy tissue, and to maintain the effects of therapy over time without harmful side effects. Currently, common treatment options for cancer include chemotherapy, radiotherapy, and surgery. Chemotherapy uses chemical agents to target cells undergoing DNA replication, a strategy which effectively kills rapidly proliferating cells (Urruticoechea et al., 2010). Radiotherapy also targets rapidly growing cells, but uses radiation rather than chemical agents to kill tumour cells. However, tumour cells often prove resistant to chemotherapy and radiotherapy due to their increased viability and growth rate relative to healthy cells (Kaliberov & Buchsbaum, 2012). Surgery involves the removal of malignant tissues, an approach which is very effective at removing large solid tumours but is often limited in treating small or metastatic tumours (Urruticoechea et al., 2010). Clearly, there is a need for new cancer treatments which are highly effective at targeting cancer cells, while being selective enough to leave healthy cells untouched.

Unlike chemotherapy and radiotherapy, gene therapy has the potential to target tumour cells with a high degree of accuracy. However, current gene therapies to treat cancer are still in experimental stages. There are many technologies available, and it can be difficult to distinguish among them and to evaluate the different strategies. This article will review progress in the gene therapy field along with potential applications to cancer.

Genetic Basis of Cancer

The abnormal regulation of cancer cells is caused by mutations in two classes of genes: proto-oncogenes and tumour suppressors. Proto-oncogenes become cancer-causing oncogenes through ‘gain-of-function’ mutations which increase the activity of the gene product to harmful levels by, for example, creating a constitutively active protein. ‘Loss-of-function’ mutations in tumour suppressor genes also facilitate cancer development, as they prevent the gene from producing a functional protein product that contributes to growth regulation. Tumour suppressor genes perform functions such as arresting cell division or initiating apoptosis. As such, loss-of-function mutations in tumour suppressor genes contribute to the uncontrolled proliferation observed in cancer (Hanahan & Weinberg, 2011).

Gene therapy can be used to target both oncogenes and tumour suppressor genes. Treatments for the former seek to counter or inactivate gene expression and to lower gene product formation, while treatments for the latter seek to restore gene expression. Gene therapy has been successful in treating genetic diseases with single-gene defects, including immune disorders (e.g., Wiskott-Aldrich syndrome), blood cell disorders (e.g., beta-thalassaemia) and metabolism disorders (e.g., X-linked adrenoleukodystrophy) (Braun et al., 2014; Cartier et al., 2012; Malik & Arumugam, 2005). Gene therapy can be targeted to both germline and somatic cells; when germline cells are modified, it is possible for the genetic alterations to be passed on to children.

Gene Therapy Strategy

In order to modulate the expression of oncogenes and tumour suppressors, two main challenges must be overcome. The first challenge is to create a therapeutic agent which alters gene expression, and the second challenge is to deliver the desired therapeutic agent to the target cells. In the early years of gene therapy, both of these challenges were met in a trial that tested the use of a retroviral delivery method of a BRCA1 tumour suppressor gene splice variant to treat breast cancer (Tait, Obermiller, Hatmaker, Redlin-Frazier, & Holt, 1999). During Phase I clinical trials, patients showed tumour suppression with highly effective gene transfer and little immune response, particularly in small tumours. However, Phase II patients showed no response to this form of therapy, because they had developed antibodies against the retroviral envelope (Tait et al., 1999). This immune response is a common shortfall of retroviral vectors (Wu & Dunbar, 2011).

Another well-known study used bone marrow cells transduced with a retrovirus containing a therapeutic gene to treat X-linked severe combined immunodeficiency (SCID-X1). Although immune cell count and function seemed to have been normalized in the adolescent patients, four of the nine treated patients developed leukemia in later stages of the treatment (Hacein-Bey-Abina et al., 2008). The cause of this drastic side effect was found to be retroviral-mediated insertion of the therapeutic γC cytokine receptor into an unintended target, which altered a proto-oncogene involved in T cell self-renewal and differentiation (McCormack et al., 2010). Since regulation of T-cell fate was disrupted, the treated cells started to multiply uncontrollably, ultimately leading to cancer.

In order to avoid unintended effects, new methods of gene therapy must target pathogenic mutations in a highly specific manner. Unlike the retroviral insertion strategy used in the SCID-X1 trials, therapeutic genes must be delivered to a specific area of the genome or transcriptome. As gene therapy progresses, safer and more effective therapeutic agents and delivery methods must be explored to avoid the shortfalls of viral vectors outlined above. Determining the appropriate gene targets will require a more comprehensive understanding of different tumours at a molecular level, a challenge outside the scope of this review. Instead, this article focuses on approaches to combat cancer once an important proto-oncogene or tumour suppressor gene has been identified: several promising methods of introducing or silencing target genes, and how these methods may be used in combination to optimize the effectiveness of treatment.

Although the field of gene therapy has advanced rapidly in the past few decades, several issues still hinder the development of effective treatments. These include low efficiency of gene transfer, failure to deliver genes larger than 5 kb, difficulty regulating transgene expression in host cells, and ineffectiveness against autosomal disorders (Yang & Walsh, 2005). This review aims to explain some of the most useful components of the gene therapy toolkit applicable to treating cancers, and to catalogue their strengths along with their shortfalls.

Tools for Gene Therapy

Part 1: Designing the Therapeutic Agent

The first challenge after identifying the target gene is to construct a therapeutic agent that alters its expression. This agent must be designed with three main concerns in mind: specificity, efficiency, and transiency. Specificity refers to the ability of the therapeutic agent to alter the expression of the target gene in the target cells without causing undesirable mutations elsewhere in the genome or in healthy tissues. Efficiency concerns the ability of the therapeutic agent to modify the genome or gene expression at a low dosage. Transiency describes how long the alterations in gene expression will last in the patient. There are advantages and disadvantages to both transient and lasting variations of gene therapy, and the choice of therapeutic strategy should be informed by the individual case; specificity and efficiency, on the other hand, should always be maximized in order to give the best possible outcome.

SmaRT: Gene Silencing through Pre-mRNA

Spliceosome-mediated RNA trans-splicing (SmaRT) is a technique which corrects mRNA after transcription. Trans-splicing occurs through ligation of exons from different transcripts; cis-splicing, by contrast, is the conventional eukaryotic mechanism of splicing a single transcript. Pre-mRNA trans-splicing molecules (PTMs) can be designed to carry a binding domain that targets a specific intron in the normal pre-mRNA (Yang & Walsh, 2005). Together with the spliceosome, the PTM can cause trans-splicing of pre-mRNA with up to 80% efficiency relative to cis-splicing (Yang & Walsh, 2005). SmaRT can be used to repair the mRNA transcripts of aberrant genes, with the added benefit that its effects are relatively reversible. However, designing the PTM is not easy. One approach is to randomly generate a collection of sequences, then use fluorescence-activated cell sorting (FACS) of cells containing these sequences to identify the optimal construct: each candidate PTM carries a portion of the gene encoding green fluorescent protein, so FACS selects for cells in which trans-splicing has succeeded (Yang & Walsh, 2005).
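
The exon-swapping logic of trans-splicing can be pictured with a toy string model; the exon names and splice point below are entirely hypothetical, and real PTM design requires the experimental screening described above.

```python
# Toy string model of spliceosome-mediated RNA trans-splicing (SmaRT).
# Illustrative only: exon labels and the splice point are made up.

def trans_splice(pre_mrna_exons, ptm_exons, splice_after):
    """Replace every exon downstream of index `splice_after` (0-based)
    with the exons carried by the pre-mRNA trans-splicing molecule (PTM)."""
    return pre_mrna_exons[:splice_after + 1] + ptm_exons

# Mutant transcript: exon 3 carries a harmful mutation.
target = ["EX1", "EX2", "EX3mut"]
ptm = ["EX3wt"]  # the PTM supplies a corrected exon 3

repaired = trans_splice(target, ptm, splice_after=1)
print(repaired)  # ['EX1', 'EX2', 'EX3wt']
```

The model captures why the effect is reversible: the genomic DNA is untouched, and only transcripts processed while the PTM is present are corrected.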

TALENs and ZFNs: Site-Specific Modification

While SmaRT targets pre-mRNA or mRNA and is therefore reversible, other methods of gene therapy cause more permanent alterations by modifying the genome itself. Chimeric nucleases are molecules composed of a sequence-specific domain fused to a nonspecific DNA cleavage molecule, and are capable of inducing double-strand breaks (DSBs) at specific sites in the genome (Gaj, Gersbach, & Barbas, 2013). Unlike SmaRT systems, chimeric nucleases cause permanent changes to the DNA of the target cells, which persist long after expression of the nucleases themselves has ceased.

Zinc finger nucleases (ZFNs) and transcription activator-like effector nucleases (TALENs) are two types of chimeric nucleases, each with a distinct class of customizable sequence-specific domain (Gaj et al., 2013). ZFNs contain an array of zinc finger domains: structural motifs built around a zinc ion with a conserved ββα fold. Residues on the surface of each α-helix contact three base pairs in the major groove of DNA, so each zinc finger domain recognizes a nucleotide triplet, and the array as a whole binds a specific sequence with high specificity (Gaj et al., 2013). However, only certain nucleotide triplets have corresponding zinc finger domains, so the number of sequences to which ZFNs can bind is limited (Osborn et al., 2013). TALENs, by contrast, are composed of an array of amino acid repeat domains, each of which recognizes a single base pair; they can therefore be engineered to recognize any DNA sequence (Osborn et al., 2013). However, TALENs are more expensive to make than ZFNs because the amino acid repeat domains are complex and difficult to synthesize (Gaj et al., 2013).
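
The practical consequence of triplet versus single-base recognition can be sketched numerically; the 18-bp site length below is an illustrative assumption, not a fixed design rule.

```python
# Back-of-the-envelope comparison of recognition-domain counts:
# each zinc finger reads a 3-bp triplet, each TALE repeat reads 1 bp.

def zinc_fingers_needed(site_len_bp):
    """Zinc finger arrays read whole triplets."""
    if site_len_bp % 3 != 0:
        raise ValueError("ZF arrays recognize whole 3-bp triplets")
    return site_len_bp // 3

def tale_repeats_needed(site_len_bp):
    """One TALE repeat per base pair."""
    return site_len_bp

site = 18  # hypothetical 18-bp recognition site
print(zinc_fingers_needed(site), tale_repeats_needed(site))  # 6 18
```

The threefold difference in domain count helps explain why TALEN arrays are longer and more laborious to synthesize, even though they can address any sequence.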

Both ZFNs and TALENs can be used to induce DSBs at specific loci with similar frequencies (Hockemeyer et al., 2011). The creation of DSBs can be used to excise sequences from the genome or to insert donor DNA. To remove a sequence (e.g. a harmful oncogene), DSBs can be induced at either end of the sequence and the free ends ligated, eliminating the DNA between the DSBs. To insert a sequence (e.g. a helpful tumour suppressor), the desired DNA sequence can be introduced to the cell along with a chimeric nuclease that makes a single DSB, allowing the donor sequence to be ligated into the DNA. These methods lead to specific and long-lasting genetic modification (Hockemeyer et al., 2011). However, TALENs and ZFNs are both extremely time- and resource-intensive.
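
The two editing modes described above, excision between two DSBs and insertion of donor DNA at a single DSB, can be sketched as simple string operations. This ignores the real complexity of cellular DNA repair (NHEJ/HDR outcomes are far messier), and all sequences are made up.

```python
# Toy simulation of nuclease-mediated editing, treating the genome as a
# string and each chimeric-nuclease cut as a position in that string.

def excise(genome, cut1, cut2):
    """Two DSBs flanking an unwanted sequence; the free ends re-ligate."""
    return genome[:cut1] + genome[cut2:]

def insert(genome, cut, donor):
    """One DSB; a donor sequence is ligated into the break."""
    return genome[:cut] + donor + genome[cut:]

g = "AAAA" + "ONCOGENE" + "TTTT"
print(excise(g, 4, 12))              # AAAATTTT
print(insert("AAAATTTT", 4, "TSG"))  # AAAATSGTTTT
```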

Viral Vectors: High-Level Transgene Expression

Viruses proliferate by using the cellular machinery of host cells for the expression of viral proteins and replication of the viral genome. Engineered viruses present an efficient delivery system (see ‘Delivery Method’ below) as well as a high level of expression of transgenes (Waehler, Russell, & Curiel, 2007). Viral vectors can be designed by replacing portions of the viral DNA with the transgene of interest coupled with a promoter which will allow the transgene to be expressed once inside the host cell (Kootstra & Verma, 2003). To prevent the engineered virus from threatening healthy host cells, all portions of the viral genome which are not essential for infection of the host cells are deleted, including genes encoding the viral capsid and other virulence factors. Viral vectors can be used to induce short-term or long-term expression of transgenes, depending on the type of vector used (Kootstra & Verma, 2003). Retroviral vectors insert transgenes into the host genome, causing long-term expression of the genes. In contrast, adenoviral vectors do not integrate into host DNA, so the viral transgenes are expressed and replicated independently of the host genome. These non-chromosomal pieces of genetic material are referred to as ‘episomes’. Because these episomes degrade over a period of weeks to months (depending on the activity of DNA repair and destruction pathways), adenoviral vectors result in only short-term expression of transgenes (Kootstra & Verma, 2003). Short-term expression of transgenes may be useful in some cases, but cannot permanently correct harmful genetic defects. Long-term genetic modification of the host genome via retroviral vectors is one alternative, but poses significant risks: integration of the transgene into the genome can induce mutations at off-target sites in the genome, a process called ‘insertional mutagenesis’ (Wu & Dunbar, 2011). This can result in harmful side effects, including dysregulation of proto-oncogenes and tumour suppressors, as in the SCID-X1 trials.

CRISPR: Targeted Gene Regulation

Clustered Regularly Interspaced Short Palindromic Repeats (CRISPRs) are a bacterial defense mechanism for destroying foreign DNA (Marraffini & Sontheimer, 2010). CRISPR is currently the most attractive therapeutic agent available for gene editing and silencing in mammalian cells, as it is relatively specific and simple to design compared to other genome editing technologies such as TALENs and zinc finger nucleases. The technology requires only two components, which can be encoded on the same construct and introduced into a variety of hosts: a gene encoding a Cas (CRISPR-associated) endonuclease, which induces a double-stranded break, and a sequence encoding a guide RNA (gRNA), which directs the nuclease to a specific DNA or RNA element. gRNAs can be customized for each target, allowing high versatility. Cas9 is a Cas variant from the bacterium Streptococcus pyogenes (Wilkinson & Wiedenheft, 2014). Cas9 combined with a small gRNA that targets the promoter region of a gene can cause up to 100-fold repression (Qi et al., 2013). Several recent innovations, including converting Cas9 into a nickase and using truncated gRNAs, have further increased the specificity of the CRISPR/Cas9 system (Gori et al., 2015), minimizing the risk of unwanted mutations in the target cells. In addition, the CRISPR/Cas9 system can create targeted modifications at multiple loci using only one transgene (Dow et al., 2015). The specificity, efficiency and versatility of CRISPR all contribute to the vast potential of this technique in clinical applications.
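
As a concrete illustration of gRNA targeting: S. pyogenes Cas9 requires its 20-nt protospacer to sit immediately upstream of an NGG protospacer-adjacent motif (PAM). A minimal forward-strand scan for candidate sites might look like the following; the input sequence is invented, only one strand is checked for brevity, and real guide design also weighs off-target and efficiency scores.

```python
# Scan a DNA string for candidate SpCas9 target sites:
# a 20-nt protospacer immediately followed by an NGG PAM.

import re

def find_spcas9_sites(seq, spacer_len=20):
    """Return (position, protospacer) pairs on the forward strand."""
    pattern = r"(?=([ACGT]{%d})[ACGT]GG)" % spacer_len  # lookahead allows overlaps
    return [(m.start(), m.group(1)) for m in re.finditer(pattern, seq)]

seq = "TTT" + "ACGT" * 5 + "TGG" + "CCC"  # made-up sequence with one site
for pos, spacer in find_spcas9_sites(seq):
    print(pos, spacer)  # 3 ACGTACGTACGTACGTACGT
```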

Part 2: Customizing the Delivery Method

Equally important to selecting the best therapeutic agent out of those available is the task of finding an effective delivery method for introducing each agent. The ideal method must be able to deliver the therapeutic agent to specific tissues where therapy is needed, but must have minimal toxic side effects. The methods explored in this section are summarized in Figure 1.

Liposomes

Liposomes are enclosed phospholipid bilayer structures which can encapsulate oligonucleotides or drugs for delivery. Their size and contents are highly customizable, and they can be targeted to a specific cell type through receptor-mediated endocytosis. Leakage, targeting specificity, half-life, toxicity versus efficacy (the therapeutic index), and success of delivery across the cell membrane are key issues which still need to be overcome in this delivery method (Allen & Cullis, 2013). To overcome the physiological barriers to accessing tumours, the ideal liposome would minimize leakage, release its contents only in malignant tissues, have a long half-life in vivo, be non-toxic to healthy cells, and transfect cancer cells with high efficiency.

Injectable Hydrogels

Hydrogels are synthetic pockets formed by the assembly or crosslinking of hydrophilic polymers, whose pores may be used to carry therapeutic agents. As a delivery system, hydrogels can improve vector transfer into specific cells in vivo because they release their contents at a rate controlled by the speed of diffusion through the hydrogel (Caló & Khutoryanskiy, 2014). Hydrogel encasement increases vector stability, shields vectors from immune effects, and modulates the duration and location of vector delivery (Seidlits, Gower, Shepard, & Shea, 2014). Though their versatility is a strength, the properties of the hydrogel system must be fine-tuned and tested before each application to ensure maximum efficiency; these include, but are not limited to, pore size, shape, charge, pH, biomimetic properties, architecture, and degradation time (Caló & Khutoryanskiy, 2014).
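
Diffusion-controlled release from a matrix is often approximated by the classic Higuchi relation, in which cumulative release grows with the square root of time until the payload is exhausted. The sketch below is purely illustrative: the rate constant is arbitrary, and real hydrogel kinetics depend on the design parameters listed above.

```python
# Illustrative Higuchi-type release profile for a diffusion-controlled
# matrix: cumulative fraction released ~ k * sqrt(t), capped at 1.

import math

def fraction_released(t_hours, k=0.25):
    """Cumulative fraction of payload released after t_hours (0..1)."""
    return min(1.0, k * math.sqrt(t_hours))

profile = [round(fraction_released(t), 2) for t in (0, 1, 4, 9, 16, 25)]
print(profile)  # [0.0, 0.25, 0.5, 0.75, 1.0, 1.0]
```

Tuning pore size or crosslink density effectively changes k, which is how a formulator would slow or speed delivery without altering the payload.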

Viral Vector Delivery Systems

Pathogenic viruses have evolved to efficiently infect, or ‘transduce’, target cells (Kootstra & Verma, 2003). Viruses can be engineered to deliver therapeutic agents such as toxic genes or CRISPR, a strategy which is discussed later in this review. The transduction efficiency of viral delivery systems is very high compared to non-viral methods of gene delivery such as electroporation or tissue particle bombardment, but viruses also pose significant risks (Kootstra & Verma, 2003). To minimize these risks, viral delivery systems must be targeted to diseased cells. A virus infects cells according to its tropism, or affinity for a given cell type, and viral envelopes can be engineered in a variety of ways to target desired tissues (Waehler et al., 2007). Gene transfer using viral vectors can occur in vivo, with the vectors either applied locally or introduced systemically; alternatively, viruses can be applied to stem cells which are then reintroduced to the patient (see ‘cell-based delivery’ below; Waehler et al., 2007).

Electroporation

Electroporation is a method to introduce aqueous pores into cell membranes by exposing cells to pulsing electric fields (Yarmush, Golberg, Serša, Kotnik, & Miklavčič, 2014). Pores in the membrane bilayer typically remain open for milliseconds but may last for several minutes (Yarmush et al., 2014). These aqueous pores allow passage of genetic material through the membrane bilayer, transfecting the targeted tissues (Kalli, Teoh, & Leen, 2014). The electric fields required for electroporation can be produced by externally applied electrodes, and have been shown to cause no long-term side effects, though in the short term patients experience pain and muscle contractions. Another potential downside is that the transfection efficiency of electroporation varies depending on the type of tissue, and is generally lower than the transduction efficiency achieved by viral vectors (Yarmush et al., 2014).

Figure 1. Strategies for delivering therapeutic genes to target areas. Systemic delivery systems include viral vectors, liposomes, and hydrogels, all of which are able to selectively target malignant tissues without harming healthy cells. These delivery systems travel throughout the body and can therefore kill small metastases as well as the primary tumour. Local delivery systems include tissue particle bombardment and electroporation, which may have less severe side effects than systemic delivery systems but cannot target smaller tumours. Cell-based systems confer long-lasting genetic changes which can protect healthy tissues from the harmful effects of chemotherapy and radiotherapy.

Tissue Particle Bombardment

Another method of transfecting cells is tissue particle bombardment, also called the ‘gene gun’. Plasmids containing recombinant genes are coated onto inert gold particles 0.5–3 µm in diameter, which are then accelerated to high speeds using either electrodes or pressurized helium (Kitagawa, Iwazawa, Robbins, Lotze, & Tahara, 2003; Yang, Burkholder, Roberts, Martinell, & McCabe, 1990). The particles are fired at cells at high velocity, allowing them to penetrate the cell membranes and deliver the recombinant genes into the cytoplasm. The size, density and velocity of the particles can be adjusted to penetrate different types of tissue at different depths, ensuring that only the targeted cells are transfected (Yang et al., 1990). However, the transfection efficiency of tissue particle bombardment is very low and usually leads to only transient expression, because the genes carried by the gold particles are not incorporated into the genome but are instead expressed as episomes, which degrade over time (Kitagawa et al., 2003).

Cell-Based Delivery

One method of ensuring lasting, specific expression of a therapeutic agent is to use stem cells as a vehicle for vector delivery (Sorrentino, 2002). In stem cell gene therapy, adult or embryonic stem cells are isolated in vitro and transfected or transduced with the genes of interest, then introduced into the patient. The stem cells used can be autologous (i.e. isolated from and reintroduced to the same patient) or allogeneic (i.e. obtained from a different donor) (Wu & Dunbar, 2011). Engineering isolated cells in vitro is generally easier than attempting to transfect or transduce cells in vivo. Once introduced, all cell lineages that derive from the transgenic cells will carry the genes of interest. Because stem cells are capable of proliferating indefinitely, the transgenic cells will continue to deliver the gene therapy for the remainder of the patient’s life (Sorrentino, 2002). This makes it possible to permanently correct disease phenotypes; however, the permanent nature of the stem cells also poses significant risks. If the transgenic stem cells acquire unwanted mutations, they can proliferate uncontrollably and form tumours (Wu & Dunbar, 2011). Once reintroduced to the patient, engineered stem cells could be very difficult to eliminate in the event of harmful side effects. These risks can be mitigated by improved protocols for the engineering and screening of stem cells in vitro (Wu & Dunbar, 2011). Rapid advances in induced pluripotent and tissue-specific stem cells are making it increasingly possible to engineer stem cells in vitro effectively and without causing unwanted side effects.

Gene Therapy to Target Cancer

Cancer is a difficult disease to treat for several reasons: malignant cells live amongst non-cancerous and often essential tissues; they can metastasize to other parts of the body; and most forms of cancer have some degree of resistance to current treatments (Hanahan & Weinberg, 2011). To overcome these challenges, a therapy must allow for the selective targeting of cancer cells over healthy cells. Ideally, it could be administered systemically rather than locally, so as to counter metastases in addition to the primary tumour. It is also important to use therapies to which cancer cells cannot easily develop a high degree of resistance. Notably, malignant cells exhibit a range of genetic and phenotypic abnormalities which allow them to be selectively targeted using the gene therapy toolkit (McCormick, 2001). Several gene therapy techniques aimed at destroying cancer cells are under development. The next section draws on the techniques described in the “Tools for Gene Therapy” section to outline combinations of therapeutic agent and delivery method that have been, or may be, successful. These methods fall into two main categories: introducing genes toxic to cancer cells, such as tumour suppressor genes; and shutting off oncogene expression in malignant tissues, as shown in Figure 2.

Killing Tumor Cells

One method of killing cancer cells without harming healthy tissues is to use viral vectors, liposomes, or hydrogels to selectively deliver a toxic gene. Exciting results have been achieved using viral vectors to exploit different vulnerabilities in cancer cells. For instance, Gendicine, the first gene therapy in the world to be approved for clinical use, is an adenoviral vector which delivers a recombinant tumour suppressor gene (Chen et al., 2014; Pearson, Jia, & Kandachi, 2004). Inactivation of this tumour suppressor, p53, is a critical step in the development of many types of cancer (Chen et al., 2014; Lang et al., 2003); application of Gendicine can therefore reactivate p53 and induce programmed cell death in cancer cells (Chen et al., 2014). Additionally, researchers have used adenoviral vectors to deliver two pre-mRNA fragments, under a promoter active only in cancerous cells, that are trans-spliced together (using SmaRT technology) inside the cell (Nakayama, Pergolizzi, & Crystal, 2005). The trans-spliced mRNA encoded the toxin Shigatoxin1A1, which induced apoptosis in cancer cells (Nakayama et al., 2005). Because the vector delivered two DNA fragments coding for the 5′ and 3′ fragments of the Shigatoxin1A1 pre-mRNA rather than one fragment coding for the entire gene, vectors could be grown in producer cells without the toxin killing the hosts (Nakayama et al., 2005).

Liposomes have fewer side effects than viruses, but are also less stable. Suzuki et al. (2010) used liposomes to deliver the gene for the immunotherapeutic cytokine interleukin-12 (IL-12) to tumour sites. The transgene was introduced systemically within ‘bubble liposomes’, which can be collapsed by ultrasound. The tumour site was sonicated to collapse the liposomes in the area surrounding the malignant cells, ensuring that the therapeutic gene was introduced to the tumour specifically; the treatment inhibited protein production and growth in the cancerous tissue (Suzuki et al., 2010). This is an example of a ‘remote triggering system’, which allows liposomes to release their contents only when near the tumour; other approaches include light-sensitive and magnetically responsive liposomes (Suzuki et al., 2010). However, liposomes normally have low efficacy because they degrade easily and often release their contents before reaching the tumour cells (Mufamadi et al., 2011).

New gene therapy techniques have also produced promising results against cancer. For example, hydrogels have been used for targeted delivery: a thermo- and pH-sensitive chitosan hydrogel delivering the small-molecule drug doxorubicin has been reported to successfully treat bone cancer in mice (Ta, Dass, Larson, Choong, & Dunstan, 2009). Doxorubicin can be toxic to healthy cells as well as malignant ones, but the hydrogel could be formulated to tailor release of the drug at cancerous tissues (Ta et al., 2009). In future trials, hydrogels could represent an ideal replacement for viral delivery because of their low toxicity (Seidlits et al., 2014). One attractive proposal is to incorporate liposomes inside a hydrogel, a formulation shown to allow controlled yet stable drug release (Mufamadi et al., 2011).

Lastly, gene therapy can be used in conjunction with chemotherapy or radiotherapy to improve their efficacy. An example is the delivery of genetically modified hematopoietic stem cells (HSCs). HSCs in the bone marrow are often killed off by chemotherapy, causing severe side effects for the patient (Sorrentino, 2002). By introducing genetically modified, chemotherapy-resistant HSCs, the dose of chemotherapy can safely be increased. A recent study used this method to target glioblastomas, cancers which are often resistant to chemotherapy (Adair et al., 2012). Three patients were treated with autologous HSCs transduced with a retroviral vector to overexpress the P140K variant of the repair enzyme MGMT, which renders cells resistant to the chemosensitizer O6-benzylguanine (6BG). The number of HSCs in all three patients dropped after each cycle of chemotherapy, but each decrease was followed by a recovery in the stem cell count. An increase in P140K-modified cells was observed following each cycle, suggesting that the introduction of P140K was responsible for conferring chemoprotection and demonstrating the potential of this combinatorial approach (Adair et al., 2012).

Figure 2. Methods of selectively killing cancer cells. Cancer-causing mutations can be broadly grouped into two categories: mutations in oncogenes and mutations in tumour suppressors. Malignant cells with active oncogenes can be targeted by using viral vectors to deliver CRISPR/Cas9, TALENs or ZFNs, which then excise or silence the relevant oncogenes. Viral vectors can also be used to deliver tumour suppressor genes such as p53 which are harmless to healthy cells but toxic to cancer cells. Liposomes can deliver toxic genes directly to cancer cells through the use of remote-triggering systems; similarly, hydrogels can provide a controlled release of therapeutic genes once inside malignant tissues.

Targeting Oncogene Expression

Another important aim in developing cancer therapy is to knock out mutant oncogenes by repressing their expression or removing them from the genome. Gene therapy treatments developed to correct single-gene diseases offer useful insight here.

Several of the gene therapy tools designed to alter harmful genes have only been tested in animal models but have significant therapeutic potential. The first demonstration of CRISPR as a therapeutic strategy in vivo corrected a diseased phenotype in a mouse model of hereditary tyrosinemia, a disease caused by an inborn error of metabolism (Yin et al., 2014). A gene coding for the Cas9 nuclease, a single guide RNA, and a donor oligonucleotide were injected into the tail vein of the mouse, resulting in reconstitution of the wild-type gene in mouse hepatocytes. Only about 0.4% of cells were corrected, likely due to low transfection efficiency (Yin et al., 2014). Preclinical trials involving other methods of genome editing have also been successful. A novel combination of techniques was recently used to tackle recessive dystrophic epidermolysis bullosa, a disease caused by a single point mutation in the COL7A1 gene on chromosome three (Osborn et al., 2013). This mutation usually results in a lack of type VII collagen protein, which leads to fatal skin blistering. Osborn et al. (2013) showed that co-delivery of TALEN DNA and a donor oligonucleotide to patient fibroblasts was able to rescue type VII collagen production. Further, the gene-edited fibroblasts were induced to return to the pluripotent state and then injected into mice, where they went on to produce skin-like structures as healthy, differentiated cells would (Osborn et al., 2013).

The new genome editing methods using TALENs and the CRISPR/Cas9 system show promise in pre-clinical trials when combined with viral vectors for delivery. Adenoviral vectors can carry a large enough genetic payload to introduce DNA encoding TALENs or CRISPR components into target cells, allowing cells to be transduced with much greater efficiency. A recent study showed that adenoviral vectors can transduce a range of human cells in vitro with genes coding for the Cas9 nuclease and single guide RNAs, resulting in genome modifications at the targeted loci with an efficiency of 18–65%, depending on the cell type (Maggio et al., 2014). Adenoviral vectors have also been used to achieve gene transfer of TALENs into human cells in vitro (Holkers et al., 2013).

Discussion

Gene therapy has incredible potential for treating cancer because it can be used to target cancer cells on the basis of genetic defects rather than rapid proliferation, making treatments much more specific to tumour cells. This means that gene therapy could be used to treat cancer without the drastically harmful side effects of chemotherapy and radiotherapy, and can destroy metastases and micro-tumours which cannot be removed surgically. However, emerging gene therapies must overcome significant challenges before they can be used to treat cancer patients on a large scale.

Early gene therapy trials such as the SCID-X1 trial in 2000 have called into question the safety of gene therapy, particularly the use of retroviral vectors (Wu & Dunbar, 2011). The failure of the SCID-X1 trial showed that retroviral vectors can cause off-target mutations in host cells, leading to uncontrolled replication and tumour formation. This is a concern for some applications of gene therapy to treat cancer, particularly the use of drug-resistant stem cells to mitigate the harmful effects of chemotherapy: if the modified stem cells proliferate at an uncontrolled rate, the patient will develop a new, drug-resistant tumour (Wu & Dunbar, 2011). The study by Adair et al. (2012) successfully used retroviral vectors to transduce hematopoietic stem cells and confer a drug-resistance gene. To reduce the risk of mutagenesis, retroviral vectors could be replaced by adenoviral vectors carrying a Cas9 nuclease and gRNA. This approach would combine the high transduction efficiency of viral vectors with the specificity of CRISPR, preventing unwanted mutations which could lead to harmful results.

Another risk of viral vectors is that genetically engineered viruses can trigger an immune response in the patient that destroys the therapeutic vectors. Tait et al. (1999) showed that viral vectors can be used to destroy tumours, but only if the vectors can evade the immune system long enough to infect the cancer cells. Therefore, viral vectors can be a useful tool for treating immunocompromised patients, such as those who have undergone many rounds of chemotherapy. Patients with healthy immune systems may benefit more from alternate methods of gene delivery, such as liposomes, tissue particle bombardment or electroporation. These methods can be used to deliver vectors without triggering an immune response, but are much less efficient than viral vectors. One promising method of increasing the efficiency of liposomes is to combine hydrogels and liposomes to make the vectors more stable. Similarly, hydrogels could potentially be used in conjunction with electroporation to ensure that the vectors transfect as many cells as possible. Electroporation and tissue particle bombardment are most useful for local application of gene therapy, whereas liposomes could be used for systemic application.

Previous experiments with liposomes, electroporation and tissue particle bombardment have mainly used these delivery methods to transfer DNA constructs that are expressed as episomes (Mufamadi et al., 2011; Yang et al., 1990; Yarmush et al., 2014). Therefore, these trials have encountered the issue of transience: once transfected, the host cells express the transgene for only a short time. This can be a problem if, for example, the aim is to permanently shut down expression of an oncogene. One solution to this problem could be to combine these delivery methods with CRISPR or TALENs, which edit the genomic DNA of the host cells and therefore can induce long-lasting changes in gene expression or knockdown. Yin et al. (2014) demonstrated that while CRISPR can be used to correct gene expression, simply injecting the Cas9 nuclease, guide RNA, and donor oligonucleotide did not result in a high enough transfection efficiency to treat disease in humans. Delivering the necessary components with liposomes, however, could increase the transfection efficiency enough to effectively correct disease phenotypes. The combination of delivery methods such as liposomes and hydrogels with therapeutic agents such as CRISPR and TALENs has the potential to be a powerful tool for gene therapy. Alternatively, combining technologies such as CRISPR with viral delivery systems could result in a highly specific and efficient method for genome editing.

Conclusions

Gene therapy is a relatively new but very promising solution to one of the most important challenges in the field of biotechnology today: the treatment of cancer. Using the gene therapy toolbox outlined in this review, it is possible to specifically target cancer cells by delivering toxic genes to tumour cells, or by altering the expression of oncogenes. Many of the methods in the toolbox have been demonstrated in clinical or preclinical trials to be specific, effective and long lasting inside the complex system of the body. These trials have shown that these methods already offer a great deal of control over which genes can be modified and how. However, technologies such as CRISPR and SMaRT must still be put to the test in clinical trials. There are many barriers still to be overcome before gene therapy can be applied successfully and routinely to disease. With time, perhaps gene therapy can become a healthier alternative or complement to the radiotherapy and chemotherapy that are currently the primary cancer-fighting tactics.

References

Adair, J. E., Beard, B. C., Trobridge, G. D., Neff, T., Rockhill, J. K., Silbergeld, D. L., Mrugala, M. M., & Kiem, H.-P. (2012). Extended survival of glioblastoma patients after chemoprotective HSC gene therapy. Science Translational Medicine, 4(133), 133ra57.

Allen, T. M., & Cullis, P. R. (2013). Liposomal drug delivery systems: from concept to clinical applications. Advanced Drug Delivery Reviews, 65(1), 36–48.

American Cancer Society. (2016). Cancer facts & figures 2016.

Braun, C. J., Boztug, K., Paruzynski, A., Witzel, M., Schwarzer, A., Rothe, M., Modlich, U., Beier, R., Göhring, G., Steinemann, D., Fronza, R., Ball, C. R., Haemmerle, R., Naundorf, S., Kühlcke, K., Rose, M., Fraser, C., Mathias, L., Ferrari, R., Abboud, M. R. Al-Herz, W., Kondratenko, I., Maródi, L., Glimm, H., Schlegelberger, B., Schambach, A., Albert, M. H., Schmidt, M., von Kalle, C., Klein, C. (2014). Gene therapy for Wiskott-Aldrich syndrome–long-term efficacy and genotoxicity. Science Translational Medicine, 6(227), 227ra33.

Caló, E., & Khutoryanskiy, V. V. (2014). Biomedical applications of hydrogels: a review of patents and commercial products. European Polymer Journal, 65, 252–267.

Cartier, N., Hacein-Bey-Abina, S., Bartholomae, C. C., Bougnères, P., Schmidt, M., Kalle, C. V., Fischer, A., Cavazzana-Calvo, M., Aubourg, P. (2012). Lentiviral hematopoietic cell gene therapy for X-linked adrenoleukodystrophy. Methods in Enzymology, 507, 187–98.

Chen, G.X., Zhang, S., He, X.H., Liu, S.Y., Ma, C., & Zou, X.P. (2014). Clinical utility of recombinant adenoviral human p53 gene therapy: current perspectives. OncoTargets and Therapy, 7, 1901–9.

Dow, L. E., Fisher, J., O’Rourke, K. P., Muley, A., Kastenhuber, E. R., Livshits, G., Tschaharganeh, D. F., Socci, N. D., Lowe, S. W. (2015). Inducible in vivo genome editing with CRISPR-Cas9. Nature Biotechnology, 33(4), 390–394.

Gaj, T., Gersbach, C. A., & Barbas, C. F. (2013). ZFN, TALEN, and CRISPR/Cas-based methods for genome engineering. Trends in Biotechnology, 31(7), 397–405.

Gori, J. L., Hsu, P. D., Maeder, M. L., Shen, S., Welstead, G. G., & Bumcrot, D. (2015). Delivery and Specificity of CRISPR-Cas9 Genome Editing Technologies for Human Gene Therapy. Human Gene Therapy, 26(7), 443–51.

Hacein-Bey-Abina, S., Garrigue, A., Wang, G. P., Soulier, J., Lim, A., Morillon, E., Clappier, E., Caccavelli, L., Delabesse, E., Beldjord, K., Asnafi, V., MacIntyre, E., Dal Cortivo, L., Radford, I., Brousse, N., Sigaux, F., Moshous, D., Hauer, J., Borkhardt, A., Belohradsky, B. H., Wintergerst, U., Velez, M. C., Leiva, L., Sorensen, R., Wulffraat, N., Blanche, S., Bushman, F. D., Fischer, A., Cavazzana-Calvo, M. (2008). Insertional oncogenesis in 4 patients after retrovirus-mediated gene therapy of SCID-X1. The Journal of Clinical Investigation, 118(9), 3132–42.

Hanahan, D., & Weinberg, R. A. (2011). Hallmarks of cancer: the next generation. Cell, 144(5), 646–74.

Hockemeyer, D., Wang, H., Kiani, S., Lai, C. S., Gao, Q., Cassady, J. P., Cost, G. J., Zhang, L., Santiago, Y., Miller, J. C., Zeitler, B., Cherone, J. M., Meng, X., Hinkley, S. J., Rebar, E. J., Gregory, P. D., Urnov, F. D., Jaenisch, R. (2011). Genetic engineering of human pluripotent cells using TALE nucleases. Nature Biotechnology, 29(8), 731–4.

Holkers, M., Maggio, I., Liu, J., Janssen, J. M., Miselli, F., Mussolino, C., Recchia, A., Cathomen, T., Gonçalves, M. A. F. V. (2013). Differential integrity of TALE nuclease genes following adenoviral and lentiviral vector gene transfer into human cells. Nucleic Acids Research, 41(5), e63.

Kaliberov, S. A., & Buchsbaum, D. J. (2012). Chapter seven–Cancer treatment with gene therapy and radiation therapy. Advances in Cancer Research, 115, 221–63.

Kalli, C., Teoh, W. C., & Leen, E. (2014). Introduction of Genes via Sonoporation and Electroporation. Advances in Experimental Medicine and Biology, 818, 231–54.

Kitagawa, T., Iwazawa, T., Robbins, P. D., Lotze, M. T., & Tahara, H. (2003). Advantages and limitations of particle-mediated transfection (gene gun) in cancer immuno-gene therapy using IL-10, IL-12 or B7-1 in murine tumor models. The Journal of Gene Medicine, 5(11), 958–65.

Kootstra, N. A., & Verma, I. M. (2003). Gene therapy with viral vectors. Annual Review of Pharmacology and Toxicology, 43, 413–39.

Lang, F., Bruner, J., Fuller, G., Aldape, K., Prados, M., Chang, S., Berger, M., McDermott, M., Kunwar, S., Junck, L., Chandler, W., Zwiebel, J., Kaplan, R., Yung, A. (2003). Phase I trial of adenovirus-mediated p53 gene therapy for recurrent glioma: Biological and clinical results. Journal of Clinical Oncology, 21(13), 2508–2518.

Maggio, I., Holkers, M., Liu, J., Janssen, J. M., Chen, X., & Gonçalves, M. A. F. V. (2014). Adenoviral vector delivery of RNA-guided CRISPR/Cas9 nuclease complexes induces targeted mutagenesis in a diverse array of human cells. Scientific Reports, 4, 5105.

Malik, P., & Arumugam, P. I. (2005). Gene therapy for beta-thalassemia. Hematology: American Society of Hematology Education Program, 45–50.

Marraffini, L. A., & Sontheimer, E. J. (2010). CRISPR interference: RNA-directed adaptive immunity in bacteria and archaea. Nature Reviews. Genetics, 11(3), 181–90.

McCormack, M. P., Young, L. F., Vasudevan, S., de Graaf, C. A., Codrington, R., Rabbitts, T. H., Jane, S., Curtis, D. J. (2010). The Lmo2 oncogene initiates leukemia in mice by inducing thymocyte self-renewal. Science, 327(5967), 879–83.

McCormick, F. (2001). Cancer gene therapy: fringe or cutting edge? Nature Reviews. Cancer, 1(2), 130–41.

Mufamadi, M. S., Pillay, V., Choonara, Y. E., Du Toit, L. C., Modi, G., Naidoo, D., & Ndesendo, V. M. K. (2011). A review on composite liposomal technologies for specialized drug delivery. Journal of Drug Delivery, 2011, 939851.

Nakayama, K., Pergolizzi, R. G., & Crystal, R. G. (2005). Gene transfer-mediated pre-mRNA segmental trans-splicing as a strategy to deliver intracellular toxins for cancer therapy. Cancer Research, 65(1), 254–63.

Osborn, M. J., Starker, C. G., McElroy, A. N., Webber, B. R., Riddle, M. J., Xia, L., DeFeo, A. P., Gabriel, R., Schmidt, M., von Kalle, C., Carlson, D. F., Maeder, M. L., Joung, J. K., Wagner, J. E., Voytas, D. F., Blazar, B. R., Tolar, J. (2013). TALEN-based gene correction for epidermolysis bullosa. Molecular Therapy, 21(6), 1151–9.

Pearson, S., Jia, H., & Kandachi, K. (2004). China approves first gene therapy. Nature Biotechnology, 22(1), 3–4.

Qi, L. S., Larson, M. H., Gilbert, L. A., Doudna, J. A., Weissman, J. S., Arkin, A. P., & Lim, W. A. (2013). Repurposing CRISPR as an RNA-guided platform for sequence-specific control of gene expression. Cell, 152(5), 1173–1183.

Seidlits, S. K., Gower, R. M., Shepard, J. A., & Shea, L. D. (2014). Hydrogels for lentiviral gene delivery. Expert Opinion on Drug Delivery, 10(4), 499–509.

Siegel, R. L., Miller, K. D., & Jemal, A. (2015). Cancer statistics, 2015. CA: A Cancer Journal for Clinicians, 65(1), 5–29.

Sorrentino, B. P. (2002). Gene therapy to protect haematopoietic cells from cytotoxic cancer drugs. Nature Reviews. Cancer, 2(6), 431–41.

Suzuki, R., Namai, E., Oda, Y., Nishiie, N., Otake, S., Koshima, R., Hirata, K., Taira, Y., Utoguchi, N., Negishi, Y., Nakagawa, S., Maruyama, K. (2010). Cancer gene therapy by IL-12 gene delivery using liposomal bubbles and tumoral ultrasound exposure. Journal of Controlled Release, 142(2), 245–50.

Ta, H. T., Dass, C. R., Larson, I., Choong, P. F. M., & Dunstan, D. E. (2009). A chitosan-dipotassium orthophosphate hydrogel for the delivery of Doxorubicin in the treatment of osteosarcoma. Biomaterials, 30(21), 3605–13.

Tait, D. L., Obermiller, P. S., Hatmaker, A. R., Redlin-Frazier, S., & Holt, J. T. (1999). Ovarian cancer BRCA1 gene therapy: Phase I and II trial differences in immune response and vector stability. Clinical Cancer Research, 5, 1708–1714.

Urruticoechea, A., Alemany, R., Balart, J., Villanueva, A., Viñals, F., & Capellá, G. (2010). Recent advances in cancer therapy: an overview. Current Pharmaceutical Design, 16(1), 3–10.

Waehler, R., Russell, S. J., & Curiel, D. T. (2007). Engineering targeted viral vectors for gene therapy. Nature Reviews. Genetics, 8(8), 573–87.

Wilkinson, R., & Wiedenheft, B. (2014). A CRISPR method for genome engineering. F1000prime Reports, 6, 3.

Wu, C., & Dunbar, C. E. (2011). Stem cell gene therapy: the risks of insertional mutagenesis and approaches to minimize genotoxicity. Frontiers of Medicine, 5(4), 356–71.

Yang, N. S., Burkholder, J., Roberts, B., Martinell, B., & McCabe, D. (1990). In vivo and in vitro gene transfer to mammalian somatic cells by particle bombardment. Proceedings of the National Academy of Sciences of the United States of America, 87(24), 9568–72.

Yang, Y., & Walsh, C. E. (2005). Spliceosome-mediated RNA trans-splicing. Molecular Therapy, 12(6), 1006–12.

Yarmush, M. L., Golberg, A., Serša, G., Kotnik, T., & Miklavčič, D. (2014). Electroporation-Based Technologies for Medicine: Principles, Applications, and Challenges. Annual Review of Biomedical Engineering, 16, 295–320.

Yin, H., Xue, W., Chen, S., Bogorad, R. L., Benedetti, E., Grompe, M., Koteliansky, V., Sharp, P. A., Jacks, T., Anderson, D. G. (2014). Genome editing with Cas9 in adult mice corrects a disease mutation and phenotype. Nature Biotechnology, 32(6), 551–3.


Got Ink? An Analysis of Personality Traits between Tattooed and Non-Tattooed Individuals


Abstract

Research supports personality differences between tattooed and non-tattooed individuals. However, few studies have investigated whether any of these differences are associated with positive indicators for tattooed individuals. In this study, personality differences between tattooed and non-tattooed individuals were examined in three of the Big Five personality areas considered critical to successful employees in the workforce. Previous research has established that higher levels of conscientiousness and extraversion coupled with lower levels of neuroticism are indicators of high-quality employees. The present study attempts to augment this line of research by adding the dimension of tattoos, investigating whether individuals with tattoos report more positive personality indicators in these dimensions than individuals without tattoos. Thus, it was hypothesized that tattooed individuals would report higher levels of conscientiousness and extraversion and lower levels of neuroticism than non-tattooed individuals. For this purpose, N = 521 individuals completed an online survey, which included the 44-Question Big Five Inventory. An independent sample t-test revealed a statistically significant difference between tattooed (M = 3.41, SD = 0.77) and non-tattooed (M = 3.21, SD = 0.83) groups in the Big Five personality area of extraversion, t(521) = 0.39, p = .004, d = 0.25. There were no other statistically significant differences. These findings indicate that tattooed individuals may be better employees than previously believed, as the extraversion component of the Big Five Inventory has been found to be a critical indicator of successful job performance.

Introduction

Tattoos have increased in popularity over the last two decades; almost one in five people across all age groups had a tattoo as of 2012, and one in ten people have two or more tattoos (Swami et al., 2012). Nearly 40% of young adults (18-25) have at least one tattoo, whereas only 15-16% of members of this age group in 1990 were tattooed (Swami et al., 2012). Despite the increase in tattoos within younger generations, tattooed individuals face discrimination, negative stigma, and lower levels of employment than their non-tattooed counterparts (Horne, Knox, Zusman, & Zusman, 2007). Very little research has examined whether individuals with tattoos score differently than non-tattooed individuals on scales measuring personality traits perceived as positive. This study seeks to address this gap by identifying personality differences between tattooed and non-tattooed individuals and the potential implications of those differences for employment.

Historically, the traits associated with tattooed individuals have depended significantly on the culture and circumstances of those individuals. Captain Cook explored Polynesia in 1769 and observed the social and spiritual significance of tattoos in Polynesian culture. The location of a tattoo on an individual’s body and the specific tattoo design displayed social, hierarchical, and genealogical information about the owner of the tattoo, as well as signaling particular aspects of his or her character (Parry, 1933). Tattooing was considered a sacred ceremony, and most tattoos were thought to confer spiritual power, protection, and strength. Almost every Polynesian individual had tattoos, and many of Captain Cook’s men left their voyage with a permanent memento of their expedition, which was considered a great honor (Parry, 1933). Similarly, Native Americans report a long and extensive history of traditional tattoos. Depending on the tribe, tattoos could signal hierarchy or a specific role within the tribe, mark a warrior’s prowess in battle, or be considered marks of beauty (Littell, 2003).

Since then, through the shift towards Western culture and through changing definitions of art, tattoos have become more associated with criminals and the sexually promiscuous (Wohlrab, Fink, & Kappeler, 2005). Recent studies have shown there are still many stereotypes attached to individuals with tattoos: academic struggle, broken homes, traumatic childhoods, rarely or never attending church, poor decision-making skills, and susceptibility to peer pressure (Roberts & Ryan, 2002). However, these stereotypes may not accurately represent the current tattoo climate. Forty percent of 26 to 40-year-olds now have a tattoo, closely followed by 36% of 18 to 25-year-olds (Swami et al., 2012). The rising popularity of tattoos among young to middle aged individuals suggests that tattoos may hold different significance sociologically, biologically, and socially than they have throughout the previous century (Wohlrab et al., 2005).

Research is mixed on whether the negative stereotypes associated with tattoos are accurate. A study completed in 2007 in Germany evaluating tattooed and non-tattooed individuals using a Big Five Personality Inventory found that tattooed individuals scored higher on the subscale of extraversion, and lower on the subscale of neuroticism (Wohlrab, 2007). More recently, a 2012 study of 540 individuals from Austria and Germany examined Big Five personality traits in participants, as well as a need for uniqueness, sensation seeking, self-esteem, religious and spiritual belief, and demographic variables. The researchers in this study concluded that not only do those with tattoos have higher levels of need for uniqueness, sensation seeking, and thrill and adventure seeking, but they have lower levels of self-esteem, attend religious services less, and are generally much less educated than individuals who did not have tattoos (Swami et al., 2012).

For decades, businesses have attempted to identify personality traits that predict a successful employee. When United States federal law banned the use of polygraphs for employee selection in 1988, hirers began using personality surveys as the primary method for making hiring decisions (Stabile, 2013). Job interviewers now ask questions designed to reveal components of an individual’s personality in order to evaluate where that individual would best fit within the company structure, how committed to the job the individual would be, and their likelihood of advancing through the company ranks (Wohlrab, 2007). However, studies as recently as 2010 have shown that despite this shift to personality-based hiring, companies still discard potential employees on the basis of their tattoos (Burgess & Clark, 2010).

Researchers have also attempted to determine personality traits capable of predicting successful employees. A 2014 ten-year longitudinal study of over 8,000 individuals working within multiple big business companies revealed that there is a significant statistical difference between the managerial and working classes in three Big Five personality dimensions: neuroticism, extraversion, and conscientiousness (Palaiou & Furnham, 2014). Conscientiousness was shown to be the best predictor of overall successful job performance and individuals who scored higher in this dimension tended to be more achievement oriented (Li, Barrick, Zimmerman, & Chiabaru, 2014). Neuroticism successfully predicted poor work performance; the lower the levels of neuroticism, the higher the level of performance from the employee (Barrick, Mount, & Judge, 2001). Finally, higher levels of extraversion were linked to higher levels of task performance and proactivity (Pearsall & Ellis, 2006).

This study attempts to augment the area of research pertaining to tattooed individuals’ personality traits by investigating whether tattooed individuals differ significantly when compared to their non-tattooed peers in areas related to successful employee traits. It was hypothesized that tattooed individuals would score higher in conscientiousness and extraversion and lower in neuroticism as measured by the Big Five Inventory.

Methods

Participants

Participants were recruited through a campus-wide e-mail at Whitworth University, Facebook psychology groups, and global online psychology research forums. Participation was entirely voluntary, and participants could complete the study on their own time at their own pace. A total of 521 individuals completed the survey (411 females and 110 males), aged 18 to 62 years.

Materials

Participants completed an online version of the 44-Question Big Five Inventory (John, Donahue, & Kentle, 1991) followed by basic demographic questions addressing age, sex, education level, and university affiliation of the participant. Participants were also asked if they had any tattoos. Participants with tattoos were asked to indicate the size and location of those tattoos.

The survey measured the Big Five areas of personality: openness to experience, neuroticism, extraversion, conscientiousness, and agreeableness. For example, questions measuring conscientiousness asked the participant to rate statements such as “I am someone who does a thorough job” or “I am a reliable worker.” Items measuring neuroticism stated, “I am someone who remains calm in tense situations” and “I am someone who is emotionally stable, not easily upset.” Finally, items related to extraversion included statements such as “I am someone who is talkative” and “I am someone who is full of energy” (John et al., 1991). Participants rated their agreement with each statement on a five-point Likert scale from one (“strongly disagree”) to five (“strongly agree”). The Big Five Inventory has yielded Cronbach’s alpha values between 0.73 and 0.82 over the course of its development, indicating a high degree of internal consistency and thus reliability (Schmitt et al., 2007). The survey contained nine questions regarding conscientiousness, eight questions regarding neuroticism, and eight questions regarding extraversion.
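The internal-consistency statistic cited above can be computed directly from an item-response matrix using the standard Cronbach's alpha formula. The sketch below uses invented responses purely for illustration:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of each respondent's total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented data: five respondents answering four hypothetical items on a 1-5 scale
scores = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
])
alpha = cronbach_alpha(scores)
```

By convention, alpha values above roughly 0.7 (such as the 0.73 to 0.82 range reported for the Big Five Inventory) are taken to indicate acceptable reliability.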

Each of the three personality subscales (conscientiousness, extraversion, and neuroticism) was scored by summing a participant’s selected Likert responses to that subscale’s items, producing a numerical value for each personality dimension; these per-participant scores were then averaged across all participants for an overall mean.
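The scoring procedure described above amounts to a per-participant sum for each subscale followed by a group average. A minimal sketch, with hypothetical item indices (the actual BFI-44 scoring key differs and also reverse-scores some items):

```python
import numpy as np

# Hypothetical column positions for each subscale within a (participants x items)
# response matrix; placeholders only, not the real BFI-44 key.
SUBSCALE_ITEMS = {
    "extraversion": [0, 3, 6],
    "conscientiousness": [1, 4, 7],
    "neuroticism": [2, 5, 8],
}

def group_means(responses: np.ndarray) -> dict:
    """Sum each participant's Likert responses per subscale, then average
    those per-participant totals across the whole sample."""
    return {
        name: responses[:, cols].sum(axis=1).mean()
        for name, cols in SUBSCALE_ITEMS.items()
    }

# Invented responses from four participants to nine items (1-5 Likert)
data = np.array([
    [4, 3, 2, 5, 4, 1, 3, 5, 2],
    [2, 4, 3, 3, 5, 2, 2, 4, 3],
    [5, 2, 1, 4, 3, 2, 4, 3, 1],
    [3, 5, 4, 2, 4, 3, 3, 5, 4],
])
means = group_means(data)
```

Splitting by tattoo status and computing these means per group yields the values that the t-tests below compare.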

Figure 1. Results for the extraversion, conscientiousness, and neuroticism dimensions of the Big Five personality survey between tattooed and non-tattooed individuals. This figure shows significant results in the extraversion dimension, and no significant results in the conscientiousness and neuroticism dimensions.

Table 1. Demographic information collected. Participants were current students or alumni from 54 universities of various sizes in both rural and urban locations throughout the United States.

Results

A total of N = 521 individuals completed the survey. Of that 521, 411 were female and 110 were male. Participant age varied from 18 to 68 years old. Participants were current students or alumni from 54 universities of various sizes in both rural and urban locations throughout the United States. Two hundred sixty-six (51%) identified themselves as having no tattoos and two hundred fifty-five (49%) identified themselves as having tattoos.

A two-tailed independent sample t-test revealed no statistically significant difference in levels of conscientiousness between tattooed and non-tattooed individuals (p = .30; Figure 1).

Similarly, a two-tailed independent sample t-test revealed no statistically significant difference on the neuroticism personality scale between tattooed and non-tattooed individuals (p = .53; Figure 1).

Results did reveal a statistically significant difference in extraversion. A two-tailed independent sample t-test showed that tattooed individuals (M = 3.41, SD = 0.77) scored significantly higher than non-tattooed individuals (M = 3.21, SD = 0.83; p = .004; Figure 1).
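The analysis reported above is a standard two-tailed independent-samples t-test with a Cohen's d effect size. The sketch below simulates data from the published group means and standard deviations, so its exact t and p values will differ from those in the paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated scores drawn from the reported summary statistics
# (tattooed: M = 3.41, SD = 0.77, n = 255; non-tattooed: M = 3.21, SD = 0.83, n = 266)
tattooed = rng.normal(3.41, 0.77, 255)
non_tattooed = rng.normal(3.21, 0.83, 266)

# Two-tailed independent-samples t-test (equal variances assumed, i.e. Student's t)
t_stat, p_value = stats.ttest_ind(tattooed, non_tattooed)

# Cohen's d from the pooled standard deviation
n1, n2 = len(tattooed), len(non_tattooed)
pooled_var = ((n1 - 1) * tattooed.var(ddof=1) +
              (n2 - 1) * non_tattooed.var(ddof=1)) / (n1 + n2 - 2)
cohens_d = (tattooed.mean() - non_tattooed.mean()) / np.sqrt(pooled_var)
```

With roughly these sample sizes, a d of about 0.25 is a small effect that is nonetheless detectable at conventional significance levels.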

Discussion

The purpose of this study was to investigate whether there were positive traits associated with individuals who have tattoos. It was proposed that tattooed individuals would score higher on the conscientiousness and extraversion domains and lower on the neuroticism domain as measured by the Big Five Inventory than their non-tattooed peers. Tattooed individuals scored significantly higher in extraversion than their non-tattooed peers, but there were no significant differences in conscientiousness or neuroticism between tattooed and non-tattooed individuals. Though tattooed individuals did not differ significantly in two of the three areas tested in this study, the significant difference in extraversion suggests that those individuals with one or more tattoos may display higher levels of task performance and proactivity in the business world (Pearsall & Ellis, 2006).

A growing body of literature suggests tattooed individuals display different personality traits than their non-tattooed counterparts, and this study lends further support to this hypothesis. Specifically, the present study supports the findings from several other studies that tattooed individuals consistently score higher in extraversion than their non-tattooed peers (e.g., Stirn, Hinz, & Brahler, 2006; Swami, 2012; Swami et al., 2012; Wohlrab, Stahl, Rammsayer, & Kappeler, 2007).

This study may be limited by the high proportion of female participants (n = 411) compared to male participants (n = 110). A study in which males and females are equally represented could be better extrapolated to the general public. However, a similar study performed in 2012 with 45.6% male participants found very similar results to the present study: tattooed individuals scored significantly higher than non-tattooed individuals in extraversion, but did not score differently in any of the other Big Five personality dimensions (Swami et al., 2012).

Future research should be conducted with a more age-diverse sample, as the present study had a mean age of 24.47 years old. Though this study lends itself well to explaining the personality attributes of the younger generation, it does not shed any light onto the baby boomer generation, who are currently the individuals holding CEO, managerial, and most importantly, hiring positions over the younger population (Odgers Berndtson, 2012). Over the next decade, a mass exodus of baby boomers is expected to occur, leaving open positions for the younger generation (Odgers Berndtson, 2012). However, if baby boomers are still utilizing stigmatized hiring criteria regarding tattoos, they are excluding a class of individuals who are more proactive and task performance oriented than their age-matched peers (Pearsall & Ellis, 2006). Gathering more research regarding generational differences in personality attributes and attitudes towards tattoos may have the potential to change current hiring criteria.

Table 2. Differences between tattooed and non-tattooed participants in the Big Five personality dimensions of extraversion, conscientiousness, and neuroticism. Tested using two-tailed independent sample t-tests, df = 530. *indicates statistical significance

Additionally, examining the final two personality domains (agreeableness and openness to experience) in the Big Five Inventory may lead to further information regarding the relationship between tattoos and personality, which could divulge more information regarding desirable characteristics in employees. Agreeableness has been correlated with success in several specific job fields, such as those that require considerable interpersonal interaction. Similarly, the openness to experience dimension has predicted success in fields where teamwork and training performance are important (Barrick et al., 2001). Finally, associations between tattoos and personality could be further explored by examining whether the effect is binary (tattoo vs. non-tattoo) or a gradient (influenced by the quantity of tattoos).

Tattooing has rapidly become a prevalent phenomenon in western culture. It may therefore be time to reexamine the stigma attached to hiring tattooed individuals. Extraversion, which predicts higher levels of task performance and proactivity in a job setting (Pearsall & Ellis, 2006), is increasingly being associated with tattooed individuals in recent research. The business industry stands to gain quality employees who may be well suited to long-term success and significant contributions to the company if hiring criteria regarding tattoos were to be reassessed (Sackett, Burris, & Ryan, 1989).

References

Barrick, M. R., Mount, M. K., & Judge, T. A. (2001). Personality and performance at the beginning of the new millennium. What do we know and where do we go next? International Journal of Selection and Assessment, 9, 9-30.

Burgess, M., & Clark, L. (2010). Do the “savage origins” of tattoos cast a prejudicial shadow on contemporary tattooed individuals? Journal of Applied Social Psychology, 40, 746-764.

Horne, J., Knox, D., Zusman, J., & Zusman, M. (2007). Tattoos and piercings: Attitudes, behaviours, and interpretations of college students. College Student Journal, 41, 1011-1020.

John, O. P., Donahue, E. M., & Kentle, R. L. (1991). The Big Five Inventory–Versions 4a and 54. Berkeley, CA: University of California, Berkeley, Institute of Personality and Social Research.

Li, N., Barrick, M.R., Zimmerman, R.D., & Chiabaru, D.S. (2014). Retaining the productive employee: The role of personality. The Academy of Management Annals, 8, 347-395.

Littell, A. E. (2003). The illustrated self: Construction of meaning through tattoo images and their narratives (Doctoral Dissertation). Retrieved from Proquest database. (Order No. AAI3077541).

Odgers Berndtson. (2012). After the Baby Boomers: A Next Generation of Leadership [Brochure]. London: England, Cass Business School.

Palaiou, K. & Furnham, A. (2014). Are bosses unique? Personality facet differences between CEOs and staff in five work sectors. Consulting Psychology Journal: Practice and Research, 66, 173-196.

Parry, A. (1933). Tattoo: Secrets of a strange art as practiced among the natives of the United States. Madison, WI: Simon and Schuster.

Pearsall, M.J., & Ellis, A. P. J. (2006). The effects of critical team member assertiveness on team performance and satisfaction. Journal of Management, 32, 575-594.

Roberts, T., & Ryan, S. (2002). Tattooing and high risk behavior in adolescents. Pediatrics, 110, 1058-1063.

Sackett, P. R., Burris, L. R., & Ryan, A. M. (1989). Coaching and practice effects in personnel selection. In C. L. Cooper & I. T. Robertson (Eds.), International review of industrial and organizational psychology (pp. 145-183). New York: Wiley.

Schmitt, D. P., Allik, J., McCrae, R. R., Benet-Martínez, V., Alcalay, L., & Ault, L. (2007). The geographic distribution of Big Five personality traits: Patterns and profiles of human self-description across 56 nations. Journal of Cross-Cultural Psychology, 38, 173–212.

Stabile, S. (2013). The use of personality tests as a hiring tool: Is the benefit worth the cost?. University of Pennsylvania Journal of Business Law, 4, 279-288.

Stirn, A., Hinz, A., & Brahler, E. (2006). Prevalence of tattooing and body piercing in Germany and perception of health, mental disorders, and sensation seeking among tattooed and body-pierced individuals. Journal of Psychosomatic Research, 60, 531-534.

Swami, V. (2012). Written on the body? Individual differences between British adults who do and do not obtain a first tattoo. Scandinavian Journal Of Psychology, 53, 407-412.

Swami, V., Pietschnig, J., Bertl, B., Nader, I. W., Stieger, S., & Voracek, M. (2012). Personality differences between tattooed and non-tattooed individuals. Psychological Reports, 111, 97-106.

Tate, J. C., & Shelton, B. L. (2008). Personality correlates of tattooing and body piercing in a college sample: The kids are alright. Personality and Individual Differences, 45, 281-285.

Wohlrab, S. M. (2007). Differences in personality characteristics between body-modified and non-modified individuals: Associations with individual personality traits and their possible evolutionary implications. European Journal Of Personality, 21, 931-951.

Wohlrab, S., Fink, B., & Kappeler, P. M. (2005). Human body ornaments from an evolutionary perspective – Diversity and function of tattoos, piercings, and scarification. Mitteilungen der Anthropologischen Gesellschaft Wein, 134/135, 1-10.

Wohlrab, S., Stahl, J., Rammsayer, T., & Kappeler, P. M. (2007). Differences in personality characteristics between body modified and nonmodified individuals and possible evolutionary implications. European Journal of Personality, 21, 931-951.

A Comparative Study of Herpetofauna in a Primary Forest and Reforested Area in Coastal Ecuador

Abstract

Deforestation impacts tropical forests and alters habitats worldwide, which can negatively affect sensitive species. Reforestation efforts are currently increasing. However, little is known about herpetofaunal (reptile and amphibian) reconstruction post-reforestation, other than that it is a slow process. This study focuses on determining impacts of reforestation efforts in coastal Ecuador. Transects at similar elevations were established in a forested and reforested area in and near the Jama-Coaque Ecological Reserve. Visual Encounter Surveys (VES) were carried out, and species richness, abundance, composition, and density were determined. A Chi-square test was used to determine association between reptiles and amphibians within the two sites, and a t-test was used to determine if there was a significant difference in species density between the two sites. Five species of reptiles and eight species of amphibians were detected. Transects within the reforested area yielded six species (n = 27) and transects within the forested area yielded ten species (n = 132). Species richness and abundance were higher in the forested area, and a significant association was found between reptiles and amphibians within the two sites. Density was not significantly different between the two sites, and the species composition in both sites was representative of typical ecological community structures with little overlap between sites. These results provide baseline data to facilitate a better understanding of reforestation effects, which will contribute to the development of better management practices in remaining tropical forests and thus better protection of Earth’s biodiversity.

Introduction

Biodiversity is the foundation of life on Earth, and all organisms play roles in ecosystem processes as part of the food web or otherwise. Reptiles and amphibians are not only important to food webs, but they commonly link aquatic and terrestrial systems (Urbina-Cardona, 2008). Over half of all species on Earth, including almost 50% of amphibians and around 32% of reptiles, live in tropical forests, which comprise 7% of Earth's terrestrial ecosystems (Urbina-Cardona, 2008). Many herpetofaunal (reptile and amphibian) species are in decline and at risk of extinction due to changes in land use that affect the environment. Amphibians are important as key indicators of environmental health, since negative changes to the environment such as habitat loss, introduced species, pollution and contaminants, pathogens, climate change, and interactions between these factors (McCallum, 2007) often reduce amphibian survival through lower reproduction rates, loss of genetic diversity, population isolation, and changes in growth and activity levels (Urbina-Cardona, 2008). Rates of extinction are increasing exponentially: current amphibian extinction rates are roughly 211 times greater than background rates, with an expected increase to 25,000-45,000 times background rates within the next 50 years (McCallum, 2007). It is crucial that sufficient habitat be protected for all herpetofaunal species, not only amphibians, as they play critical roles within their ecosystems and help maintain biodiversity, health, and stability within their respective environments. The loss of biodiversity that would result from further decreases in reptile and amphibian populations would impair critical ecosystem functions (McCallum, 2007), which would in turn affect the overall environment. This would negatively impact human quality of life, as biodiversity supports the natural resources that humans rely on for food, shelter, medicine, and materials needed to produce commodities (Urbina-Cardona, 2008).

Herpetofaunal extinctions and extirpations (localized extinctions) occur for many reasons, including changes in land usage such as deforestation, landscape fragmentation, and landscape degradation, all of which reduce available herpetofaunal habitat (Urbina-Cardona, 2008). Other drivers of rising extinction and extirpation rates include diseases and pathogens, pollution and toxin bioaccumulation, overexploitation (including pet trade trafficking and scientific collection), the introduction of invasive species, climate change, and combinations of these factors (McCallum, 2007; Urbina-Cardona, 2008). The loss of suitable breeding habitat, such as the seasonal ponds that certain species rely on, affects the timing and duration of breeding periods, which can increase the frequency of extinction and extirpation events. Habitat loss reduces the geographic ranges of individuals. It also has the secondary effect of creating isolated populations, which decreases genetic diversity, as separated populations may be unable to cross fragmented areas and anthropogenic barriers to interbreed. A combination of these factors, especially habitat loss and climate change, can also alter the growth rates of individuals, as well as their activity patterns and microhabitat use (Urbina-Cardona, 2008).

Although deforestation is a worldwide issue, it is especially prominent in tropical countries such as Ecuador. Deforestation may have negative effects on reptile and amphibian species richness, reproductive success, community composition, and biomass (Enge & Marion, 1986). Availability of food as well as water for breeding may be altered due to deforestation (Enge & Marion, 1986). Edge effects resulting from deforestation may impact wind speed, moisture levels, temperature regimes, solar penetration, and vapour pressure near forest edges, which may adversely impact herpetofauna (Lehtinen, Ramanamanjato, & Raveloarison, 2003). Deforestation may also increase the amplitude of seasonal environmental fluctuations (Heinen, 1992), negatively affecting ectothermic herpetofauna, especially moisture-dependent amphibians (Lehtinen et al., 2003). As the negative effects of deforestation become more visible, reforestation efforts are on the rise, and it is important to understand how herpetofauna are affected by these efforts. No published studies to date have followed the long-term changes in herpetofauna that occur post-reforestation until forest maturation is reached, but a previous small-scale study indicated that herpetofaunal restoration is a slow process (Heinen, 1992). The same study demonstrated that maintenance of primary forest (a naturally occurring and relatively undisturbed forested area) is critical for the survival of some species (Heinen, 1992). Studies have compared naturally forested areas to disturbed areas (Enge & Marion, 1986; Heinen, 1992), but reforested areas warrant special concern because non-native species are often introduced for economic rather than ecological purposes. Moreover, trees planted during reforestation efforts are typically arranged in an organized fashion rather than in a way that mimics natural dispersion, causing further differences between reforested areas and their natural counterparts. More research is needed to fully understand the long-term impacts of reforestation on herpetofaunal restoration.

This study was conducted to determine whether there are differences in species richness, abundance, composition, and density between a forested area (primary forest) and a reforested area (reforested one year prior to the commencement of the study) at similar elevations within and near the Jama-Coaque Ecological Reserve, which is located in the Pacific coastal forests near Camarones, Manabi Province, Ecuador. We hypothesized that species richness, abundance, composition, and density would be lower in the reforested area compared to the forested area.

Because moisture and elevation influence the density, diversity, and biomass of herpetofaunal communities (Heinen, 1992), the chosen study areas were situated at similar elevations (249-435m above sea level) in close proximity. Studies at both forested and reforested sites were conducted in September 2013 during the dry season. At each site, one transect was established following a creek and two were established through terrestrial vegetation in order to obtain accurate representation from both terrestrial and aquatic habitats.

Methods

Study Area and Site Selection

The research was carried out in and near the Jama-Coaque Ecological Reserve, Camarones, Manabi province, Ecuador. The Reserve is located on one of the largest remaining tracts of undisturbed coastal equatorial primary forest, within the Tumbes-Chocó-Magdalena conservation corridor (J. Trujillo, personal communication, August 19, 2013). This biodiversity hotspot was partially deforested during road construction and the spread of agricultural lands between 1965 and 1975 (J. Trujillo, personal communication, August 19, 2013). With current reforestation efforts in these areas, it is an ideal location for gathering baseline data relevant to the herpetofaunal responses to reforestation.

The area in which this study was conducted contains both tropical wet forest and cloud forest. The average daily temperature ranges from 19-28 °C in the dry season, when monthly rainfall is less than 10 cm, and from 24-31 °C in the wet season, when monthly rainfall is 2-100 cm (J. Trujillo, personal communication, August 19, 2013). The sites selected for this study included an area of primary forest on the reserve (Montanita trail; elevation of 252-435m above sea level) and a reforested area adjacent to the reserve (Site A in the Finca de Mandera, which is managed by Third Millennium Alliance through the Jama-Coaque Ecological Reserve; elevation of 249-283m above sea level). The primary forest vegetation consists of trees, small shrubs, and herbs. Vegetation in the reforested area consists of grasses and tree saplings of various native species. Three transects were established in each of the two sites.

Figure 1. Three 200 m transects through the primary forest of Montanita trail in the Jama-Coaque Ecological Reserve: one through the river (#7), one on the southern portion of the trail (#8), and one on the northern portion of the trail (#9). All transects are illustrated in red, while the river is illustrated in blue, and the pre-existing Montanita trail in black.
Figure 2. Three parallel 200 m transects through the reforested area near the Jama-Coaque Ecological Reserve: one through the creek (#2), one north of the creek (#3), and one south of the creek (#1). All transects are approximately equidistant from each other and are illustrated in red, while the creek is illustrated in blue, and the plot border in black.

Establishment of Transects

The Global Positioning System (GPS) coordinates for sites were recorded using the Universal Transverse Mercator (UTM) coordinate system. One transect in the primary forest was established along the creek (Rio Camarones) at the entrance to the trail (start point: 17M 0597220mE 9987147mN; end point: 17M 0597078mE 9987013mN), one transect originated at the end of the ridge at the western edge of the trail and descended along the southern slope (start point: 17M 0596908mE 9987141mN; end point: 17M 0596740mE 9987098mN), and the third transect originated from the other end of the same ridge along the northern slope (start point: 17M 0596912mE 9987348mN; end point: 17M 0596758mE 9987394mN; Figure 1). All transects were 200m in length. The creek transect, established by two people equipped with a 30m measuring tape, extended 100m in each direction from the trail entrance. Both ends of the transect were labelled and marked with flagging tape, and GPS coordinates and elevation were measured with a GPS device and recorded. The other transects extended 200m eastward along the trail from each end of the western ridge, and both were established and documented in similar fashion to the creek transect. The creek and the trail were followed in order to minimize impact on the environment, but a machete was used for additional clearing where passage was difficult.

Transects in the reforested area were similarly established, but each transect was measured by one person using a 100m rope. One transect in the reforested area was established along a creek through the middle of the site (start point: 17M 0596889mE 9987847mN; end point: 17M 0596710mE 9987885mN). The second transect was located midway up the hills to the south (start point: 17M 0596901mE 9987741mN; end point: 17M 0596724mE 9987837mN). The third transect was positioned north of the creek (start point: 17M 0596701mE 9987919mN; end point: 17M 0596879mE 9987925mN; Figure 2). The transects on either side of the creek were roughly parallel to the creek and located along linear clearings in the grass where trees were planted one year prior to the study. All transects were 200m in length and extended east to west with a 50m buffer from the western edge of the site to exclude species from adjacent areas. The ends of each transect were labelled and marked with flagging tape, and GPS coordinates and elevation were measured with a GPS device and recorded.
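Because UTM coordinates are expressed in meters, the recorded eastings and northings allow a quick consistency check on the transect endpoints. The sketch below (an illustrative check, not part of the original survey protocol) computes the straight-line distance between the endpoints of the forested creek transect; it comes out to roughly 195 m, slightly less than the 200 m taped length, as expected for a transect that follows the creek rather than a straight line:

```python
import math

def chord_length(start, end):
    """Straight-line distance in meters between two UTM points (easting, northing)."""
    return math.hypot(end[0] - start[0], end[1] - start[1])

# Endpoints of the forested creek transect, from the survey records:
# start 17M 0597220mE 9987147mN, end 17M 0597078mE 9987013mN
start = (597220, 9987147)
end = (597078, 9987013)

print(round(chord_length(start, end), 1))  # ~195.2 m vs. the 200 m measured length
```

The same check can be applied to any of the six transects; endpoint chords shorter than 200 m simply reflect curvature of the taped route.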

Survey Methods

Visual encounter surveys (VES) were conducted three times for each transect over a three-week period during September 2013. Each VES began between 1400 and 1500 h or 2000 and 2100 h and required one to two hours to complete. To conduct VES, each transect was surveyed twice in the afternoon to identify diurnal individuals during the hottest portion of the day and once in the evening to identify nocturnal individuals. Researchers minimized disturbances in the transect area by walking as quietly as possible before, during, and after each survey. At the beginning of each VES, the date and start time were recorded, and current weather conditions were visually noted and recorded. Researchers then walked along each transect in a westward direction (except for the northern transect in the reforested area, which was traversed in an eastern direction due to difficulties accessing the transect from the east) while looking and listening for herpetofaunal individuals within two meters of each side of the transect where vegetation allowed. The substrate, grasses, shrubs, trees, other plants, and crevices were all thoroughly searched with the use of a hand-held flashlight. Areas under natural cover objects such as rocks, logs, and leaf litter were also inspected. Every attempt was made to return these objects to their original location to minimize disturbances. When an animal was located, a photograph was taken, GPS coordinates were recorded when the equipment was available to do so, and photo numbers, predominant vegetation, microhabitat, height off the ground, species, estimated size, and distance from water for creek transects were recorded. Snout-vent length was estimated for anurans while total length was estimated for lizards and snakes. The end time was recorded upon completion of the survey. 
The transects were left undisturbed for a minimum of 12 hours before returning to begin the following VES, and the transects were surveyed in the same order each time: one, two, three, seven, eight, nine.

Species Identification Methods

Whenever possible, species were identified in the field. Otherwise photos were used in conjunction with reference materials, including Ron, Guayasamin, Yanez-Muñoz, Merino-Viteri, & Ortiz (2013), Savage (2002), Torres-Carvajal, Salazar-Valenzuela, & Merino-Viteri (2013) and Valencia (2008). When these resources were not sufficient to identify the species of an individual, a local herpetologist, Ryan Lynch of The Biodiversity Group, was consulted.

Table 1. The species composition of reptiles and amphibians detected during Visual Encounter surveys for a forested and reforested area in and near the Jama-Coaque Ecological Reserve.

Analysis

Species richness was calculated for each transect in order to take microhabitat into consideration and for each site in order to compare the two study areas overall. This was accomplished by calculating and comparing the number of species and individuals within species detected in each area. Counts of individuals were used to estimate abundance of each species at each site for general comparisons, and a Chi-square test was carried out to determine whether the proportions of amphibians and reptiles detected in the two study areas were comparable. A two-by-two contingency table with one degree of freedom was created, and the Yates' correction was applied. Species composition was determined in order to examine species evenness, community structure, and species overlap between sites. Species density, defined as individuals per meter, was estimated for each transect, and a Student's t-test was employed to determine whether there was a significant difference in species density between the forested and reforested sites.
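The Chi-square procedure described above can be sketched in a few lines. The function below implements a 2 × 2 test with Yates' continuity correction; applied to the counts reported in the Results (5 reptiles and 127 amphibians in the forested site, 6 reptiles and 21 amphibians in the reforested site), it reproduces the reported statistic of 9.14:

```python
def yates_chi_square(table):
    """Chi-square statistic for a 2x2 contingency table with Yates' correction.

    table: [[a, b], [c, d]] of observed counts.
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / n
            # Yates' correction: subtract 0.5 from |O - E| before squaring
            chi2 += (abs(table[i][j] - expected) - 0.5) ** 2 / expected
    return chi2

# Rows = forested, reforested sites; columns = reptiles, amphibians
observed = [[5, 127], [6, 21]]
print(round(yates_chi_square(observed), 2))  # 9.14, matching the reported value
```

With one degree of freedom, a statistic of 9.14 exceeds the 5% critical value of 3.84, which is why the association is reported as significant.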

It was assumed that data from the transects were representative of the site, all individuals had an equal chance of being observed, all samples were random, all observations were independent, all abiotic factors were equal and did not affect data collection, and observer bias was consistent throughout the study.

Results

Overall, five species of reptiles (n = 11) and eight species of amphibians (n = 148) were observed (Table 1). Of these, the forested site had higher species richness, hosting three species of reptiles (five individuals) and seven species of amphibians (127 individuals). The reforested site hosted three species of reptiles (six individuals) and three species of amphibians (21 individuals). The transects running through creeks had higher numbers of individuals in both sites; 109 of the 132 individuals in the forested site and 21 of the 27 individuals in the reforested site were found in the creek transects. Although the forested site had a higher abundance overall, there was a statistically significant difference in the proportion of reptiles to amphibians between the two sites, χ² (df = 1, n = 159) = 9.14, p < .05. Upon closer inspection of the contingency table, the chi-square contribution for reptiles in the reforested area is much greater than the contributions of the other cells, indicating an association between these factors. As the observed frequency of reptiles in the reforested area is greater than the expected frequency, we can infer a positive relationship between reptiles and reforested areas.

Epipedobates machalilla was the most abundant herptile species at both sites, accounting for roughly 70% of the total individuals observed and for the major numerical difference in abundance between the two sites (Figures 3 and 4). Of the two most abundant species in the forested area (Figure 3), E. machalilla was commonly seen near the river, while Pristimantis achatinus was commonly found in more wooded habitats, including along the river. Of the two most abundant species in the reforested area (Figure 4), E. machalilla was commonly observed near the creek, whereas Stenocercus iridescens was found along the edge of vegetation. Species composition overlapped between the sites (3/13 species, 23%; 121/159 individuals, 76.1%), but most species were found at only one site.

The mean herptile density was estimated for each of the two sites, and a t-test was used to determine if there was a significant difference. Although species density was higher in the forested site (M = 0.073 individuals/m, SD = 0.109) compared to the reforested site (M = 0.015 individuals/m, SD = 0.017), the difference was not statistically significant as determined by a Student's t-test (p > .05).
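Because three transects were established at each site, the t statistic can be recovered from the reported summary statistics alone. The sketch below uses the Welch (unequal-variance) form with n = 3 per site; this is one common choice, since the source does not state which variant was used:

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's t statistic and approximate degrees of freedom from summary stats."""
    v1, v2 = sd1 ** 2 / n1, sd2 ** 2 / n2
    t = (mean1 - mean2) / math.sqrt(v1 + v2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# Reported densities (individuals/m): forested vs. reforested, 3 transects each
t, df = welch_t(0.073, 0.109, 3, 0.015, 0.017, 3)
print(round(t, 2), round(df, 1))  # t ≈ 0.91 on roughly 2 degrees of freedom
```

A t of about 0.91 is far below the two-tailed 5% critical value of roughly 4.30 at df = 2, consistent with the non-significant result reported above.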

Figure 3. The species composition for the reptiles and amphibians detected in a section of primary forest in the Jama-Coaque Ecological Reserve in coastal Ecuador.

Figure 4. The species composition for the reptiles and amphibians detected in a reforested area near the Jama-Coaque Ecological Reserve in coastal Ecuador.

Discussion

We hypothesized that species richness, abundance, composition, and density would differ between the forested and reforested sites and that these measures would be lower in the reforested area. Species richness and abundance were higher in the forested site, and the presence of water increased richness on transects within both sites. The difference in the relative abundance of reptiles and amphibians between the two sites was statistically significant, with more reptiles observed in the reforested site. Although E. machalilla represented roughly 70% of all individuals observed and there was some overlap of species, most species were found in only one site. However, there was no statistically significant difference between the mean species densities of the two sites.

Low numbers for both species richness and abundance of amphibians in the reforested site compared to the forested site could be due to the well-documented sensitivity of amphibians to human disturbance. Low amphibian numbers in the reforested area may be a result of lingering impacts of habitat alteration, changes in food and habitat availability, changes in water quality that negatively impact breeding success, or direct mortality during both deforestation and reforestation efforts, which was observed during clear-cutting operations in Florida (Enge & Marion, 1986). Reduction of tree cover may also contribute to lower numbers of amphibians in the reforested area by increasing climatic extremes such as temperature and moisture (Heinen, 1992). Lehtinen et al. (2003) found that frogs in Madagascar avoided open areas during the dry season, possibly due to the permeable nature of amphibian skin, which could account for these results.

It is possible that a significant positive association between reptiles and reforested area was observed because reptiles are less sensitive than amphibians. Thus, these results suggest reptiles may be the first step in herpetofaunal reconstruction, which would explain the higher prevalence of reptiles in this disturbed area compared to amphibians.

In this study, abundance was lower in the reforested site, which contrasts with Heinen’s (1992) findings in Costa Rica, where both abundance and biomass increased in disturbed sites. Lower abundance in the reforested site may have been due to the dry season and the decreased ability of the reforested area to retain moisture in the air, resulting in reduced amphibian activity levels. This could also account for the positive association detected between reptiles and the reforested area. Reptiles do not have permeable skin and are therefore not as sensitive to dry conditions as amphibians.

Although many reptilian individuals were observed on the outer edges of the reforested area, these were not detected during the surveys, which took place in the interior. Tall grass may have reduced visibility adjacent to the transect on the ground and produced errors in sampling if not all individuals present were detected. The survey was completed during the end of the dry season, which most likely resulted in low amphibian activity compared to warmer, wetter seasons. Scott (1976) found that approximately four amphibian species disappear for each additional month of the dry season in tropical forests.

According to Verberk (2011), more than half of all individuals in any given area typically belong to one species. The species composition observed in this study supports this idea, as approximately 70% of individuals detected in each site were E. machalilla. This also indicates that the sites surveyed likely follow a typical community structure, and further surveys could be expected to detect a few rare and many moderately abundant species. However, small sample sizes could be a source of error for this assumption, as larger sample sizes may have shown a different distribution. Additionally, many E. machalilla were transformlings (immature frogs at a life stage between tadpoles and fully formed adults), which could be a source of error if this was the only species reproducing at the time, resulting in a higher proportion of this species than normal relative to other species in the area.

In general, herpetofaunal densities are greater in wet areas than in dry areas, and flat terrains harbour more individuals than slopes (Heinen, 1992; Scott, 1976). There was no significant difference between the mean densities in each site. All transects except those located in water were on slopes, and the highest concentration of herpetofauna in each site was observed in the flat wet transects. Roughly 70% of the individuals at both sites were E. machalilla, a species that favours these same habitat types.

It is possible that a higher number of species in the primary forest resulted from the proximity of this wet forest to nearby cloud forest, as the perpetually moist conditions in cloud forests have been shown to reduce habitat seasonality of amphibian species (Scott, 1976).

Both sites may have been disturbed by heavy clearing activity nearby, which occurred shortly after the establishment of the transects and before sampling commenced. Limitations also include an inability to identify some animals that were not photographed and small sample sizes. Higher sample numbers could be achieved by prolonged sampling, specifically into the wet season and shortly thereafter. The use of VES in conjunction with other methods such as pitfall traps and quadrat leaf litter searches may also have yielded more data, because pitfall traps typically find cryptic species more effectively, and the use of a combination of methods increases study effectiveness, improves sample representation, and reduces the number of surveys needed, thus decreasing human impact (Ribeiro-Júnior, Gardner, & Ávila-Pires, 2008). Pitfall traps also reduce observer bias (Enge & Marion, 1986).

Although these results indicate no significant difference between the mean densities of the two sites, the lower species richness and abundance in the reforested area, the positive association between reptiles and the reforested area, and the mere 23% overlap in species composition between the two areas indicate that further studies monitoring yearly changes would be of value. Restoration of herpetofauna is a lengthy process with no known time frame, and as areas continue to be reforested, maintenance of primary forest will remain vital. This will ensure that reforestation does not negatively impact the long-term success of species present prior to deforestation. Learning more about the effects of reforestation on herpetofauna through further research will clarify the impacts human actions have on these species, so that more effective ways of protecting and conserving them in areas of high human disturbance can be developed. This will increase the chance of survival of rare species and key indicator species that play important roles in ecosystems and life functions. Further research will contribute to the protection of the biodiversity of the Jama-Coaque Ecological Reserve and other areas impacted by reforestation efforts and changes in land usage.

Acknowledgements

I thank John Acorn for his support and help with editing, my family for all their support while carrying out this research, and Maddy Ballard, Bianca Fierro, and Uver Vaka for their help during the research process. Thank you to Ryan Lynch for assistance with species identification, Third Millennium Alliance for providing accommodations and access to the reserve, Jordan Trujillo for guidance with both the research and the writing of the initial report, and the many others who supported the survey and writing process.

References

Enge, K. M., & Marion, W. R. (1986). Effects of clearcutting and site preparation on herpetofauna of a north Florida flatwoods. Forest Ecology and Management, 14, 177-192.

Heinen, J. T. (1992). Comparisons of the leaf litter herpetofauna in abandoned Cacao plantations and primary forest in Costa Rica: Some implications for faunal restoration. Biotropica, 24(3), 431-439.

Lehtinen, R. M., Ramanamanjato, J-B., & Raveloarison, J. G. (2003). Edge effects and extinction proneness in a herpetofauna from Madagascar. Biodiversity and Conservation, 12, 1357-1370.

McCallum, M. L. (2007). Amphibian decline or extinction? Current declines dwarf background extinction rate. Journal of Herpetology, 41(3), 483-491.

Ribeiro-Júnior, M. A., Gardner, T. A., & Ávila-Pires, T. C. S. (2008). Evaluating the effectiveness of herpetofaunal sampling techniques across a gradient of habitat change in a tropical forest landscape. Journal of Herpetology, 42(4), 733-749.

Ron, S. R., Guayasamin, J. M., Yanez-Muñoz, M. H., Merino-Viteri, A., & Ortiz, D. A. (2013). AmphibiaWebEcuador.

Savage, J. (2002). The amphibians and reptiles of Costa Rica: A herpetofauna between two continents, between two seas. Chicago, Illinois: University of Chicago Press.

Scott, N.J. Jr. (1976). The abundance and diversity of the herpetofaunas of tropical forest litter. Biotropica, 8(1), 41-58.

Torres-Carvajal, O., Salazar-Valenzuela, D., & Merino-Viteri, A. (2013). ReptiliaWebEcuador.

Urbina-Cardona, J. N. (2008). Conservation of neotropical herpetofauna: research trends and challenges. Tropical Conservation Science, 1(4), 359-375.

Valencia, J. H. (2008). Guia de campo de reptiles del Ecuador. Quito, Ecuador: Fundacion Herpetologica Gustavo Orces.

Verberk, W. C. E. P. (2011). Explaining general patterns in species abundance and distributions. Nature Education Knowledge, 3(10), 38.

Access Full PDF: A comparative study of herpetofauna in a primary forest and reforested area in coastal Ecuador.

Interval Estimation of Glomerular Filtration Rate from Iohexol Clearance


Abstract

Glomerular filtration rate (GFR) is the amount of fluid the kidney filters through the glomeruli per unit time. It is used to evaluate renal function, since a low value indicates poor kidney function. Serial plasma concentrations of an exogenous marker over time can be used to estimate GFR. In this paper, we demonstrate how the delta method can be applied to approximate the standard error of estimated GFR, thus allowing the provision of an interval estimate for GFR, using two pharmacokinetic models: the single compartment and the non-compartmental. These results were applied to canine observations of plasma iohexol concentrations. We found that the non-compartmental model results in narrow confidence intervals (CIs) and that the GFR estimate from a single compartment model is close to that of the non-compartmental model. In the single compartment model, the size of the standard error, and thus the width of the interval, increases as GFR increases. The closeness of the two GFR estimates is reassuring to practitioners who routinely use the single compartment model for simplicity. However, the width of the CI for the single compartment model is of concern when concentration is observed at only a few time points. The non-compartmental model offers the advantage of relatively small standard errors and narrow CIs. This work contributes to the body of knowledge on the estimation of GFR by showing how routinely derived estimates vary due to sampling, indicating the level of importance that clinicians can ascribe to a single estimate of GFR.

Introduction

Glomerular Filtration Rate

The kidneys are responsible for carrying out one of the vital functions in the human body: filtration of the accumulated waste out of the blood. The kidneys excrete the particles they filter from the plasma via the urinary system. This filtering occurs inside special clusters of blood vessels in the kidneys called glomeruli. Glomerular filtration rate (GFR) is a clinical quantity used as an index for evaluating renal function, where a low value would be indicative of improper kidney functioning. GFR is the volume of the plasma filtered through the glomeruli per unit time (mL/min). In practice, this value can only be estimated, since unobservable quantities such as the surface area of the glomerular capillaries and various hydrostatic and osmotic pressures are needed to calculate the true GFR. With chronic kidney disease in humans on the rise, there is an increasing demand for an accurate estimation of GFR (Stevens, Coresh, Greene, & Levey, 2006).

Estimating the Glomerular Filtration Rate

Since the GFR cannot be measured directly, the rate of change of the concentration of an inert marker in the plasma can serve as an approximation of GFR, as long as that marker is filtered solely by the glomeruli. Endogenous (naturally occurring in plasma) or exogenous (injected) substances have been used for this purpose. Currently, creatinine is used as one such endogenous marker (Tanner, 2009). However, its level in the plasma is closely related to the muscle mass of the individual, so estimated GFR in humans is normalized by considering several factors, such as gender, race, and age (Bagshaw & Bellomo, 2009). Various non-radioactive substances have been used as exogenous markers. For example, inulin and iothalamate have been used to approximate the GFR, and these markers have been shown to give a more accurate GFR than endogenous markers such as creatinine (Tanner, 2009).

The plasma clearance rate of an exogenous marker is determined by gathering consecutive measurements of plasma concentration until a zero level is reached. Then, as we will show later, the GFR is the initial dose D (mg) divided by the area under the concentration versus time curve (min·mg/mL), because dose divided by this area has units of volume per unit time. However, waiting for the marker concentration to reach zero is time-consuming, particularly when renal clearance is poor, so the observation time is usually shorter than the time it would take for the concentration to reach zero. This clinical constraint creates a problem: how to estimate the area of the unobserved tail of the concentration-time curve.
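As a dimensional check of this relationship (the numbers below are purely illustrative, not study data):

```latex
\mathrm{GFR} \;=\; \frac{D}{\mathrm{AUC}}
\;=\; \frac{1000\ \mathrm{mg}}{500\ \mathrm{min\cdot mg/mL}}
\;=\; 2\ \mathrm{mL/min}
```

The mass units cancel, leaving volume per unit time, as required.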

Models of Glomerular Filtration Rate

One complication that arises when considering possible mathematical models for the concentration-time curve is that the initial dose does not mix uniformly with the body fluids immediately. This results in the initial part of the concentration curve having a different character than the tail of the curve. Various mathematical formulations, such as the single compartment model and the non-compartmental model, have been used for modeling the concentration curve.

Standard Errors of Glomerular Filtration Rate

Because of the relative numerical complexity of estimating GFR by these methods, various commercial products, such as WinNonlin (Pharsight Corp., 2012), are designed for this purpose. However, not all of them make the variability of the estimates they produce readily available to the end user. The intent of this paper is to demonstrate how to apply the delta method (Casella & Berger, 2002) to approximate the standard error of estimated GFR, thus allowing the provision of an interval estimate rather than just a point estimate of GFR. We will then compare the efficiency of the two models used to calculate the GFR and its standard error (Heiene & Moe, 1998).

The Delta Method

The delta method was first described by economist Robert Dorfman in 1938 (Dorfman, 1938; Ver Hoef, 2012). The idea is as follows: standard formulas are available for calculating the standard error of a linear combination of the coefficients from a regression model (Casella & Berger, 2002), but no such formulas exist for non-linear combinations. The delta method therefore approximates the non-linear function with a linear one using a first order Taylor series (Stewart, 2012).
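The general recipe can be sketched in a few lines of code (a minimal illustration, not part of the original analysis; the function g and all numbers below are hypothetical): given the gradient of a non-linear function of the estimates and the variance-covariance matrix of those estimates, the delta method returns an approximate standard error.

```python
import numpy as np

def delta_method_se(grad, cov):
    """Approximate SE of g(theta_hat): sqrt(grad' * Cov(theta_hat) * grad),
    where grad is the gradient of g evaluated at theta_hat."""
    grad = np.asarray(grad, dtype=float)
    cov = np.asarray(cov, dtype=float)
    return float(np.sqrt(grad @ cov @ grad))

# Hypothetical example: g(b0, b1) = exp(b0) / b1.
b0, b1 = 1.0, 2.0
cov = np.array([[0.04, 0.01],   # assumed Var(b0), Cov(b0, b1)
                [0.01, 0.09]])  # assumed Cov(b0, b1), Var(b1)
grad = np.array([np.exp(b0) / b1,        # dg/db0
                 -np.exp(b0) / b1**2])   # dg/db1
se = delta_method_se(grad, cov)          # approximately 0.31
```

Everything below is an application of this one matrix product; only the gradient changes with the model.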

Application to Canine Data

Due to the ethical considerations involved in human studies, an attractive alternative is to use animals as models to study renal function. Various successful animal models are used for human anatomical prediction. For kidney function, both dog and rat models are advantageous because their renal clearance is similar to that of humans. However, dogs are preferred over rats as a model for human renal function since the size of the canine kidney is closer to that of humans (Paine, Ménochet, Denton, McGinnity, & Riley, 2011). We will apply the statistical methods we develop here to canine data collected previously, which includes observations of plasma concentrations of an exogenous marker, iohexol, post bolus intravenous infusion.

Methods and Materials

Single Compartment Model

Fluids in the body reside in different locations and move between these locations by diffusion. Two major reservoirs of body fluids are blood plasma and the fluid between tissues, known as interstitial fluid. The interstitial fluid surrounds individual cells and is important in transporting nutrients and waste into and out of cells.

The single compartment model assumes that the iohexol mixes into the body fluids immediately upon injection and that it does so perfectly, meaning that its concentration is uniform throughout the entire plasma and interstitial fluid volume. Mathematically, this assumption is embodied by the iohexol plasma concentration fitting an exponential curve,

(1) $C(t) = C_0 e^{-\lambda t}$

where C0 > 0 is the initial concentration of iohexol (mg/mL), λ > 0 is the rate constant of the concentration decrease (1/min), and t is time (min). Dividing the initial dose D (mg) by the area under this concentration curve gives the clinical estimate of GFR (mL/min), which we now want to estimate statistically from observations of plasma iohexol concentration.

Integration of the function in Equation (1) from zero to infinity shows that the area under the curve is given by

(2) $\mathrm{AUC} = \int_0^{\infty} C_0 e^{-\lambda t}\,dt = \frac{C_0}{\lambda}$

where AUC is the area under the curve. Thus, if we can estimate the AUC and the constant C0, we will be able to calculate λ, and in turn the GFR, using Equation (2).

Given a set of concentration-time data, we can estimate AUC and C0 as follows. Taking the natural logarithm of both sides of Equation (1) gives the linear expression

(3) $\ln C(t) = \beta_0 + \beta_1 t$

where β1 will be negative. The β values are estimated by fitting a simple linear regression model (Kutner, Nachtsheim, Neter, & Li, 2005, Section 1.8) to the natural log transformed concentration observations. The estimates of β0 and β1 are denoted by b0 and b1, respectively, and other estimates are denoted with hats. Notice that b0 is the estimate of ln(C0) and b1 is the estimate of −λ. The GFR from the single compartment model is then estimated by applying Equation (2):

$\widehat{\mathrm{GFR}}_S = \frac{D}{\widehat{\mathrm{AUC}}} = \frac{-D\,b_1}{e^{b_0}} = -D\,b_1 e^{-b_0}$

The smallest number of observations necessary to estimate the GFR using the single compartment model is two, because two points are needed to determine a straight line. Such estimates have zero standard error, because b0 and b1 are uniquely determined by the two observations. When more than two observed points are used, however, the GFR estimate has a non-zero standard error, which we will approximate using the delta method. This entails approximating the non-linear function of the two variables b0 and b1 with its first order Taylor series (Stewart, 2012, Section 11.10) about the true parameters (β0, β1):

$-D\,b_1 e^{-b_0} \approx -D\,\beta_1 e^{-\beta_0} + D\,\beta_1 e^{-\beta_0}(b_0 - \beta_0) - D\,e^{-\beta_0}(b_1 - \beta_1)$

so that

$\mathrm{Var}(\widehat{\mathrm{GFR}}_S) \approx D^2 e^{-2\beta_0}\left[\beta_1^2\,\mathrm{Var}(b_0) + \mathrm{Var}(b_1) - 2\,\beta_1\,\mathrm{Cov}(b_0, b_1)\right]$

The final variance formula includes the population parameters β0 and β1, which are unknown, so in practice we replace them with their estimates. In summary:

(4) $\widehat{\mathrm{GFR}}_S = -D\,b_1 e^{-b_0}$

where $\ln(\hat{c}) = b_0 + b_1 t$ is the fitted simple linear regression model, and

(5) $\mathrm{SE}(\widehat{\mathrm{GFR}}_S) = D\,e^{-b_0}\sqrt{b_1^2\,\mathrm{Var}(b_0) + \mathrm{Var}(b_1) - 2\,b_1\,\mathrm{Cov}(b_0, b_1)}$

Here Var(b1), Cov(b0, b1), and Var(b0) are the components of the variance-covariance matrix (Kutner et al., 2005, Section 5.13) from the simple linear regression model, Equation (3).
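The whole single compartment calculation, Equations (3) through (5), can be sketched as follows (a minimal illustration assuming NumPy; the function name and interface are our own, not from the Stata analysis used in the paper):

```python
import numpy as np

def gfr_single_compartment(t, c, dose):
    """Fit ln(c) = b0 + b1*t by least squares, then estimate
    GFR = -dose*b1*exp(-b0) (Equation (4)) and its delta-method
    standard error (Equation (5))."""
    t = np.asarray(t, dtype=float)
    y = np.log(np.asarray(c, dtype=float))
    n = len(t)
    X = np.column_stack([np.ones(n), t])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    b0, b1 = beta
    # Variance-covariance matrix of (b0, b1) from the regression.
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - 2)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    gfr = -dose * b1 * np.exp(-b0)
    # Delta method: gradient of -dose*b1*exp(-b0) w.r.t. (b0, b1).
    grad = np.array([dose * b1 * np.exp(-b0), -dose * np.exp(-b0)])
    se = float(np.sqrt(grad @ cov @ grad))
    return gfr, se
```

On noise-free exponential data c = C0·e^(−λt), this recovers GFR = Dλ/C0 exactly with essentially zero standard error, consistent with the two-point case discussed above.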

Non-Compartmental Model

The assumption of the single compartment model, that the iohexol mixes into the body fluids instantly and uniformly, is an oversimplification. Actually, we can think of the body as having two distinct compartments of fluid, that in the plasma and that in the interstitial fluid. The injected iohexol initially disperses into the plasma quickly, but also begins moving into the interstitial fluid by diffusion once its concentration in the plasma is high enough. This is happening at the same time as a well-functioning kidney is removing the iohexol by filtration. Thus the initial decline in iohexol concentration is actually the sum of two physiological processes – filtration and diffusion. Over time, the concentration of iohexol in the plasma will become lower than that in the interstitial fluid and so diffusion will begin to operate in the other direction, releasing iohexol back into the plasma. The non-compartmental model attempts to be flexible enough to model this complex process without undue mathematical intricacy.

In the case where multiple observations of concentration over time are available, the area under the observed part of the curve can be calculated by the trapezoidal rule (Stewart, 2012, Section 7.7):

$\mathrm{AUC}_{\mathrm{obs}} = \sum_{i=1}^{n-1} \frac{c_i + c_{i+1}}{2}\,(t_{i+1} - t_i)$

where AUCobs is the area under the curve during the observational period. This eliminates the need to make an assumption about the form of the concentration curve during that period. To estimate the area between t = 0 and the first observed point (t1, c1), the single compartment model is used: the line fit through the natural log transform of the first two observed points, (t1, c1) and (t2, c2), is extrapolated back to t = 0, and integration from t = 0 to t1 yields:

(6) $\mathrm{AUC}_{\mathrm{pre}} = \int_0^{t_1} e^{a_0 + a_1 t}\,dt = \frac{c_1 - e^{a_0}}{a_1}$

where AUCpre is the area under the curve before observations began, a1 = (ln(c2) − ln(c1))/(t2 − t1) is the slope of the extrapolated line, and a0 = ln(c1) − a1t1 is its intercept. A similar approach is used to find the area under the curve from the last observed point (tn, cn) out to infinity. Fitting a simple linear regression model to the last three or more log transformed points and integrating from tn to infinity gives:

(7) $\mathrm{AUC}_{\mathrm{tail}} = \int_{t_n}^{\infty} e^{b_0 + b_1 t}\,dt = -\frac{e^{b_0 + b_1 t_n}}{b_1} = -\frac{\hat{c}_n}{b_1}$

where AUCtail is the area under the curve after observations were stopped.

Because this approach requires at least three observed points, it is necessary to decide how many points to use when estimating the decay rate of the tail portion. Following the approach used by WinNonlin (Pharsight Corp., 2012), the model that results in the highest R2 is used (R2 is a number between 0 and 1 indicating the closeness of a fitted line to the observed points, where 1 indicates a perfect fit). The total AUC is the sum of AUCpre, AUCobs, and AUCtail. The estimated GFR is then the dose D divided by the AUC:

(8) $\widehat{\mathrm{GFR}}_N = \frac{D}{\mathrm{AUC}_{\mathrm{pre}} + \mathrm{AUC}_{\mathrm{obs}} + \mathrm{AUC}_{\mathrm{tail}}}$

where $\ln(\hat{c}) = b_0 + b_1 t$ is the least squares fit to the tail with the greatest R2.
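The non-compartmental pieces, AUCobs, AUCpre (Equation (6)), and AUCtail (Equation (7)), combine as in Equation (8). A minimal sketch follows (assuming NumPy; for simplicity the number of tail points is a fixed argument rather than being chosen by the highest R2 as described above):

```python
import numpy as np

def gfr_noncompartmental(t, c, dose, tail_pts=3):
    """Estimate GFR = dose / (AUC_pre + AUC_obs + AUC_tail),
    Equation (8), from concentration-time data."""
    t = np.asarray(t, dtype=float)
    c = np.asarray(c, dtype=float)
    # Observed area by the trapezoidal rule.
    auc_obs = float(np.sum((c[:-1] + c[1:]) / 2 * np.diff(t)))
    # Area from t=0 to t1: back-extrapolate an exponential
    # through the first two points (Equation (6)).
    a1 = (np.log(c[1]) - np.log(c[0])) / (t[1] - t[0])
    a0 = np.log(c[0]) - a1 * t[0]
    auc_pre = (c[0] - np.exp(a0)) / a1
    # Tail: fit ln(c) = b0 + b1*t to the last tail_pts points and
    # integrate from tn to infinity (Equation (7)).
    b1, b0 = np.polyfit(t[-tail_pts:], np.log(c[-tail_pts:]), 1)
    auc_tail = -np.exp(b0 + b1 * t[-1]) / b1
    return dose / (auc_pre + auc_obs + auc_tail)
```

On a pure exponential curve the total AUC comes out close to C0/λ; the trapezoidal rule slightly overestimates the area between widely spaced points, which is why the result is not exact.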

The standard error of the estimated GFR was determined by the delta method. We begin with the first order Taylor series of the inverse of AUC about β1:

$\frac{1}{A - \hat{c}_n/b_1} \approx \frac{1}{A - \hat{c}_n/\beta_1} - \frac{\hat{c}_n/\beta_1^2}{\left(A - \hat{c}_n/\beta_1\right)^2}\,(b_1 - \beta_1)$

where A= AUCpre + AUCobs. Then the variance is calculated as follows:

$\mathrm{Var}(\widehat{\mathrm{GFR}}_N) \approx \frac{D^2\,\hat{c}_n^2}{\beta_1^4\,\mathrm{AUC}^4}\,\mathrm{Var}(b_1)$

Thus the standard error of the GFR estimated by the non-compartmental method is

(9) $\mathrm{SE}(\widehat{\mathrm{GFR}}_N) = \frac{D\,\hat{c}_n}{b_1^2\,\mathrm{AUC}^2}\,\mathrm{SE}(b_1)$

Figure 1. Observed iohexol log-concentration versus time curve for Dog 18, together with estimated simple linear regression line (solid) and 95% CI (dashed). Using the single compartment model, the estimated AUC=126 (Equation (2)), estimated GFRS=2.15 (Equation (4)), standard error of GFRS=0.11 (Equation (5)), and the 95% CI is (1.88, 2.42) (Equation (10)).


Figure 2. Observed iohexol log-concentration versus time curve for Dog 1, together with estimated simple linear regression line (solid) and 95% CI (dashed). The intercept of the fitted line is 0.39, slope=-0.003, and the R2 for the model was 93.3%. Using the single compartment model, the estimated AUC=425 (Equation (2)), estimated GFRS=0.63 (Equation (4)),  standard error of GFRS=0.05 (Equation (5)), and the 95% CI is (0.51, 0.75) (Equation (10)).


Figure 3. Observed iohexol concentration versus time curve for Dog 18. Using the non-compartmental model, AUCpre=8.84 (Equation (6)), AUCobs=115 (trapezoidal rule), the slope of the regression line of the tail=-0.009, the R2 for the model is 100%, AUCtail=3.93 (Equation (7)), the total AUC=127, estimated GFRN=2.140 (Equation (8)), the standard error of GFRN=0.0008 (Equation (9)), and the 95% CI is (2.13, 2.15) (Equation (10)).


Table 1. Estimated GFR using the single compartment method (GFRS Equation (4), SE(GFRS) Equation (5) and 95% CI Equation (10)).


Standardization for Size

Estimated GFR values are measured in mL/min for each dog. Since the dogs vary in body size, the GFR estimates are divided by body weight, giving estimated GFR in mL/min/kg.

Table 2. Estimated GFR using the non-compartmental method (GFRN Equation (8), SE(GFRN) Equation (9) and 95% CI Equation (10)).


Confidence Interval for Estimated GFR

Using the standard errors of the estimated GFR derived above, the 95% confidence interval (CI) for estimated GFR was calculated by the usual method:

(10) $\widehat{\mathrm{GFR}} \pm t_{1-\alpha/2,\;n-2}\,\mathrm{SE}(\widehat{\mathrm{GFR}})$

where α = 0.05, n is the number of observations used in the regression model, and t1−α/2, n−2 is the critical value from Student's t-distribution with n − 2 degrees of freedom (Casella & Berger, 2002, Section 5.3.2). For the single compartment model, n is the number of observations; for the non-compartmental model, n is the number of observations used in the regression model of the tail that gives the highest R2. All data manipulation and analysis were carried out using Stata MP (Version 13.1; StataCorp LP, 2012).
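Equation (10) amounts to a one-line computation (a sketch assuming SciPy for the t critical value; the function name is illustrative):

```python
from scipy.stats import t as t_dist

def gfr_ci(gfr, se, n, alpha=0.05):
    """CI for estimated GFR (Equation (10)):
    gfr +/- t_{1-alpha/2, n-2} * SE(gfr), where n is the number of
    observations used in the regression fit."""
    crit = t_dist.ppf(1 - alpha / 2, df=n - 2)
    return gfr - crit * se, gfr + crit * se
```

For example, plugging in Dog 1's single compartment values from Figure 2 (GFRS=0.63, SE=0.05, n=8) gives gfr_ci(0.63, 0.05, 8), which reproduces the reported interval (0.51, 0.75).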

Data Collection

The study was conducted on 21 dogs from a single colony maintained at Texas A&M University. These dogs were female carriers of one defective copy of a gene that causes kidney disease (X-linked hereditary nephropathy, XLHN) and have slowly progressing kidney disease. The iohexol plasma concentration measurement protocol was as follows: dogs were given a bolus intravenous infusion of 270mg I/kg of iohexol over 5min into a cephalic vein, followed by a flush with 2mL of saline. Blood was then drawn at 5, 15, 30, 60, 120, 180, 240, and 360min following minute 2 of the infusion. This paper is a secondary analysis of data collected for a previous study (Nabity et al., 2013). The study protocols were reviewed and approved by the Texas A&M University Institutional Animal Care and Use Committee.

Figure 4. Observed iohexol concentration versus time curve for Dog 1. Using the non-compartmental model, AUCpre=10.01 (Equation (6)), AUCobs=286 (trapezoidal rule), the slope of the regression line of the tail=-0.003, the R2 for the model was 99.1%, AUCtail=167.97 (Equation (7)), the total AUC=464, estimated GFRN=0.580 (Equation (8)), the standard error of GFRN=0.0113 (Equation (9)), and the 95% CI is (0.54, 0.62) (Equation (10)).



Figure 5. Estimated GFR by single and non-compartmental methods for all dogs, together with 95% CIs. (GFRS Equation (4) and GFRN Equation (8))


Results

The 21 dogs were all moderately large, weighing between 18.0 and 30.3kg with a mean of 22.9kg. Figures 1 and 2 display examples of log-concentration versus time curves used in the single compartment model for two different dogs. In Figure 1 (Dog 18), the linear model appears to fit the data well except for the initial part of the curve. On the other hand, in Figure 2 (Dog 1) the linear model does not appear to fit as well, and the steepness of the initial part of the curve is more evident. This curvature in the initial part of the log-concentration versus time curve makes sense physiologically, because this is when the biological marker, iohexol, is being distributed through the body. The distribution phase is followed by the elimination phase, when the kidney is filtering out the marker, which is why the initial curvature is followed by a steady decline. These features impact the R2 of the models, which were 99% and 93%, respectively.

Table 1 lists the means and 95% CI of estimated GFRs for the single compartment method, for all dogs. The table indicates that the standard error, and thus the width of the CI, increases as GFR increases. This is because GFR is a factor in the standard error (Equation (5)).

Figures 3 and 4 show examples of concentration versus time curves using the non-compartmental model for the same two dogs. Both figures are on the same scale. Notice, however, that in Figure 4 the concentration at 360min is still high compared to that in Figure 3, where the concentration is almost at zero. Thus, the AUCtail is considerably larger for Dog 1 than for Dog 18. Table 2 lists the means and 95% CIs of estimated GFRs for the non-compartmental method for all dogs. Notice that the R2 values in both tables are very high, all greater than 99%. This time, the standard error (and thus the width of the CI) decreases as GFR increases. This is due to the term cn/b1 in Equation (9), which decreases so rapidly as GFR increases that, even though it is divided by AUC, the standard error still decreases, particularly when the GFR is small.

Figure 5 displays the GFRs estimated by both methods for all dogs, together with their 95% CIs. Notice that the CIs from the non-compartmental method are much narrower than those from the single compartment method, but the estimated values from the two methods are quite similar. Thus, when eight points between 5 and 360min are used, the single compartment model provides an adequate estimate of the GFR, despite having a comparatively wide CI.

Discussion and Conclusions

GFR is used for evaluating renal function and is estimated using the relationship between the serial plasma concentrations of an exogenous marker and time. This paper demonstrates how the delta method can be applied to approximate the standard error of estimated GFR, thus allowing the provision of an interval estimate for GFR, using two models: the single compartment and the non-compartmental. These results were applied to data on 21 dogs, consisting of eight observations between 5 and 360min per dog. We found that the non-compartmental model results in very narrow CIs, and that the GFR estimate from the single compartment model is close to that of the non-compartmental model, despite the fact that its CI is considerably wider.

The CI for the single compartment model is wide because it fits a simple linear regression model to all the log-concentrations and, because the iohexol dissipates quickly into the interstitial fluid initially, the log-concentration curve is not a straight line. The non-compartmental model, on the other hand, uses the observed concentrations in a deterministic way, meaning that only the area from the unobserved tail portion of the concentration curve is estimated statistically. While this is beneficial for obtaining small standard errors, it gives little information about the accuracy of the point estimate. However, the close agreement between the GFR estimates from the two methods, at least in this case of eight observed points per dog, allays that concern to some extent.

There is no previous literature that directly addresses the standard errors of estimated GFR. Complex treatments of interval estimation in pharmacokinetic-pharmacodynamic modeling, while valuable for their rigor, are largely impenetrable to practitioners and clinicians (Bauer, Guzy, & Ng, 2007; Csajka & Verotta, 2006). Thus this paper constitutes a valuable and accessible contribution to the literature on interval estimation for estimated GFR.

References

Bauer, R. J., Guzy, S., & Ng, C. (2007). A survey of population analysis methods and software for complex pharmacokinetic and pharmacodynamic models with examples. The AAPS Journal, 9(1), E60-E83.

Bagshaw, S. M., & Bellomo, R. (2009). Kidney Function Tests and Urinalysis in Acute Renal Failure in Ronco, C., Bellomo, R., & Kellum, J., (Eds.) Critical Care Nephrology (2nd ed.) (pp. 253). Philadelphia, PA. Saunders Elsevier.

Casella, G., & Berger, R. L. (2002). Statistical inference (2nd ed.). Pacific Grove, CA: Duxbury.

Csajka, C., & Verotta, D. (2006). Pharmacokinetic–pharmacodynamic modelling: History and perspectives. Journal of Pharmacokinetics and Pharmacodynamics, 33(3), 227-279.

Dorfman, R. (1938). A note on the δ-method for finding variance formulae. The Biometric Bulletin, 1(4), 129-137.

Heiene, R., & Moe, L. (1998). Pharmacokinetic aspects of measurement of glomerular filtration rate in the dog: a review. Journal of Veterinary Internal Medicine, 12(6), 401-414.

Kutner, M. H., Nachtsheim, C. J., Neter, J., & Li, W. (2005). Applied Linear Statistical Models. Chicago, IL: McGraw-Hill Irwin.

Nabity, M. B., Lees, G. E., Boggess, M., Yerramilli, M., Obare, E., Yerramilli, M., Aguiar, J., Relford, R. (2013). Week-to-week variability of iohexol clearance, serum creatinine, and symmetric dimethylarginine in dogs with stable chronic renal disease. Journal of Veterinary Internal Medicine, 27(3), 734-734.

Paine, S. W., Ménochet, K., Denton, R., McGinnity, D. F., & Riley, R. J. (2011). Prediction of human renal clearance from preclinical species for a diverse set of drugs that exhibit both active secretion and net reabsorption. Drug Metabolism and Disposition, 39(6), 1008-1013.

Pharsight Corp. (2012). WinNonlin [Computer software]. Princeton, NJ: Pharsight Corp.

StataCorp LP. (2012). Stata MP (Version 13.1) [Computer software]. College Station, TX: StataCorp LP.

Stevens, L. A., Coresh, J., Greene, T., & Levey, A. S. (2006). Assessing Kidney Function — Measured and Estimated Glomerular Filtration Rate. The New England Journal of Medicine, 354(23), 2473-2483.

Stewart, J. (2012). Calculus: Early transcendentals (7th ed.). Belmont, CA: Cengage.

Tanner, G. (2009). Kidney Function. In R. Rhoades & D. Bell (Eds.), Medical Physiology: Principles for Clinical Medicine (pp. 396-397). Philadelphia, PA: Lippincott Williams & Wilkins.

Ver Hoef, J. M. (2012). Who invented the delta method? The American Statistician, 66(2), 124-127.

