Total Electron Content (TEC) Variations and Correlation with Seismic Activity over Japan
Earthquakes are extremely dangerous physical phenomena, and the ability to forecast them reliably would go a long way toward reducing the damage they cause. One forecasting method under investigation uses the ionospheric Total Electron Content (TEC). We examined TEC data from days near and on the date of the March 2011 earthquake off the coast of Tōhoku, Japan, taking advantage of the large volume of GPS records collected by the GPS Earth Observation Network (GEONET) of Japan. These data were used to visualize the TEC over Japan over the course of the day of the Tōhoku earthquake. The resulting video revealed abnormalities consistent with the effects an earthquake is predicted to have on the ionosphere. Because ionospheric disturbances are also known to arise from confounding factors such as solar and geomagnetic activity, careful analysis is included in this paper to exclude that class of disturbances from those attributed to seismic and pre-earthquake activities; the observed abnormalities were shown not to be caused by solar or geomagnetic sources. These results suggest that detectable ionospheric activity precedes earthquakes, and the hope is that the correlation between TEC disturbances and seismic and pre-earthquake activities may serve as an earthquake precursor in the development of an earthquake forecasting method.
Recently, research has focused on the relationship between the state of the ionosphere and seismic activity (Heki & Ping, 2005; Oyama, Kakinami, Liu, Abdu, & Cheng, 2011). One ionospheric parameter that has been investigated is the Total Electron Content (TEC). TEC is defined as the number of electrons along a path between a receiver (rx) on the surface of the earth and a GPS satellite (st) in orbit. The TEC can be computed as the line integral of the electron density ne along the path from the receiver to the satellite, as described in (1):

TEC = ∫ ne(r, Θ, Φ, t) ds, integrated along the path from rx to st (1)

where:
r = range, or radial position (meters)
Θ = latitude (degrees)
Φ = longitude (degrees)
t = time (seconds)
With regard to the correlations between TEC and pre-earthquake and seismic activities, the TEC is an important parameter of study because it has the potential to show the changes in the ionosphere caused by these activities. This is because seismic and pre-earthquake activities create stress in rocks in the earth's crust. These stresses are known to positively charge the rocks. As the positive charges accumulate at the rocks' outer surfaces, they create a difference in potential, which in turn creates a flow of charges that can travel fast and far from their point of origin. As the charges travel upward under the influence of the electric field lines between the surface of the earth and the bottom of the ionosphere, they reach the bottom of the ionosphere and disturb the equilibrium of the electrons there (Freund, Takeuchi & Lau, 2006). These disturbances can be seen in the TEC, which makes TEC a potential candidate as an earthquake precursor. If TEC disturbances could be used as an earthquake precursor, tracking those disturbances could form part of an earthquake forecasting system, improving earthquake warning systems and in turn saving countless lives.
This study uses TEC data from Japan and current knowledge of the Tōhoku Japan earthquake to determine whether pre-earthquake and seismic activities correlate with TEC changes around the time of the earthquake.
TEC disturbances can be observed using GPS signals because the ionosphere introduces a phase delay in the electromagnetic signals transmitted from a GPS satellite in orbit to a receiver on earth. The phase delays change based on several variables: the frequency of the emitted signal, the path from the satellite to the receiver, and the electron density along that path. The phase delay can be used to estimate the distance (called the pseudo-range) between the GPS satellite and the receiver. More specifically, since the ionosphere is a dispersive medium (i.e., its refractive index varies with frequency), the differential delay between two signals of known GPS frequencies can be used to estimate the TEC as shown in (2):

TEC = [f1² f2² / (40.3 (f1² − f2²))] (Prs − DCB) (2)

where:
Prs = the differential pseudo-range measurement (meters)
f1, f2 = the GPS carrier frequencies: 1,575.42 and 1,227.60 MHz
TEC = the Total Electron Content (electrons/m²)
DCB = the differential code biases in the measurements
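The dual-frequency relation in (2) is straightforward to apply in code. The following sketch (in Python rather than the study's FORTRAN) converts a pair of pseudoranges into slant TEC; the example pseudorange values and the zero DCB are illustrative assumptions, not measured data.

```python
# Illustrative sketch (not the study's FORTRAN program): slant TEC from
# dual-frequency GPS pseudoranges via the standard relation in (2).
F1 = 1_575.42e6  # L1 carrier frequency, Hz
F2 = 1_227.60e6  # L2 carrier frequency, Hz

def slant_tec(p1: float, p2: float, dcb: float = 0.0) -> float:
    """Slant TEC in TEC units (1 TECU = 1e16 electrons/m^2).

    p1, p2 : pseudoranges measured on L1 and L2, in meters.
    dcb    : combined differential code bias in meters (assumed known;
             zero here purely for illustration).
    """
    k = (F1**2 * F2**2) / (40.3 * (F1**2 - F2**2))  # electrons/m^2 per meter
    return k * ((p2 - p1) - dcb) / 1e16

# A 3 m L2-L1 differential delay corresponds to about 28.6 TECU.
print(round(slant_tec(20_000_000.0, 20_000_003.0), 1))  # 28.6
```

Note that the conversion factor depends only on the two carrier frequencies, so a fixed differential delay always maps to the same TEC value regardless of the absolute range.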
From these measurements, we can test if seismic activity causes detectable disturbances in the ionosphere’s TEC.
The visualization of TEC data over Japan required GPS readings from the GPS Earth Observation Network (GEONET) of Japan (http://datahouse1.gsi.go.jp/terras/terras_english.html). From these readings, TEC values were obtained using the relationship in (2), resulting in multiple data points for each receiver over time. The GPS data was contained in receiver independent exchange format (RINEX) observation files, which hold the phase delay measurements for the GPS signals along with the times at which the measurements were made. These files were processed with a FORTRAN program, originally developed by Professor Kosuke Heki of Hokkaido University in Japan, to extract TEC data over time. The TEC data was then input into MATLAB (The MathWorks Inc., Natick, MA, 2013) for analysis and visualization. We averaged the data points at each site and time, resulting in one data point per site every 30 seconds (the sampling period of the GPS receivers). This data was interpolated using a linear triangulation method, yielding one grid of interpolated data over Japan every 30 seconds. We then rendered these grids as contour plots, each plot serving as one frame of the video file produced.
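As one concrete step of this pipeline, the per-site averaging into 30-second epochs can be sketched as follows. This is an illustrative Python reimplementation, not the original FORTRAN/MATLAB code, and the (site, time, TEC) record layout is a hypothetical stand-in for the RINEX-derived data.

```python
from collections import defaultdict

# Sketch of the averaging step described above: collapse multiple TEC
# samples per site into one value per 30-second epoch.
def average_per_epoch(records, period=30.0):
    """records: iterable of (site_id, time_seconds, tec_value) tuples.
    Returns {(site_id, epoch_index): mean TEC over that epoch}."""
    sums = defaultdict(lambda: [0.0, 0])
    for site, t, tec in records:
        key = (site, int(t // period))  # which 30-s bin this sample falls in
        sums[key][0] += tec
        sums[key][1] += 1
    return {k: s / n for k, (s, n) in sums.items()}

# Hypothetical samples from one site: two in the first epoch, one in the next.
demo = [("0001", 0.0, 20.0), ("0001", 10.0, 22.0), ("0001", 35.0, 25.0)]
print(average_per_epoch(demo))  # {('0001', 0): 21.0, ('0001', 1): 25.0}
```

Each epoch's mean then becomes one scattered data point for the spatial interpolation step.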
Since disturbances in the ionosphere are not necessarily caused by seismic activity, we considered other possible sources of disturbances in our analysis. We obtained measurements of 10.7 cm solar flux (F10.7), the sunspot number (SSN), and earth’s geomagnetic storm activity (Kp index) over the month of March in 2011. This data was provided by the National Oceanic and Atmospheric Administration’s Space Physics Interactive Data Resource (SPIDR) (http://spidr.ionosonde.net/spidr/). This data was compared to other research (Hasbi et al., 2009) (Ouzounov et al., 2011) to determine whether any ionospheric irregularities could have been caused by solar or geomagnetic sources.
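The screening logic can be illustrated with a short sketch: flag days whose indices exceed commonly used quiet-time thresholds, so that TEC anomalies on those days are not attributed to seismic activity. This is not SPIDR's interface, and the thresholds and index values below are illustrative assumptions.

```python
# Illustrative screen for non-seismic disturbance sources. Thresholds are
# assumptions for demonstration: Kp >= 5 is conventionally a geomagnetic
# storm; the F10.7 "active" cutoff here is arbitrary.
KP_STORM = 5.0       # Kp index storm threshold
F107_ACTIVE = 150.0  # F10.7 in solar flux units (sfu), assumed cutoff

def disturbed_days(daily_kp, daily_f107):
    """daily_kp, daily_f107: {day_of_month: value}.
    Returns a sorted list of days to treat as geomagnetically/solar active."""
    flagged = {d for d, kp in daily_kp.items() if kp >= KP_STORM}
    flagged |= {d for d, f in daily_f107.items() if f >= F107_ACTIVE}
    return sorted(flagged)

# Hypothetical index values loosely mirroring March 2011.
kp = {7: 2.0, 8: 3.0, 11: 5.3}
f107 = {7: 900.0, 8: 120.0, 11: 130.0}
print(disturbed_days(kp, f107))  # [7, 11]
```

Days passing this screen still require the literature comparison described above, since index thresholds alone do not establish causation.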
The videos of TEC levels produced for February 11th and March 11th show the behavior of the ionosphere throughout each day. In the March 11th video, after an initial enhancement (Figure 1A), a sudden depletion of roughly 30 TECU in 15 minutes was observed from 1:35 PM JST to 1:50 PM JST (Figure 1B). Large fluctuations were also observed, with a significant example from 4:25 PM JST (Figure 1C) to 5:00 PM JST (Figure 1D) showing levels decreasing by 50 TECU in 45 minutes. For comparison, plots from February 11th (one month before the earthquake) were taken at the same times (Figure 2); data from the same times of day show no enhancements, depletions, or fluctuations.
Sunspot number data for March 2011 shows a sunspot count of roughly 100 on March 8th, coming down to 60 on March 11th (Figure 3A). Geomagnetic storm activity data for March 2011 shows a Kp of roughly 5.3 on March 11th, with relatively low values before March 11th (Figure 3B). Solar flux (F10.7) data for March 2011 shows heavy activity on March 7th, with measurements of more than 900 solar flux units (1 sfu = 10⁻²² W m⁻² Hz⁻¹), and relatively calm activity for the rest of the month (Figure 3C).
It has been proposed that TEC anomalies can be caused by seismic activity. Our study looked at GPS data during the 2011 Tōhoku Earthquake to investigate the correlation between TEC and seismic activity. The investigation resulted in evidence supporting detectable TEC disturbances caused by pre-seismic activity.
TEC measurements taken over the course of the earthquake showed significant TEC anomalies during seismic activity compared to a time period with no seismic activity. The TEC on the day of the earthquake shows an initial enhancement, followed by a rapid depletion and large fluctuations. Comparatively, TEC levels on a day with no seismic activity behave smoothly.
While these results provide strong evidence that seismic activity caused the TEC disturbances, it is necessary to make sure no other source caused these enhancements, depletions, and fluctuations. While the values of the sunspot number (SSN) and magnetic storm activity (Kp) are relatively high compared to expected levels (Figure 3A-B), prior work has shown these values alone are not sufficient to cause a disturbance of this magnitude in the ionosphere (Hasbi et al., 2009). The solar flux peaked, for the time frame analyzed, on March 7th, four days before the earthquake (Figure 3C). Based on previous studies and the timing of the maximum flux, the solar flux around the day of the earthquake is unlikely to be the cause of the TEC disturbances on the day of the earthquake (Ouzounov et al., 2011).
This study had limitations from certain systematic sources of error in the TEC measurements, as reflected in the DCB term in (2). The analysis did not take into account the angle at which the GPS signals passed through the ionosphere: when the satellite is not directly above the receiver, the signal traverses a longer slant path through the ionosphere, biasing the TEC estimate. Another source of error that should be considered is the set of biases introduced by the GPS satellites and receivers, as well as by the troposphere. These biases arise from many factors, such as satellite ephemeris inaccuracies, clock errors, atmospheric conditions, and measuring equipment temperature. Correcting these errors was beyond the scope of this study and should be investigated further.
It is highly probable that the earthquake caused the enhancements, depletions, and fluctuations in the TEC. However, more detailed work is needed to help determine the exact cause and nature of TEC events surrounding seismic activity. Future investigations should focus on tracking and monitoring TEC for longer periods of time surrounding an earthquake to better characterize the beginning and length of disturbances. In addition, further research should be performed to determine how solar flux, magnetic storms, or other external events affect TEC disturbances.
The pattern of TEC enhancements, depletions, and fluctuations expected from previous work (Heki & Ping, 2005) was observed in TEC measurements near the time of the earthquake. These TEC behaviors differ markedly from the expected behavior of a quiet ionosphere, and they were shown not to result from, or correlate with, outside phenomena such as solar flux and geomagnetic storms. Our results support the hypothesis that seismic activity produces disturbances in TEC. Future research should focus on developing methods to isolate the effects of seismic activity from confounding factors and to remove errors from TEC measurements.
Freund, F. T., Takeuchi, A., & Lau, B. W. (2006). Electric currents streaming out of stressed igneous rocks – A step towards understanding pre-earthquake low frequency EM emissions. Physics and Chemistry of the Earth, 31(4-9), 389-396. doi:10.1016/j.pce.2006.02.027
Garner, T., Gaussiran II, T., Tolman, B. W., Harris, R. B., Calfas, R., & Gallagher, H. (2008). Total electron content measurements in ionospheric physics. Advances in Space Research, 42, 720-726. doi:10.1016/j.asr.2008.02.025
Hasbi, A. M., Momani, M. A., Mohd Ali, M. A., Misran, N., Shiokawa, K., Otsuka, Y., & Yumoto, K. (2009). Ionospheric and geomagnetic disturbances during the 2005 Sumatran earthquakes. Journal of Atmospheric and Solar-Terrestrial Physics, 71(17-18), 1992-2005. doi:10.1016/j.jastp.2009.09.004
Heki, K., & Ping, J. (2005). Directivity and apparent velocity of the coseismic ionospheric disturbances observed with a dense GPS array. Earth and Planetary Science Letters, 236, 845-855. doi:10.1016/j.epsl.2005.06.010
Ouzounov, D., Pulinets, S., Romanov, A., Romanov, A., Tsybulya, K., Davidenko, D., . . . Taylor, P. (2011). Atmosphere-Ionosphere Response to the M9 Tohoku Earthquake Revealed by Joined Satellite and Ground Observations. Preliminary results. Earthquake Science, 24(6), 557-564. doi:10.1007/s11589-011-0817-z
Oyama, K., Kakinami, Y., Liu, J. Y., Abdu, M. A., & Cheng, C. Z. (2011). Latitudinal distribution of anomalous ion density as a precursor of a large earthquake. Journal of Geophysical Research, 116(A4). doi:10.1029/2010JA015948
Chemical Reduction and Deposition of Nanostructured Pt–Au Alloy
Nanostructured metal alloys made up of Pt and another metal are more efficient at catalysing reactions than pure Pt nanoparticles. However, few studies have investigated low heat, solvent-free chemical deposition techniques for nanostructured metal alloys. This paper investigates the deposition of Pt–Au nanostructured metal alloy on fluorine-doped tin oxide glass via low heat, solvent-free polyol reduction, and the effect of the Pt:Au mass loading ratio on catalytic performance. The deposition process involves drop-casting the metal precursors, H2PtCl6 and HAuCl4, on the glass substrates and reducing the precursors with vaporised ethylene glycol at 170°C for 15 minutes. Scanning electron microscopy revealed that the structure of the Pt–Au alloy changes from three-dimensional globular nanostructures to two-dimensional triangular and hexagonal nanoplates, and then to three-dimensional nanocrystals, as the Au concentration increases. X-ray photoelectron spectroscopy confirmed that the precursors on the glass substrates were reduced to metallic Pt and Au. The electrocatalysis of CH3OH, with the Pt–Au glass substrates as work electrodes, showed that Pt–Au alloys have better catalytic performance than pure Pt, and that the catalytic rate peaks at a certain Pt:Au mass loading ratio.
Platinum (Pt) nanoparticles act as catalysts in proton exchange membrane (PEM) fuel cells powering machinery (Bing, Liu, Zhang, Ghosh, & Zhang, 2010; Ouyang & Cho, 2011). Using H2 or liquid fuels like CH3OH, PEM fuel cells, which consist of an acid-soaked PEM placed between the anode and cathode catalysts, oxidize the fuel at the anode and reduce the oxygen entering the cell at the cathode. This creates a potential difference, V, that drives an electric current, which can be used to power a variety of applications. The fuel cell can use a Pt plate or Pt-coated substrate as either the anode or cathode catalyst. Studies have reported that combining Pt with other metals to form nanostructured metal alloys (NMA) suppresses the adsorption of carbonaceous poisoning species like CO (Ren et al., 2010). Such poisoning species tend to bind permanently to the catalyst, leaving fewer sites for the oxidation and reduction of the chemical species responsible for driving the electric current. Less adsorption of poisoning species on NMA catalysts can therefore lead to enhanced catalytic performance.
NMA can be chemically deposited on substrates in the same way as pure metal nanoparticles. Of the several deposition methods, the chemical reduction method is the most popular (Herricks, Chen & Xia, 2004; Ouyang & Cho, 2011; Skrabalak, Wiley, Kim, Formo, & Xia, 2008). It uses a reducing agent, such as NaBH4 or LiBEt3H, to reduce metal precursors to their pure metallic form (Gonsalvesa, Rangarajan & Wang, 2000). By controlling the experimental conditions, such as the pH and precursor concentration, the shape, size, and composition of the NMA are well controlled, which is what makes this method the most popular (Ouyang & Cho, 2011).
Most studies employ complex chemical reduction methods to produce NMA with high catalytic capabilities. They include heating at high temperatures over 300°C (Ganesan, Freemantle, & Obare, 2007; Jana, Dutta, Bera, & Koner, 2008) or using complicated postnanoparticle immobilisation processes like layer-by-layer deposition (Ouyang & Cho, 2011). A convenient chemical reduction method by Ouyang & Cho (2011) is the low heat, solvent-free polyol reduction. The reducing agent used is ethylene glycol (EG), which is vaporized under low heat (below 200°C) so that the vapour will reduce metal precursors. EG is then oxidized to aldehydes and carboxylic acids. Gaseous products from this reaction will escape into the air, leaving the NMA end-product free of any liquid organic compounds. The formed NMA will have well-defined shapes and good adhesion to glass substrates. Thus, no additional steps of mixing metal precursors with surfactants and additives, which control the shapes, are required.
Here we investigate the deposition of Pt–Au NMA of varying Pt:Au mass loading ratios on fluorine-doped tin oxide (FTO) glass substrates using the low heat, solvent-free polyol reduction by Ouyang & Cho (2011). We evaluate the hypothesis that the low heat, solvent-free polyol reduction method is able to produce the Pt–Au NMA that has better catalytic ability than that of pure Pt. We also investigate the role that the Pt:Au ratio plays in determining the catalytic capability of Pt–Au NMA.
Materials and Methods
An FTO glass sheet was cut into 1 cm × 1 cm pieces for scanning electron microscopy (SEM) imaging and 1 cm × 6 cm pieces for cyclic voltammetry (CV). The pieces were then cleaned by sonication in an ultrasonic bath with distilled water and isopropyl alcohol. The chemicals used were EG, chloroplatinic acid hexahydrate (H2PtCl6·6H2O) and gold(III) chloride trihydrate (HAuCl4·3H2O), obtained from Sigma-Aldrich. The two acid salts were each dissolved in 90% ethanol to form 0.01 M solutions.
Deposition of Pt–Au NMA
NMA were deposited by reducing the metal precursors, H2PtCl6 and HAuCl4, with EG vapor. Because NMA of varying Pt:Au mass loading ratios were to be investigated, three 40 ml solutions with different metal precursor mixing ratios were prepared. The mixing ratios were chosen so that, once reduced, they would produce the required Pt:Au ratios (1:1, 1:2 and 1:3) on the glass substrates. A 40 ml solution consisting of only H2PtCl6, without HAuCl4, was also prepared.
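Assuming each mole of precursor reduces to one mole of its metal, the mole ratio of precursors needed for a target Pt:Au mass loading ratio can be back-calculated as in the sketch below. This is an illustrative calculation using standard molar masses, not the study's actual recipe or volumes.

```python
# Sketch of the precursor mixing calculation implied above: for a target
# Pt:Au metal mass ratio, find the mole ratio of H2PtCl6 to HAuCl4,
# assuming full reduction of both precursors to the metals.
M_PT = 195.08  # g/mol, platinum
M_AU = 196.97  # g/mol, gold

def precursor_mole_ratio(pt_mass: float, au_mass: float) -> float:
    """Moles of H2PtCl6 per mole of HAuCl4 for a Pt:Au mass ratio pt:au.
    One mole of each precursor yields one mole of its metal."""
    return (pt_mass / M_PT) / (au_mass / M_AU)

for pt, au in [(1, 1), (1, 2), (1, 3)]:
    print(f"Pt:Au = {pt}:{au} -> precursor mole ratio "
          f"{precursor_mole_ratio(pt, au):.3f}")
```

Because the molar masses of Pt and Au are nearly equal, the required precursor mole ratio tracks the target mass ratio closely (about 1.010, 0.505, and 0.337 for the three ratios).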
In a typical experiment, 10 ml of the prepared solution was drop-cast onto the glass substrate. After the metal precursor layer was dried at room temperature for 15 minutes, the substrate was placed into a large petri dish containing a small petri dish filled with EG. The large petri dish was then placed on a hot plate at 170°C for 15 minutes. The large petri dish was covered with a glass lid to create a closed environment, which kept the vaporised EG contained within the dish so it could effectively reduce the metal precursors. After 15 minutes, the glass cover was removed and the heating continued for another ten minutes to dry the substrate.
Characterization of Pt–Au NMA
To study the morphologies of NMA, SEM images of NMA were taken with a Zeiss Supra 40 field emission scanning electron microscope, with the extra high tension voltage fixed at 5kV under the secondary electron imaging mode. The XPS spectra of NMA, which are used to confirm if the metal precursors are successfully reduced to their metallic forms, were acquired using an Axis Ultra DLD X-ray photoelectron spectrometer equipped with an Al Ka1 X-ray source of 1486.6eV.
Electrochemical Catalysis of Pt–Au NMA
The CV was used to investigate the oxidation of CH3OH with the NMA-doped glass as the catalyst. The NMA glass substrate, a saturated calomel electrode and a Pt plate were used as the work, reference and counter electrode respectively. The electrolyte consisted of 0.5M H2SO4 and 0.5M CH3OH. The potential scan rate was 20mVs-1.
Pt–Au NMA Characterization Using SEM
Differing morphologies are observed under the SEM, for the three NMA of different Pt:Au mass loading ratios (Figure 1). For the 1:1–NMA, three-dimensional (3D) globular nanostructures were observed (Figure 1a). For the 1:2–NMA, 2D triangular and hexagonal nanoplates covered with mesh-like structures were observed (Figure 1b). For the 1:3–NMA, 3D nanocrystals were observed (Figure 1c).
Pt–Au NMA Characterization Using XPS
The XPS spectra of the three NMA display four sets of peaks located around 71.0, 74.2, 83.7 and 87.5 eV (Figure 2). The binding energies of these peaks are compared with those of Pt and Au from the existing literature to determine whether the NMA are made up of metallic Pt and Au.
Electrochemical Catalytic Performance of Pt–Au NMA
From the oxidation of CH3OH on the Pt–Au NMA catalysts, the varying oxidation current shown in the cyclic voltammogram indicates the NMA's catalytic performance. The 15th cyclic voltammograms of the NMA–FTO glass of different Pt:Au ratios, as well as of the control FTO glass doped with only Pt, display two distinct current peaks (Figure 3). The right peak, If, was produced by the forward sweep at a potential of ~0.7 V, and the left peak, Ib, by the backward sweep at ~0.5 V. The If and Ib values for the Pt and the various Pt–Au NMA catalysts are listed in Table 1.
Pt–Au NMA Characterization Using SEM
The differing morphologies exhibited by the different NMA can be partially explained by the existing literature and by the specific surface energies of Pt and Au. Existing studies have shown that, when grown on substrates, Au nanostructures are usually 2D (Cho, Mei, & Ouyang, 2012) while Pt nanostructures are 3D (Cho & Ouyang, 2011; Shen et al., 2008). For the lowest-surface-energy crystallographic plane, (111), of their face-centered cubic structures, the specific surface energies of Pt and Au are 2.299 J m–2 and 1.283 J m–2, respectively (Vitos, Ruban, Skriver, & Kollar, 1998). The high surface energy of Pt causes Pt nanoparticles to cluster together, minimizing the surface area of Pt for maximum stabilization and leading to the formation of 3D (rather than 2D) nanostructures. In contrast, the formation of 2D Au nanoplates (which have wide surface areas) is energetically feasible because of the low surface energy of Au. Thus, the 1:1–NMA having the highest concentration of Pt nanoparticles of the three NMA can explain the observed 3D globular nanostructures dominated by Pt in the 1:1–NMA. Similarly, the greater number of Au nanoparticles in the 1:2–NMA compared to the 1:1–NMA led to visible growths of 2D nanoplates dominated by Au in the 1:2–NMA but not in the 1:1–NMA.
However, no existing studies provide insight into the formation of the 3D nanocrystals in the 1:3–NMA, which has more Au nanoparticles than the 1:2–NMA. The transformation from 2D nanoplates to 3D nanocrystals may suggest that, with increasing Au concentration, it is no longer energetically feasible for 2D nanoplates to exist: the higher total surface energy of the Pt–Au system causes the Pt and Au nanoparticles to cluster together into crystalline structures for stabilization. Nevertheless, further research is necessary to explicate the transition from 2D nanoplates to 3D nanocrystals above the threshold Au concentration for Pt–Au NMA. Since the morphology controls the chemical reaction sites governing catalytic activity, understanding the morphological change may explain the different catalytic performances of Pt–Au NMA of varying ratios when used in fuel cells (Bing et al., 2010).
Pt–Au NMA Characterization Using XPS
As seen in the results section, for all three NMA, the binding energies associated with four sets of peaks are close to those reported in existing literature (Cho et al., 2012; Cho & Ouyang, 2011; Ye et al., 2011). These studies report that for metallic Pt and Au, the Pt4f bands lie around 71 and 74eV, and the Au4f bands lie around 84 and 88eV. Thus, from the spectra, the binding energies of the first two peaks (71.0 and 74.2eV) belong to Pt4f bands and the last two peaks (83.7 and 87.5eV) belong to Au4f bands. This indicates that the metal precursors were successfully reduced to their metallic forms in all NMA.
Electrochemical Catalytic Performance of Pt–Au NMA
From the voltammograms, the If peak was attributed to the three-step adsorption of CH3OH on Pt sites of NMA (Bagotzky, Vassilliey, & Khazova, 1977; Manoharan & Goodenough, 1992):
The adsorbed H species will subsequently be lost into the solution as H+ ions, while liberating electrons that contribute to the If peak. In addition, OH species will be adsorbed throughout the three-step oxidation process, leading to the oxidation of adsorbed carbonaceous species in Equation 1 (CH2OH, CHOH and COH) to form CH2O, HCOOH and CO2 (Bagotzky, Vassilliey, & Khazova, 1977; Manoharan & Goodenough, 1992). Should the adsorbed carbonaceous species fail to be oxidised by OH species, at higher voltages, adsorbed COH species will be oxidised to CO and CO2, thereby liberating electrons that contribute to the If peak as well according to Equation (2a) (Manoharan & Goodenough, 1992):
The catalyst's performance can be assessed by calculating the ratio If/Ib, where a lower ratio indicates poorer catalytic performance. This is because a lower ratio corresponds to a higher Ib peak, which implies that more CO molecules were adsorbed on the catalyst and oxidized during the backward sweep. The tabulated ratios in Table 1 show that the FTO glass doped with only Pt has the lowest ratio. Modifying the Pt:Au ratio also results in different tolerance performances, with the Pt:Au ratio of 1:2 yielding the highest If/Ib ratio and the best catalytic performance.
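The If/Ib metric itself is a one-line calculation; the sketch below illustrates it with hypothetical peak currents, not the study's measured values from Table 1.

```python
# Sketch of the If/Ib poisoning-tolerance metric discussed above.
# The peak currents here are hypothetical illustration values.
def forward_backward_ratio(forward_peak_mA: float,
                           backward_peak_mA: float) -> float:
    """If/Ib ratio; higher values indicate better CO tolerance."""
    return forward_peak_mA / backward_peak_mA

catalysts = {              # hypothetical (If, Ib) peak currents in mA
    "Pt only":     (4.0, 5.0),
    "Pt:Au = 1:2": (6.0, 4.0),
}
for name, (i_f, i_b) in catalysts.items():
    print(name, round(forward_backward_ratio(i_f, i_b), 2))
```

Under these made-up numbers the Pt-only catalyst scores 0.8 and the 1:2 alloy 1.5, mirroring the qualitative trend reported in Table 1.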
Comparing the Pt-only catalyst with the Pt–Au NMA catalysts shows that Au plays a key role in enhancing the catalytic performance of Pt, as described by Choi et al. (2006). This is because Au increases the oxidation rate of HCOOH by inducing a major oxidation reaction pathway that reduces the formation of CO. Thus, less CO accumulates on the Pt–Au catalysts, which reduces catalytic poisoning and leads to a lower Ib peak in the NMA compared to the Pt-doped catalyst.
While the incorporation of a small amount of Au leads to better catalytic performance, the three Pt:Au ratios investigated in the current study (1:1, 1:2 and 1:3) indicate that raising the Au concentration beyond a threshold Pt:Au ratio of 1:2 instead reduces the catalytic performance. According to Wang et al. (2010) and Ye et al. (2011), since the Fermi energies of Pt and Au differ, Au modifies the electronic structure of the Pt–Au NMA, causing electrons to flow from Pt to Au. At high Au concentration, fewer electrons are available in the Pt for the adsorption of the OH species that oxidize CO in Equations (2b) and (2c). In addition, as Pt (but not Au) is primarily responsible for the chemisorption activities, an excessively high Au concentration in the Pt–Au NMA means that fewer Pt sites are available for the adsorption of OH species. Fewer adsorbed OH species may lead to more incomplete oxidation of CH3OH, greater accumulation of CO, and a higher Ib peak. These explanations account for the peak catalytic performance at the Pt:Au ratio of 1:2.
In conclusion, this investigation successfully deposited Pt–Au NMA on FTO glass via low heat, solvent-free polyol reduction. SEM imaging showed that different Pt:Au loading ratios result in different morphologies, attributed to the differing total surface energies of the Pt–Au systems. The XPS spectra confirmed that the precursors were reduced to metallic Pt and Au. Studying the electrochemical catalysis of methanol confirmed the hypothesis that the NMA-doped glass has better catalytic performance than the Pt-doped glass. It also verified that varying the Pt:Au ratio affects the catalytic performance, which peaks at a Pt:Au ratio of 1:2. The investigated method provides a convenient alternative for producing high-performing catalysts compared with the high heat, solvent-based methods currently employed in industry. Future research could explain the changing morphologies of Pt–Au NMA with respect to the Pt:Au ratio, which would assist other research seeking to formulate new methods of controlling the morphology of Pt–Au NMA during production in order to develop optimally performing NMA. Pt–Au NMA of other Pt:Au ratios, apart from the three studied here (1:1, 1:2 and 1:3), could also be characterized to understand how the catalytic performance changes with the Pt:Au ratio. This has important implications for situations where Pt or Au is limited and Pt–Au NMA of maximal catalytic capability must be produced with limited resources.
This project was conducted under the 2013 Undergraduate Research Opportunities Programme supported by the Faculty of Engineering at the National University of Singapore. The author is grateful to his advisors, Dr. Karimbintharikkal G. Nishanth and Prof. Ouyang Jianyong, for providing the necessary guidance for this project, as well as to the Laboratory Office of the Department of Materials Science and Engineering for providing the materials and equipment for this project.
Bagotzky, V. S., Vassiliev, Y. B., & Khazova, O. A. (1977). Generalized scheme of chemisorption, electrooxidation and electroreduction of simple organic compounds on platinum group metals. Journal of Electroanalytical Chemistry and Interfacial Electrochemistry, 81(2), 229–238. doi: 10.1016/0368-1874(77)80299-2.
Bing, Y., Liu, H., Zhang, L., Ghosh, D., & Zhang, J. (2010). Nanostructured Pt-alloy electrocatalysts for PEM fuel cell oxygen reduction reaction. Chemical Society Reviews, 39(6), 2184–2202. doi: 10.1039/b912552c.
Cho, S. J., Mei, X., & Ouyang, J. (2012). Direct deposition of gold nanoplates and porous platinum on substrates through solvent-free chemical reduction of metal precursors with ethylene glycol vapor. Physical Chemistry Chemical Physics, 14(45), 15793–15801. doi: 10.1039/c2cp42164j.
Cho, S. J., & Ouyang, J. (2011). Attachment of platinum nanoparticles to substrates by coating and polyol reduction of a platinum precursor. The Journal of Physical Chemistry C, 115(17), 8519–8526. doi: 10.1021/jp2001699.
Choi, J. H., Park, K. W., Park, I. S., Kim, K., Lee, J. S., & Sung, Y. E. (2006). A Pt–Au nanoparticle electrocatalyst for methanol electro-oxidation in direct methanol fuel cells. Journal of the Electrochemical Society, 153(10), A1812–A1817. doi: 10.1149/1.2224055.
Ganesan, M., Freemantle, R. G., & Obare, S. O. (2007). Monodisperse thioether-stabilized palladium nanoparticles: synthesis, characterization, and reactivity. Chemistry of Materials, 19(14), 3464–3471. doi: 10.1021/cm062655q.
Gonsalvesa, K. E., Rangarajana, S. P., & Wang, J. (2000). Chemical synthesis of nanostructured metals, metal alloys, and semiconductors. Handbook of Nanostructured Materials and Nanotechnology, 1, 1–56. doi: 10.1016/b978-012513760-7/50004-6.
Herricks, T., Chen, J., & Xia, Y. (2004). Polyol synthesis of platinum nanoparticles: Control of morphology with sodium nitrate. Nano Letters, 4(12), 2367–2371. doi: 10.1021/nl048570a.
Jana, S., Dutta, B., Bera, R., & Koner, S. (2008). Immobilization of palladium in mesoporous silica matrix: preparation, characterization, and its catalytic efficacy in carbon–carbon coupling reactions. Inorganic Chemistry, 47(12), 5512–5520. doi: 10.1021/ic8004294.
Liu, Z., Ling, X. Y., Su, X., & Lee, J. Y. (2004). Carbon-supported Pt and PtRu nanoparticles as catalysts for a direct methanol fuel cell. The Journal of Physical Chemistry B, 108(24), 8234–8240. doi: 10.1021/jp049422b.
Manoharan, R., & Goodenough, J. B. (1992). Methanol oxidation in acid on ordered NiTi. Journal of Materials Chemistry, 2(8), 875–887. doi: 10.1039/jm9920200875.
Ren, H., Humbert, M. P., Menning, C. A., Chen, J. G., Shu, Y., Singh, U. G., & Cheng, W. C. (2010). Inhibition of coking and CO poisoning of Pt catalysts by the formation of Au/Pt bimetallic surfaces. Applied Catalysis A: General, 375(2), 303–309. doi: 10.1016/j.apcata.2010.01.018.
Shen, Q., Jiang, L., Zhang, H., Min, Q., Hou, W., & Zhu, J. J. (2008). Three-dimensional dendritic Pt nanostructures: sonoelectrochemical synthesis and electrochemical applications. The Journal of Physical Chemistry C, 112(42), 16385–16392. doi: 10.1021/jp8060043.
Skrabalak, S. E., Wiley, B. J., Kim, M., Formo, E. V., & Xia, Y. (2008). On the polyol synthesis of silver nanostructures: glycolaldehyde as a reducing agent. Nano Letters, 8(7), 2077–2081. doi: 10.1021/nl800910d.
Vitos, L., Ruban, A. V., Skriver, H. L., & Kollar, J. (1998). The surface energy of metals. Surface Science, 411(1), 186–202. doi: 10.1016/s0039-6028(98)00363-x.
Wang, C., Tian, W., Ding, Y., Ma, Y. Q., Wang, Z. L., Markovic, N. M., … & Sun, S. (2010). Rational synthesis of heterostructured nanoparticles with morphology control. Journal of the American Chemical Society, 132(18), 6524–6529. doi: 10.1021/ja101305x.
Xu, J. B., Zhao, T. S., Yang, W. W., & Shen, S. Y. (2010). Effect of surface composition of Pt–Au alloy cathode catalyst on the performance of direct methanol fuel cells. International Journal of Hydrogen Energy, 35(16), 8699–8706. doi: 10.1016/j.ijhydene.2010.05.008.
Ye, W., Kou, H., Liu, Q., Yan, J., Zhou, F., & Wang, C. (2012). Electrochemical deposition of Au–Pt alloy particles with cauliflower-like microstructures for electrocatalytic methanol oxidation. International Journal of Hydrogen Energy, 37(5), 4088–4097. doi: 10.1016/j.ijhydene.2011.11.132.
Access Full PDF: Chemical Reduction and Deposition of Nanostructured Pt-Au Alloy
Probability Current and a Simulation of Particle Separation
The structure of scattered wave fields and currents is of interest in a variety of fields within physics, such as quantum mechanics and optics. Traditionally, two-dimensional structures have been investigated; here we focus on three-dimensional structures. We make a generic study of three-dimensional quantum box cavities, and our main objective is to visualize the probability current. Visualizations are achieved for complex linear combinations of wave functions with different excitations and with different boundary conditions: Dirichlet, Neumann, and mixed. By using different boundary conditions, the results reported here are relevant to many different wave analogues, such as microwave billiards and acoustic cavities. Visualization was mainly done through animated images, but a chaotic state was also visualized by 3D printing. Our results suggest that for a given state of excitation, the different boundary conditions yield the same current, except at the boundaries of the box. An application to sorting nanoparticles in acoustic cavities is considered.
Mapping and understanding the structure of wave fields and currents is relevant to a variety of structures such as quantum structures, optical and acoustic cavities, microwave billiards and water waves in a tank (Berggren & Ljung, 2009; Berggren, Yakimenko, & Hakanen, 2010; Blümel, Davidson, Reinhardt, Lin, & Sharnoff, 1992; Stöckmann, 1999; Chen, Liu, Su, Lu, Chen, & Huang, 2007; Ohlin & Berggren, 2016; Panda & Hazra, 2014). Here, cavities and billiards stand for systems with hard walls, such that a particle or a wave scatters only from the walls and there is no scattering within the system. Acoustic cavities are metal-enclosed cavities in which a microphone is used to emit the waves. The underlying physics of all these systems is seemingly different. For example, in acoustic cavities, the waves are pressure differences in the air. In quantum cavities, the waves are the particles themselves. Nevertheless, the wave nature of these phenomena is often equivalent because it is governed by the Helmholtz equation (Stöckmann, 1999). Although there has been much progress in the fabrication of heterostructures, where quantum mechanical phenomena can be directly observed and measured, they are still difficult to manipulate. The different wave analogues are therefore important supplements for the experimental realization of quantum structures since they are easier to handle. Optical and laser cavities have recently been used for studying the concept of space-time reflection symmetry (Brandstetter et al., 2013; Liertzer et al., 2012) developed by Bender, Boettcher and Meisinger (Bender & Boettcher, 1998; Bender, Boettcher, & Meisinger, 1999) as an extension of Hermitian quantum mechanics. Acoustic and microwave cavities have been used to study exceptional points (Dembowski et al., 2001; Ding, Ma, Xiao, Zhang, & Chan, 2016), which are points where the eigenvalues of two states are equal and their corresponding eigenvectors differ only by a phase (Rotter, 2009).
Microwave cavities have also been important in the development of quantum chaos (Sadreev & Berggren, 2005; Stöckmann, 1999): the study of quantum systems that are chaotic in the classical limit, i.e., systems in which a change in the initial conditions leads to exponential divergence of the trajectories in phase space.
Earlier studies, especially of quantum structures, have focused on two-dimensional fields because they are easier to study experimentally. However, a more realistic theoretical model would need to take all three dimensions into account (Ferry, Goodnick, & Bird, 2009). In this paper, we model three-dimensional quantum billiards with different boundary conditions and study the structure of the probability current for states of different excitations. Because different boundary conditions were employed in the modeling, the results are also applicable to the other wave analogues. The probability current is a three-dimensional vector field, which is difficult to visualize. Therefore, we also study the nodal surfaces and nodal lines. Nodal surfaces are surfaces where either the real or the imaginary part of the wave function is zero, and nodal lines are the intersections of the nodal surfaces (i.e., where both the real and imaginary parts are zero). The current will create vortices around these lines (Dirac, 1931; Wyatt, 2005). If the distribution of vortices is known, the overall structure of the current is known. The appearance and location of vortices have been directly connected to minima in the conductance, i.e., the transmission through two-dimensional quantum structures (Lundberg, Sjöqvist, & Berggren, 1998), and the vortex distribution can be used to determine whether a system is chaotic (Saichev, Berggren, & Sadreev, 2001).
To map the probability current, a quasi-analytic method based on separation of variables and the finite difference method (FDM) is used. The ordinary time-dependent Schrödinger equation (Merzbacher, 1998) for the wave function is

iħ ∂Ψ(r, t)/∂t = [−(ħ²/2m)∇² + V(r)]Ψ(r, t). (1)
Since the potential function V(r) does not depend on time, we may write the wave function as

Ψ(r, t) = ψ(r)e^(−iEt/ħ), (2)

where E is the energy and the spatial part ψ(r) satisfies the time-independent Schrödinger equation

[−(ħ²/2m)∇² + V(r)]ψ(r) = Eψ(r). (3)
The solutions can be expressed as Fourier series. Depending on the boundary conditions, the series will contain sine, cosine or both. For example, a box with side lengths a, b, and c, all with Dirichlet boundary condition (i.e., ψ = 0 on the boundaries), has the series

ψ(x, y, z) = √(8/(abc)) Σ_{nlm} a_{nlm} sin(nπx/a) sin(lπy/b) sin(mπz/c), (5)
where the square root is due to normalization and the a_{nlm} are Fourier coefficients. If the boundary condition is changed to Neumann in one direction (i.e., instead of setting the wave function to zero at the boundaries in this direction, we set the derivative of the wave function to zero), the sine factor in this direction is changed to cosine.
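The Fourier terms of such box states are simple enough to evaluate directly on a grid. A minimal sketch in Python/NumPy (the paper's own visualizations used MATLAB; the quantum numbers and unit side lengths here are chosen purely for illustration):

```python
import numpy as np

def box_mode(n, l, m, X, Y, Z, a=1.0, b=1.0, c=1.0, bc="dirichlet"):
    """One Fourier term for a rectangular box of sides a, b, c.

    Dirichlet: sine factors (psi = 0 on the walls).
    Neumann:   cosine factors (normal derivative = 0 on the walls).
    The sqrt(8/abc) prefactor is the normalization used in the text;
    for Neumann modes with zero indices it would differ.
    """
    f = np.sin if bc == "dirichlet" else np.cos
    return (np.sqrt(8.0 / (a * b * c))
            * f(n * np.pi * X / a) * f(l * np.pi * Y / b) * f(m * np.pi * Z / c))

# Grid over a unit box.
x = np.linspace(0.0, 1.0, 41)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

# A two-term complex combination; quantum numbers chosen for illustration only.
psi = box_mode(1, 1, 2, X, Y, Z) + 1j * box_mode(2, 1, 1, X, Y, Z)
```

Changing `bc` from "dirichlet" to "neumann" swaps every sine factor for a cosine, mirroring the boundary-condition change described above.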
Conservation of electrical charge entails that if the amount of charge in a system is changed there must exist a current responsible for this change. Similarly, probability current is obtained due to the conservation of probability; if the probability density of the system is changed there must exist a probability current mediating this change. For both probability and charge, the continuity equation reads,
∂ρ/∂t + ∇·j = 0, (6)

where ρ is the density function and j is the current vector. Note that both ρ and j are time-independent because of [Eq. (2)]. From [Eq. (6)] and [Eq. (3)], one obtains the following expression for the probability current:

j = (ħ/m) Im(ψ*∇ψ). (7)
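The standard expression j = (ħ/m) Im(ψ*∇ψ) can be evaluated with finite differences, in the spirit of the FDM mentioned above. A sketch in Python/NumPy, in units where ħ = m = 1; the plane-wave check at the end has the known exact current ħk/m:

```python
import numpy as np

def probability_current(psi, dx, hbar=1.0, mass=1.0):
    """j = (hbar/m) Im(psi* grad psi), gradients by central finite differences."""
    grads = np.gradient(psi, dx)
    if psi.ndim == 1:          # np.gradient returns a bare array in 1D
        grads = [grads]
    return [(hbar / mass) * np.imag(np.conj(psi) * g) for g in grads]

# Sanity check on a plane wave psi = exp(ikx): the exact current is hbar*k/m.
x = np.linspace(0.0, 1.0, 201)
k = 2 * np.pi * 3
(jx,) = probability_current(np.exp(1j * k * x), x[1] - x[0])
```

Passing a 3D array ψ instead returns the three components (jx, jy, jz), whose vortices circle the nodal lines discussed below.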
The Fourier series in [Eq. (5)] with sine and/or cosine terms can express the solution for different boundary conditions. From this expression, the nodal surfaces are easily found by standard routines (iso-surface) in MATLAB (MATLAB, 2012). Because of the simplicity of calculating the nodal surfaces, we will mostly visualize them throughout this work. Finding the nodal lines, which are more directly connected to the current, is a much more difficult problem. However, it is analytically solvable for some simple cases. One of the easiest cases arises when the Fourier series contains only two terms, one real and one imaginary, such that one of the terms has two trigonometric factors in the ground state and only one excited factor, and vice versa for the other term. That means, for the case with Dirichlet boundary condition, that two of the numbers (say n and l) are equal to one while m > 1 for one term, and n, l > 1 and m = 1 for the other, i.e.,
With Neumann boundary condition all the numbers are decreased by one. This case is solvable because the nodal surfaces will be orthogonal sheets with straight orthogonal intersections (Figures 1 and 2). The position of these lines can be found directly by reading the wave function. For more complicated cases the nodal lines can be found by an algorithm presented by Ljung and Ynnerman (2003). To obtain a clearer picture of the structure of the nodal surfaces, we 3D-printed the surfaces for a chaotic wave function.
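For general states, the nodal lines can also be located numerically. The sketch below is not the Ljung and Ynnerman (2003) algorithm; it uses the simpler, standard observation that the phase of ψ winds by ±2π around any grid plaquette pierced by a nodal line (the core of a vortex):

```python
import numpy as np

def vortex_plaquettes(psi_slice):
    """Indices (i, j) of plaquettes in a 2D slice of psi around which the
    phase winds by +-2pi, i.e. plaquettes pierced by a nodal line."""
    phase = np.angle(psi_slice)

    def wrap(d):
        # Wrap a phase difference into (-pi, pi].
        return (d + np.pi) % (2 * np.pi) - np.pi

    d1 = wrap(phase[1:, :-1] - phase[:-1, :-1])   # bottom edge
    d2 = wrap(phase[1:, 1:] - phase[1:, :-1])     # right edge
    d3 = wrap(phase[:-1, 1:] - phase[1:, 1:])     # top edge
    d4 = wrap(phase[:-1, :-1] - phase[:-1, 1:])   # left edge
    winding = d1 + d2 + d3 + d4                   # ~ +-2pi at a vortex core
    return np.argwhere(np.abs(winding) > np.pi)

# A test field with one known nodal line at (x, y) = (0.5, 0.5):
x = np.linspace(0.0, 1.0, 20)
X, Y = np.meshgrid(x, x, indexing="ij")
cores = vortex_plaquettes((X - 0.5) + 1j * (Y - 0.5))
```

Applying this to each slice of a 3D grid traces out the nodal lines plane by plane.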
Neumann Boundary Condition
We begin with a state under Neumann boundary condition
This represents one of the cases previously mentioned in which the nodal lines can be found directly because of the simple structure of the nodal surfaces (Figure 1A). There are six nodal lines to this state: three in the x-direction and three in the y-direction (Figure 2). The nodal lines in the x-direction are parameterized by (x, b(1+2n)/6, c/2) where n = 0,1,2. The nodal lines in the y-direction are parameterized similarly. Vortices are observed around these lines (Figure 2B-D).
Dirichlet and Mixed Boundary Conditions
Here we study states corresponding to the same state of excitation as the state in [Eq. (9)] but now with Dirichlet and mixed boundary condition. The corresponding state with Dirichlet boundary condition is
Note that because of the boundary condition, the wave function is zero on every side of the box. For the sake of visualization, these parts of the nodal surfaces have been removed (Figure 3A). The nodal lines for this state in the x-direction are parameterized by (x, bn/4, c/2) with n = 1,2,3 and similarly in the y-direction. The current propagates in vortices around these lines (Figure 3B-D).
We choose the mixed boundary condition such that it is Dirichlet in the x- and y-directions and Neumann in the z-direction. This state is given by
and the nodal surfaces and current are shown in (Figure 4A-D).
To represent the structure of higher order states, we visualize three states with higher excitations (Figure 5A-C). These states are given by [Eq. (12)], [Eq. (13)] and [Eq. (14)] respectively and each has one of the studied boundary conditions. The state given by
is with Neumann boundary condition and
is with Dirichlet boundary condition. The last state is with mixed conditions.
Using more terms in the linear combination, the surfaces become more complex. For example:
For this state, the nodal surfaces are visualized both as animated figures and 3D-printed models (Figure 6A-D).
We simulate particle separation in an acoustic cavity for which the appropriate boundary condition is Neumann (Morse, 1948). To do the simulation, a very simple state with only one nodal line (Figure 7A) is employed
We now place two particles with mass ratio 1:10 at the top of the box near the nodal line. They are affected both by the gravitational force and by a pressure force exerted by the current. The trajectories of the two particles are clearly different (Figure 7B). The heavy particle is much less affected by the current. Thus, in a scenario with many particles, the heavy particles would be found near the center of the box while the lighter particles would be located at the boundaries.
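The separation mechanism can be illustrated with a deliberately crude toy model (a sketch under assumed parameters, not the simulation used in the paper): both particles fall under gravity through a prescribed horizontal flow that stands in for the acoustic current, with a drag force so that the lighter particle follows the flow more closely.

```python
import numpy as np

def settle(mass, steps=2000, dt=1e-3, gamma=5.0, g=9.81):
    """Euler integration of a particle falling through a horizontal flow
    u(y) = sin(pi*y) (a stand-in for the current) with Stokes-like drag."""
    pos = np.array([0.0, 1.0])   # start at the top of a unit box
    vel = np.array([0.0, 0.0])
    for _ in range(steps):
        u = np.array([np.sin(np.pi * pos[1]), 0.0])     # local flow velocity
        acc = (gamma / mass) * (u - vel) + np.array([0.0, -g])
        vel = vel + dt * acc
        pos = pos + dt * vel
        if pos[1] <= 0.0:        # particle has reached the bottom
            break
    return pos

x_light = settle(mass=1.0)[0]
x_heavy = settle(mass=10.0)[0]
```

With these arbitrary parameters, the particle that is ten times heavier ends up with a markedly smaller horizontal displacement, which is the qualitative behavior described above.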
Discussion and Conclusion
The main objective of this work was to map the three-dimensional structure of probability currents and nodal surfaces in box cavities. By using different boundary conditions, the results obtained here are not only relevant for quantum cavities but also for microwave and acoustic cavities. A chaotic state was visualized by 3D printing. We suggest that these currents can be used for particle separation in acoustic cavities.
Three states with the same order of excitation [Eq. (9)], [Eq. (10)], and [Eq. (11)] were studied. We conclude that although different boundary conditions are used, the structure of the current is similar (Figures 2, 3, and 4). The obtained current for Neumann boundary condition (Figure 2) shows full agreement with the results in the recent experimental study by Ohlin and Berggren (2016). The findings could be used to simplify the modeling of complex structures in quantum transport calculations. When calculating the transmission through a quantum system one should in principle take an infinite number of states into account. This is numerically not possible so the number of states must be truncated. For transmission calculations using an effective non-Hermitian Hamiltonian it has been shown that only the case with Neumann boundary condition is stable with a finite number of states (Pichugin, Schanz, & Seba, 2001). If the physical importance of the boundary conditions can be relaxed, effective non-Hermitian Hamiltonians could be used with Dirichlet boundary condition for cases when Neumann seems like the natural boundary condition (Lee & Reichl, 2010).
We simulated particle separation in an acoustic cavity and observed a clear separation of particles with mass ratio 1:10. However, the current model is only conceptual and in need of improvement. Future research should take into account the shape and spin of the particles. One should also consider that the particles present in the cavity themselves affect the structure of the current. Incorporating these effects into the Navier-Stokes equations would yield a fluid-dynamic description of the system.
We would like to thank Magnus Sethson and David Beuger at the Department of Management and Engineering at Linköping University for providing access to their 3D-printers.
Ahrén, M. (2013). Metal Oxide Nanoparticles for Contrast Enhancement in Magnetic Resonance Imaging. Doctoral dissertation, Linköping University. doi:10.3384/diss.diva-98693.
Bender, C. M., & Boettcher, S. (1998). Real Spectra in Non-Hermitian Hamiltonians Having PT-Symmetry. Physical Review Letters, 80(24), 5243-5246. doi:10.1103/PhysRevLett.80.5243.
Bender, C. M., Boettcher, S., & Meisinger, P. N. (1999). PT-symmetric quantum mechanics. Journal of Mathematical Physics, 40(5), 2201-2229. doi:10.1063/1.532860.
Berggren, K. F., & Ljung, P. (2009). Nature of streamlines for Berry-type wavefunctions in open 3D cavities. Mathematical modeling of wave phenomena, American Institute of Physics, 1106, 253-259. doi:10.1063/1.3117101.
Berggren, K. F., Yakimenko, I. I., & Hakanen, J. (2010). Modeling of open quantum dots and wave billiards using imaginary potentials for the source and the sink. New Journal of Physics, 12, doi:10.1088/1367-2630/12/7/073005.
Blümel, R., Davidson, I. H., Reinhardt, W. P., Lin, H., & Sharnoff, M. (1992). Quasilinear ridge structures in water surface waves. Physical Review A, 45(4), 2641-2644. doi:10.1103/PhysRevA.45.2641.
Brandstetter, M., Liertzer, M., Deutsch, C., Klang, P., Schöberl, J., Türeci, H. E., . . . Rotter, S. (2013). Reversing the pump dependence of a laser at an exceptional point. Nature Communications 5, doi:10.1038/ncomms5034.
Chen, C. C., Liu, C. C., Su, K. W., Lu, T. H., Chen, Y. F., & Huang, K. F. (2007). Statistical properties of experimental coherent waves in microcavity lasers: Analogous study of quantum billiard wave functions. Physical Review E, 75(4), doi:10.1103/PhysRevE.75.046202.
Dembowski, C., Gräf, H.-D., Harney, H., Heine, A., Heiss, W., Rehfeld, H., & Richter, A. (2001). Experimental Observation of the Topological Structure of Exceptional Points. Physical Review Letters, 86(5), 787-790. doi:10.1103/PhysRevLett.86.787.
Ding, K., Ma, G., Xiao, M., Zhang, Z., & Chan, C. (2016). Emergence, Coalescence, and Topological Properties of Multiple Exceptional Points and Their Experimental Realization. Physical Review X, 6(2), doi:10.1103/PhysRevX.6.021007.
Dirac, P. A. (1931). Quantised Singularities in the Electromagnetic Field. Proceedings of the Royal Society A, 133(821), 60-72. doi:10.1098/rspa.1931.0130.
Ferry, D. K., Goodnick, S. M., & Bird, J. (2009). Transport in Nanostructures. Cambridge: Cambridge University Press. doi:10.1017/CBO9780511840463.
Lee, H., & Reichl, L. E. (2010). R-matrix theory with Dirichlet boundary conditions for integrable electron waveguides. Journal of Physics A: Mathematical and Theoretical, 43(40), doi:10.1088/1751-8113/43/40/405303.
Liertzer, M., Ge, L., Cerjan, A., Stone, A. D., Türeci, H. E., & Rotter, S. (2012). Pump-Induced Exceptional Points in Lasers. Physical Review Letters, 108(17), doi:10.1103/PhysRevLett.108.173901.
Ljung, P., & Ynnerman, A. (2003). Extraction of Intersection Curves from Iso-surfaces on Co-Located 3D grids. The Annual SIGRAD Conference. Special Theme – Real-Time Simulations (pp. 23-28). Linköping: Linköping University Electronic Press.
Lundberg, T., Sjöqvist, E., & Berggren, K.-F. (1998). Analysis of electron transport in a two-dimensional structure using quantal trajectories. Journal of Physics: Condensed Matter, 10(25), 5583-5594. doi:10.1088/0953-8984/10/25/010.
MATLAB. (2012). version R2012a. Natick, Massachusetts: The MathWorks Inc.
Merzbacher, E. (1998). Quantum mechanics. New York: Wiley.
Morse, P. M. (1948). Vibration and Sound. McGraw-Hill.
Ohlin, K., & Berggren, K. F. (2016). Patterns beyond Faraday waves: observation of parametric crossover from Faraday instabilities to the formation of vortex lattices in open dual fluid strata. European Journal of Physics, 37(4), doi:10.1088/0143-0807/37/4/045803.
Panda, P., & Hazra, G. (2014). Boundary perturbations and the Helmholtz equation in three dimensions. The European Physical Journal Plus, 129(53), doi:10.1140/epjp/i2014-14053-y.
Pichugin, K., Schanz, H., & Seba, P. (2001). Effective coupling for open billiards. Physical Review E, 64(5), doi:10.1103/PhysRevE.64.056227.
Rotter, I. (2009). A non-Hermitian Hamiltonian operator and the physics of open quantum systems. Journal of Physics A: Mathematical and Theoretical, 42(15), doi:10.1088/1751-8113/42/15/153001.
Sadreev, A. F., & Berggren, K.-F. (2005). Signatures of quantum chaos in complex wavefunctions describing open billiards. Journal of Physics A: Mathematical and General, 38(49), 10787-10804. doi:10.1088/0305-4470/38/49/019.
Saichev, A. I., Berggren, K.-F., & Sadreev, A. F. (2001). Distribution of nearest distances between nodal points for the Berry function in two dimensions. Physical Review E, 64(3), doi:10.1103/PhysRevE.64.036222.
Stöckmann, H. J. (1999). Quantum chaos. Cambridge: Cambridge University Press.
Wyatt, R. E. (2005). Quantum Dynamics with Trajectories. Springer.
A Wear Evaluation of Ultra High Molecular Weight Polyethylene (UHMWPE) against Nanostructured Diamond-Coated Ti-6Al-4V Alloy
Reducing the wear of joint replacements would increase the lifespans of both the replacement and the patient. In this study, the wear of ultra-high molecular weight polyethylene (UHMWPE) against nanostructured diamond (NSD)-coated titanium alloy (Ti-6Al-4V) and uncoated cobalt-chromium alloy (CoCr) hemi-cylinders was compared to determine if the NSD coating results in less volumetric wear of polyethylene (PE). A wear simulator was used with a gait cycle to mimic the knee joint, but with an axial force ranging from 30N to 700N during the cycle. Both tests ran for 1.5 million cycles while immersed in bovine serum. The roughness of the NSD-coated alloy, and the volumetric wear of PE it produced, were greater than for the non-coated control sample. No significant differences in PE surface structure or hardness were observed for either wear couple, as measured using Raman spectroscopy, X-ray diffraction, and nanoindentation. Although the roughness of the PE worn by the NSD-coated alloy decreased faster than that of the control PE surface worn by CoCr, the as-deposited surface roughness of the NSD coating was about three to four times higher than that of the starting CoCr surface. These results suggest that much higher abrasive wear occurred for the NSD-PE couple due to the high NSD surface roughness, which accounts for its inferior wear performance. Due to the higher initial surface roughness of the NSD-coated hemi-cylinder, the wear of the NSD-PE couple is greater than that of the non-coated couple, indicating that this combination would not increase the lifespan of a replacement joint.
The number of total knee replacements in the US has doubled in the last ten years (Salomon et al., 2010; Weinstein et al., 2013). In addition, the average age of patients receiving a total knee replacement (TKR) is decreasing (Weinstein et al., 2013). It is becoming increasingly important to determine how to reduce wear and degradation of implanted joints in order to improve their long-term performance (Smith, Dieppe, Porter, & Blom, 2012). Most TKRs contain an ultra-high molecular weight polyethylene (UHMWPE) tibial component that articulates against a metallic alloy femoral component (Ritter, 2009). These pieces are often attached to the bone with bone cement but this method of fixation has the potential to create inflammatory debris and wear-induced periprosthetic osteolysis (Ritter, 2009).
It is essential that TKRs be made to last longer than they do now, especially given that the average human lifespan is projected to increase significantly over the next 100 years (Weinstein et al., 2013). In a 15-year survivorship study (Ranawat et al., 1993), only 70.6% of implants survived in patients who weighed more than 80kg. If implant wear can be decreased, implant lifespans for people weighing over 80kg could potentially increase. Assuming humans continue getting TKRs around the age of 65 (Daniilidis & Tibesku, 2012), the knee replacement must last at least 20 years, ideally 30. To enhance the lifespan of the knee replacement, the current focus has been on improving the UHMWPE spacer, which simulates the articular cartilage that allows for smooth movement of the femur and the tibia. It has been shown that polyethylene (PE) wear particles (Amstutz, Campbell, Kossovsky, & Clarke, 1992; Ritter, 2009; Teeter, Parikh, Taylor, Sprague, & Naudie, 2007) generated from the PE spacer cause osteolysis and possible loosening of the tibial plate (Willert, Bertram, & Buchhorn, 1990). The purpose of this study is to reduce this wear.
Third-body wear (the introduction of hard particles in the space between two articulating members), has been shown to increase the roughness of the cobalt-chromium (CoCr) femoral component and PE wear (Davidson, 1993; Lawson, Catledge, & Vohra, 2005; Pierannunzii, Fischer, & D’Imporzano, 2008; Wang & Essner, 2001). A study done by Wang and Essner (2001) showed that loose poly-methyl-methacrylate (PMMA) bone cement particles in the lubricant, in excess of 5g/L, adhere to CoCr femoral heads and lead to accelerated wear of the acetabular cups. In contrast, the attachment of PMMA particles to ceramic heads was much reduced, resulting in an UHMWPE wear rate that was independent of the concentration of the PMMA particles.
To minimize third-body and other mechanisms of wear from shortening the lifespan of a knee replacement, a nanostructured diamond (NSD) or amorphous carbon coating on the femoral component can be used (Amaral et al., 2007). The tribological benefits may be attributed to: (1) superior lubricating properties (more wettable, hence better able to maintain lubricant on the surface), (2) high hardness and (3) relative inertness of the material. These characteristics can provide a decrease in the coefficient of friction at the bearing surface, with less susceptibility to third-body wear and scratching, as well as less biological response to any debris generated by ceramic wear particles (Lawson et al., 2005; Zietz, Bergschmidt, Lange, Mittelmeier, & Bader, 2013). Recently, it has been shown that the lubricity of amorphous carbon arises from shear induced strain localization, which dominates the shearing process (Pierannunzii et al., 2008). This lubricity is characterized by covalent bond reorientation, phase transformation and structural ordering in a localized tribolayer region. A transition in movement from stick-slip friction to continuous slipping, with ultra-low friction, is observed due to gradual clustering and layering of graphitic sheets in the tribolayer. This enhanced lubricity potentially offered by a variety of carbon-containing coatings may reduce wear in total joint prostheses.
In a previous pin-on-disk study (Hill et al., 2008), we showed that a NSD-coated Ti-6Al-4V disk surface resulted in a factor of two less wear of the PE pin when compared to a non-coated CoCr disk. However, this study was not representative of the gait cycle experienced by the knee joint. In the present study, we compare the effects of wear of an NSD-coated titanium alloy hemi-cylinder to a non-coated CoCr control using a multi-axis wear simulator to more closely mimic the knee joint. We hypothesize that the NSD-coated hemi-cylinder will cause less wear compared with the non-coated hemi-cylinder due to more favorable lubricity and reduced friction conditions.
For wear-testing, an AMTI Force 5 machine (Advanced Mechanical Technology, Inc. Watertown, MA) was used by applying a cyclic vertical load ranging from 30N to 700N on the flat PE samples shown in Figure 1. The tests followed ISO (International Organization for Standardization) standard 14243-3 (ISO, 2014). However, the axial force was scaled down during the gait cycle from a maximum of 2600N (as stated in the standard) to 700N, due to limitations of our load cell capacity. Flexion movement (rotation of flexion arm around a horizontal axis), anterior and posterior movement, and rotation of the stage were incorporated into the cyclic waveform representative of the knee gait cycle, according to the ISO standard. Approximately 1.5 million cycles at 1Hz were performed on each PE sample using the uncoated CoCr and the NSD-coated hemi-cylinder. For the uncoated upper piece, the CoCr hemi-cylinder was attached to a rotating flexion arm using a thin layer of bone cement. For the NSD-coated piece, a titanium hemi-cylinder of the same dimensions was used as the substrate for coating. This hemi-cylinder was attached to the flexion arm using two stainless steel bolts.
To determine the effect of the wear testing, four PE samples were measured: two controls (soaked for an equivalent time in bovine serum but not wear-tested) and two wear-test samples. The wear-test samples were imaged using atomic force microscopy (AFM) and weighed every 250 thousand cycles. All samples were kept completely submerged in a bovine serum mixture at 37°C for the duration of the 1.5 million cycles. The serum mixture (consisting of 0.2%w/v of Sodium Azide (0.8 g), 20mM of EDTA (2.98mL), 100mL of bovine serum, and 300mL of deionized water) was replaced every 750 thousand cycles. To counteract evaporation, 100mL of deionized water was added every day to the machine reservoir. The control samples were kept in a sealed container, without adjusting volume.
To obtain the weight of the wear-test samples, a cleaning procedure was followed based on ASTM (American Society for Testing and Materials) F732 “Standard Test Method for Wear Testing of Polymeric Materials Used in Total Joint Prostheses” (ASTM, 2011). The samples were retrieved from the machine and rinsed with water. Extraneous particles were removed with lens paper. Next, the samples were sonicated for 15 minutes in 100mL of tap water and 1mL of liquid cleaner. After rinsing in deionized water, the sonication procedure was repeated for an additional five minutes in 100mL of deionized water. Following sonication in deionized water, the samples were soaked in a methanol bath for three minutes. Next, the samples were placed in a desiccator for 30 minutes. Finally, the samples were weighed and imaged to determine changes in mass and surface roughness.
Atomic Force Microscopy (AFM)
AFM imaging was done to determine how the surface roughness of the PE samples changed over the 1.5 million cycles. Imaging was performed using close-contact mode at a scan rate of 0.45Hz with 256 points per line. Four AFM images were taken in various regions of the PE sample with scan areas of 10µm2 and 30µm2. Surface roughness values were obtained using Scanning Probe Image Processor (SPIP) 5.1.1 (Image Metrology A/S, Hørsholm, Denmark) and MS Excel. Optical microscope images of the wear samples were also taken at the start and end of the 1.5 million cycles. The wear-test samples and control samples were weighed at the same intervals to account for mass gain from serum absorption. The overall change in roughness of the PE samples was recorded. AFM was also done on the surface of the hemi-cylinders before and after wear.
In order to calculate volumetric wear, the mass of each wear-test sample was corrected for serum absorption using the control sample measured at the same interval. The initial mass measurement (M0) was subtracted from each subsequent measurement (Mm) to obtain the mass difference (Mc). The mass change of the control sample was then subtracted from that of the wear-test sample to give the true mass loss (Mt). The wear volume (mm3) was calculated from Mt using the density of UHMWPE (0.9363g/cm3).
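The correction arithmetic can be written out compactly. A sketch with illustrative numbers (not the measured data), using the fact that the UHMWPE density of 0.9363 g/cm3 equals 0.9363 mg/mm3:

```python
DENSITY_MG_PER_MM3 = 0.9363  # UHMWPE, 0.9363 g/cm^3 == 0.9363 mg/mm^3

def wear_volume_mm3(m0_test, mm_test, m0_ctrl, mm_ctrl):
    """True mass loss Mt = (wear-test mass loss) + (control mass gain
    from serum absorption), converted to volume via the UHMWPE density.
    All masses in milligrams."""
    mc_test = m0_test - mm_test   # net mass lost by the wear-test sample
    mc_ctrl = mm_ctrl - m0_ctrl   # mass gained by the soaked control
    mt = mc_test + mc_ctrl        # correct for serum absorption
    return mt / DENSITY_MG_PER_MM3

# Illustrative values only: test sample loses 2.0 mg net, control gains 1.5 mg.
vol = wear_volume_mm3(1000.0, 998.0, 500.0, 501.5)
```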
Nanoindentation was performed on the PE samples using a Berkovich diamond tip (nominal radius 50nm) to a depth of one µm. Indentation was done before and after wear testing to detect any changes in surface hardness or elastic modulus, potentially as a result of structural transformations.
X-ray diffraction (λ = 1.54154Å) was used on the PE sample surfaces to further examine potential effects of wear, such as phase transformations or texturing, from articulation against the NSD-coated and non-coated alloy.
Laser Raman spectroscopy (λ = 514.5nm) was performed on the control and wear-test samples to evaluate possible structural transformation (such as disordering of carbon bonds), as measured in the wavenumber range from 1000 to 1800cm-1.
In this study, we evaluated two samples: a control UHMWPE sample worn against CoCr (Sample 1) and a test UHMWPE sample worn against NSD-coated Ti-6Al-4V alloy (Sample 2). Nanoindentation performed on both samples before and after wear did not show a significant change in either hardness or Young’s modulus. Compared to the UHMWPE/CoCr couple, the NSD/UHMWPE couple produced higher volumetric wear. The volumetric wear of Sample 1 was 1.20mm3 after 250k cycles and reached 3.72mm3 at the end of the 1.5 million cycles (Fig. 2). By comparison, the wear of Sample 2 after 250k cycles was 6.29mm3, already almost twice the maximum wear volume of Sample 1. Note that the volumetric wear trends of the two samples are different: Sample 1’s wear tapered off in the final measurement intervals, whereas Sample 2 showed a consistent amount of wear at each interval. A parabolic fit to Sample 1 gives an R2 value of 0.8476, while a linear fit to Sample 2 gives an R2 value of 0.9945. Overall, the volumetric wear of Sample 2 is higher and more consistent than that of Sample 1.
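The goodness-of-fit comparison between the linear and parabolic trends is a standard computation. A sketch in Python/NumPy with made-up, roughly linear wear data (not the measured values):

```python
import numpy as np

def r_squared(x, y, degree):
    """Coefficient of determination for a least-squares polynomial fit."""
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    ss_res = np.sum(residuals**2)
    ss_tot = np.sum((y - y.mean())**2)
    return 1.0 - ss_res / ss_tot

cycles = np.array([0.25, 0.5, 0.75, 1.0, 1.25, 1.5])  # millions of cycles
wear = np.array([6.3, 12.1, 18.4, 24.0, 30.2, 36.1])  # mm^3, illustrative only

r2_linear = r_squared(cycles, wear, 1)      # degree-1 (linear) fit
r2_parabolic = r_squared(cycles, wear, 2)   # degree-2 (parabolic) fit
```

Because the quadratic model nests the linear one, its R2 can never be lower; the question is whether the curvature term improves the fit appreciably, as it did for Sample 1 but not Sample 2.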
The initial average roughness value of Sample 1 (PE worn by CoCr) was lower than the initial average roughness of Sample 2 (PE worn by the NSD-coated alloy) (Fig. 3). Sample 1’s initial roughness was 177.21 ± 42.23nm while Sample 2 had the initial roughness of 260.64 ± 69.74nm for an AFM scan area of 30µm2 (Fig. 3). The final roughness of Samples 1 and 2 were 27.32 ± 5.67nm and 11.13 ± 3.42nm, respectively. The overall roughness of Sample 1 decreased by 85% while the overall roughness of Sample 2 decreased by 96%. For an AFM scan area of 10µm2, Sample 1 had a decrease of 80% in overall roughness, which is 8% lower than that of Sample 2. The roughness of both PE samples dropped by more than 90% within the first 500 thousand cycles (as measured from an AFM scan area of 30µm2). The outlier at approximately 1 million cycles for the roughness of Sample 1 had an average value of 128.16nm with a relatively large standard deviation obtained from four separate measurements, the least of which was 51nm.
No significant change in surface roughness was measured for either the uncoated CoCr or the NSD-coated hemi-cylinder before and after wear (Fig. 4). However, it should be noted that all roughness values for the CoCr hemi-cylinder are below 8 nm, while those of the NSD-coated alloy are about a factor of three larger. Finally, Raman spectroscopy revealed no significant change in PE carbon bonding after wear (Fig. 5).
With the number of people receiving total knee replacements growing, as well as an overall increase in lifespan, the need for a longer-lasting total knee replacement is becoming more urgent. Our results indicated that the wear imposed on a PE sample by an NSD-coated titanium hemi-cylinder was greater than the wear from a non-coated CoCr hemi-cylinder. The UHMWPE/NSD couple did, however, leave a smoother final surface on the PE sample.
The volumetric wear and AFM roughness data both showed that more wear occurred for the UHMWPE/NSD couple than for the control UHMWPE/CoCr couple. Given that the starting surface roughness of the NSD coating was three to four times higher than that of the CoCr surface, this suggests that excessive abrasive wear occurred for the UHMWPE/NSD wear couple. Our result is opposite to that of a previous pin-on-disk study (Hill et al., 2008), which found that less wear occurred for the UHMWPE/NSD couple than for an UHMWPE/CoCr couple. In that study, however, the NSD coating was smoother by a factor of five and the loads and displacements were much less clinically relevant. In order to test the potential benefits of an NSD-coated counterface against UHMWPE for a total joint replacement, future efforts will need to be directed toward decreasing the NSD surface roughness to a value at least as small as that of the starting bare alloy. Otherwise, abrasive wear may dominate the wear mechanisms.
Figure 2 shows that, if extrapolated, the curved trend in the volumetric wear of Sample 1 can be expected to continue increasing at a slower rate. No such tapering is seen in the volumetric wear of Sample 2; instead, a nearly linear trend with a high R² value is observed. If extrapolated, the wear of Sample 2 would most likely continue to increase linearly, as it did over the 1.5 million cycles tested.
Based on the results of this study, the NSD-coated alloy, with its relatively high surface roughness compared to the CoCr alloy, would cause unacceptable wear of the PE insert in an artificial joint, leading to a high probability of early implant failure. As shown in Figure 2, the volumetric wear of the control PE (Sample 1) appears to taper off near the end of the 1.5 million cycles. This trend is not seen with the test PE (Sample 2); instead, its wear rate is consistent over the entire 1.5 million cycles. The average roughness of PE Sample 2 (Figure 3) decreases rapidly within the first 500 thousand cycles and remains nearly unchanged after that. Therefore, we cannot assume that mass loss is proportional to the decrease in surface roughness. Instead, it is more likely that abrasive wear caused by the NSD surface initially removed asperities from the UHMWPE (whose initial roughness was several hundred nanometers) and then continued to wear this surface at a constant rate. Since the NSD surface is a factor of three to four rougher than the CoCr, one would expect abrasive wear to be higher for this wear couple. For the UHMWPE/CoCr couple, the apparent plateau in wear may be explained by improved lubricity at the metal/polymer interface as the polymer surface roughness drops. Though the roughness changed greatly throughout the wear testing, both Raman spectroscopy and x-ray diffraction (XRD) showed no significant changes between the control and test PE samples.
An outlier in the measured surface roughness of Sample 1 was observed at the fourth measurement interval (near 1 million cycles). At this point in the wear test, the roughness was not uniform across the contact surface; one of the less-smooth areas may have been chosen for these measurements rather than the typically smoother locations used at the other intervals. The four average roughness values obtained at this interval spanned a very large range, and it is not clear why the spread was so large. Because the two alloy surfaces did not start at the same roughness, this difference could also have affected how much polyethylene was worn over the 1.5 million cycles. To maintain consistency, both alloy surfaces should be polished or otherwise prepared to the same roughness. Within the limitations of this study, the overall wear of polyethylene by the non-coated alloy was nearly a factor of five less than that by the NSD-coated alloy.
The NSD-coated hemi-cylinder produced nearly five times more PE wear than the non-coated CoCr hemi-cylinder, while the average PE roughness decreased much faster for Sample 2 (NSD counterface) than for Sample 1 (non-coated CoCr counterface). This suggests a more aggressive initial abrasive wear-in period caused by the higher surface roughness of the NSD coating. To determine how significant the wear rate is, more PE samples should be tested; this will help ensure reproducibility and reduce random errors.
The surface of the NSD-coated hemi-cylinder itself showed no appreciable change in average roughness before and after wear. For both control and test PE samples, no change in mechanical properties (hardness or elastic modulus as measured by nanoindentation) or in carbon bond structure (as measured by Raman spectroscopy) was detected. In the future, both XRD and nanoindentation should also be performed on the sample worn by the NSD-coated hemi-cylinder; these techniques could help detect changes in PE crystallinity as well as possible alignment of the PE fibers. Although these data suggest that an UHMWPE/NSD couple is inferior, the influence of starting surface roughness must be considered as a limiting factor. All counterface surfaces should have similar and low initial roughness values, achievable through polishing, to ensure reproducibility. We hope to achieve a similar surface roughness for both counterfaces by improving the NSD coating process: the coating needs reduced roughness and improved consistency over relatively large surface areas. Reduction of the NSD coating surface roughness should be a primary goal of future work, enabling a better comparison to the control wear couple.
Although this study evaluated an UHMWPE/NSD couple, an alternative couple that could lead to more promising results is NSD/NSD. In this configuration, the NSD coating may be expected to minimize abrasion while acting as a barrier against leaching of potentially toxic heavy-metal ions from the substrate. Such a coating may be better suited to artificial hip joints, which also employ hard-on-hard bearings. Perhaps, for a ceramic-on-ceramic hip replacement, an NSD coating could further increase implant longevity (Bhatt & Goswami, 2008; Hill et al., 2008; Ranawat et al., 1993).
The authors greatly appreciate the support provided by the UAB Research Experiences for Undergraduates (REU) program funded by the National Science Foundation Grant DMR 1460392.
Amaral, M., Dias, A., Gomes, P., Lopes, M., Silva, R., Santos, J., & Fernandes, M. (2007). Nanocrystalline Diamond: In Vitro Biocompatibility Assessment by MG63 and Human Bone Marrow Cells Cultures. Journal of Biomedical Materials Research, 87A(1), 91-99. http://dx.doi.org/10.1002/jbm.a.31742
Amstutz, H., Campbell, P., Kossovsky, N., & Clarke, I. (1992). Mechanism and Clinical Significance of Wear Debris-Induced Osteolysis. Clinical Orthopaedics and Related Research, 276, 7-18. http://journals.lww.com/corr/Abstract/1992/03000/Mechanism_and_Clinical_Significance_of_Wear.3.aspx
ASTM F732-00. Standard test method for wear testing of polymeric materials used in total joint prostheses. (2011). ASTM International, West Conshohocken, PA. www.astm.org
Bhatt, H., & Goswami, T. (2008). Implant wear mechanisms—basic approach. Biomedical Materials, 3(4), 042001. http://dx.doi.org/10.1088/1748-6041/3/4/042001
Daniilidis, K., & Tibesku, C. O. (2012). Frontal plane alignment after total knee arthroplasty using patient-specific instruments. International Orthopaedics, 37(1), 45-50. http://dx.doi.org/10.1007/s00264-012-1732-1
Davidson, J. (1993). Characteristics of Metal and Ceramic Total Hip Bearing Surfaces and Their Effect on Long-Term Ultra High Molecular Weight Polyethylene Wear. Clinical Orthopaedics and Related Research, 294, 361-378. Retrieved from http://journals.lww.com/corr/Abstract/1993/09000/Characteristics_of_Metal_and_Ceramic_Total_Hip.53.aspx
Hill, M. R., Catledge, S. A., Konovalov, V., Clem, W. C., Chowdhury, S. A., Etheridge, B. S., Stanishevsky, A., Lemons, J. E., Vohra, Y. K., & Eberhardt, A. W. (2008). Preliminary tribological evaluation of nanostructured diamond coatings against ultra-high molecular weight polyethylene. Journal of Biomedical Materials Research Part B, 85B, 140-148. http://dx.doi.org/10.1002/jbm.b.30926
ISO 14243-3. Implants for surgery – wear of total knee-joint prostheses. (2014). International Organization for Standardization (ISO). www.iso.org
Lawson, T., Catledge, S., & Vohra, Y. (2005). Nanostructured Diamond Coated CoCrMo Alloys for Use in Biomedical Implants. Key Engineering Materials (KEM), 284-286, 1015-1018. http://dx.doi.org/10.4028/www.scientific.net/KEM.284-286.1015
Pierannunzii, L., Fischer, F., & D’Imporzano, M. (2008). Retroacetabular osteolytic lesions behind well-fixed prosthetic cups: Pilot study of bearings-retaining surgery. Journal of Orthopaedics and Traumatology, 9(4), 225-231. http://dx.doi.org/10.1007/s10195-008-0031-5
Ranawat, C. S., Flynn, W. F., Saddler, S., Hansraj, K. K., & Maynard, M. J. (1993). Long-term results of the total condylar knee arthroplasty. Clinical Orthopaedics and Related Research, 286, 94-102. http://dx.doi.org/10.1097/00003086-199301000-00015
Ritter, M. A. (2009). The anatomical graduated component total knee replacement: a long-term evaluation with 20-year survival analysis. The Bone & Joint Journal (British), 91-B(6), 745-749. http://dx.doi.org/10.1302/0301-620X.91B6.21854
Salomon, J. A., Wang, H., Freeman, M. K., Vos, T., Flaxman, A. D., Lopez, A. D., & Murray, C. J. (2012). Healthy life expectancy for 187 countries, 1990–2010: a systematic analysis for the Global Burden of Disease Study. The Lancet, 380(9859), 2144-2162. http://dx.doi.org/10.1016/S0140-6736(12)61690-0
Smith, A. J., Dieppe, P., Porter, M., & Blom, A. W. (2012). Risk of cancer in first seven years after metal-on-metal hip replacement compared with other bearings and general population: linkage study between the National Joint Registry of England and Wales and hospital episode statistics. British Medical Journal, 344(e2383). http://dx.doi.org/10.1136/bmj.e2383
Teeter, M., Parikh, A., Taylor, M., Sprague, J., & Naudie, D. (2007). Wear and Creep Behavior of Total Knee Implants Undergoing Wear Testing. The Journal of Arthroplasty, 15(1), 53-64. http://dx.doi.org/10.1016/j.arth.2014.08.001
Wang, A., & Essner, A. (2001). Three-body wear of UHMWPE acetabular cups by PMMA particles against CoCr, alumina and zirconia heads in a hip joint simulator. Wear, 250(1-12), 212-216. http://dx.doi.org/10.1016/S0043-1648(01)00643-3
Weinstein, A. M., Rome, B. N., Reichmann, W. M., Collins, J. E., Burbine, S. A., Thornhill, T. S., Wright, J., Katz, J. N., & Losina, E. (2013). Estimating the burden of total knee replacement in the United States. The Journal of Bone and Joint Surgery (American), 95(5), 385-392. http://dx.doi.org/10.2106/JBJS.L.00206
Willert, H., Bertram, H., & Buchhorn, H. G. (1990). Osteolysis in Alloarthroplasty of the Hip. Clinical Orthopaedics and Related Research, 258, 96-107. http://journals.lww.com/corr/Abstract/1990/09000/Osteolysis_in_Alloarthroplasty_of_the_Hip__The.14.aspx
Zietz, C., Bergschmidt, P., Lange, R., Mittelmeier, W., & Bader, R. (2013). Third-body Abrasive Wear of Tibial Polyethylene Inserts Combined with Metallic and Ceramic Femoral Components in a Knee Simulator Study. The International Journal of Artificial Organs, 36(1), 47-55. http://dx.doi.org/10.5301/ijao.5000189
AIRE Deficiency Exposes Inefficiencies of Peripheral Tolerance Leading to Variable APECED Phenotypes
Autoimmune polyendocrinopathy-candidiasis-ectodermal dystrophy (APECED) is a rare, recessive disease caused by mutations in the autoimmune regulator (AIRE) gene. A loss of function at the AIRE locus is widely known to induce autoimmune activation against host tissues due to a lack of central tolerance during thymic T cell development. Failure to delete autoreactive T cell clones allows their release into the periphery, where they may proliferate and initiate an autoimmune response. While APECED is a monogenic disorder, disruption of AIRE function can have diverse implications: similar mutations in AIRE can lead to a myriad of phenotypes and symptoms. By investigating the multiple ways AIRE function can be compromised, recent research has uncovered mechanisms explaining how AIRE is expressed in medullary thymic epithelial cells (mTECs), how AIRE transactivates tissue-specific antigens (TSAs), and how those TSAs are presented to T cells by both mTECs and bone marrow-derived antigen-presenting cells. However, the stochastic nature of APECED symptoms remains unexplained. Therefore, new approaches to APECED therapy should investigate the randomness inherent in the relationship between central and peripheral tolerance.
T cells provide capable, targeted defense against foreign antigens through their receptor specificity. The vast repertoire of T cell receptors allows the immune system to mount a response against most foreign invaders. Generation of receptor diversity is accomplished mainly through gene rearrangement at the alpha and beta chain loci.
Positive selection in the thymic cortex is able to expand T cell clones with receptors that bind major histocompatibility complex (MHC)/self-peptide complexes with at least moderate affinity (De Martino et al., 2013). However, cells that pass positive selection may still have a strong affinity for self-peptides presented on MHC molecules. In order to prevent these autoreactive T cells from escaping the thymus into the periphery, T cell clones positively selected in the thymic cortex undergo negative selection in the thymic medulla. During the negative selection process, T cells are presented with medullary thymic epithelial cell (mTEC)-expressed tissue-specific antigens (TSAs) in the medulla (Derbinski, Schulte, Kyewski, & Klein, 2001; Kyewski & Derbinski, 2004). T cells that show strong affinity for these self-peptide/MHC complexes are deleted by activation-induced apoptosis. The deletion of autoreactive T cell clones through thymic-expressed TSAs is known as central tolerance.
The discrepancy between antigens expressed and presented by cortical thymic epithelial cells (cTECs) versus mTECs has been termed the alternate peptide hypothesis. This hypothesis can partially explain how autoreactive T cells survive positive selection in the cortex but fail to pass negative selection in the medulla (Marrack, McCormack, & Kappler, 1989). In order to express TSAs, mTECs must transactivate genes that are not normally expressed in the thymus through a process called promiscuous gene expression (PGE; De Martino et al., 2013; Kyewski & Derbinski, 2004; Laan & Peterson, 2013; Metzger & Anderson, 2011; Tykocinski, Sinemus, & Kyewski, 2008). PGE is dependent upon the transcription of DNA in chromatin states often associated with inhibited expression (Abramson, Giraud, Benoist, & Mathis, 2010; Tykocinski et al., 2010; Ucar & Rattay, 2015; Žumer, Saksela, & Peterlin, 2013). The autoimmune regulator (AIRE) protein expressed in mTECs is a transcription factor that facilitates this process.
Loss of AIRE function limits TSA tolerance, leading to organ-specific autoimmunity and autoantibody production (Kisand & Peterson, 2015; Laan & Peterson, 2013; Metzger & Anderson, 2011). Autoimmune polyendocrinopathy candidiasis ectodermal dystrophy (APECED) is the monogenic disorder caused by mutations at the AIRE locus. However, APECED may be considered a syndrome because symptoms can also stem from indirect disruptions of AIRE function (De Martino et al., 2013). While APECED cases may feature some similar symptoms such as mucocutaneous candidiasis, Addison’s disease, and hypoparathyroidism, AIRE’s role in maintaining central tolerance to most TSAs makes APECED patients susceptible to further autoimmune responses against a number of host tissues including the eyes, liver, pancreas, kidney, and sex organs (Kisand & Peterson, 2015; Kyewski & Derbinski, 2004). Therefore, even APECED patients with similar mutations at the AIRE locus may have dissimilar symptoms due to variation in specific self-antigen tolerance (De Martino et al., 2013).
Because disruption of AIRE function can occur in numerous ways, this review will discuss how disruptions in AIRE expression, PGE promotion, and TSA presentation can all instigate autoimmunity. Furthermore, this review will explain how the interplay between central and peripheral tolerance contributes to the variation seen in APECED phenotypes and symptoms.
mTEC Development, Epigenetic Profile, and miRNA Govern AIRE Expression
Expression of AIRE is vital to expression of self-antigens in the thymus. AIRE is predominantly expressed in mTECs, although other cell types in the periphery and thymus have been shown to express AIRE at low levels (Derbinski et al., 2001; Metzger & Anderson, 2011). While mTEC lineage cells are primarily responsible for PGE, only 1-5% of mTECs express TSAs at a given time (Gallegos & Bevan, 2004). It is likely that factors unlinked to AIRE expression levels, such as changes in signaling states within the medullary microenvironment, alter PGE in AIRE+ mTECs. Nonetheless, because AIRE plays a direct role in PGE, disruption of AIRE expression leads to autoimmune phenotypes.
Problems with AIRE expression in mTECs may arise from impediments to mature mTEC development. Immature mTECs begin at an MHC II^low, CD80^low, AIRE- stage, mature to an MHC II^high, CD80^high, AIRE- stage, and then reach an MHC II^high, CD80^high, AIRE+ stage, at which point AIRE-dependent and AIRE-independent antigens can be expressed (Metzger & Anderson, 2011). While this three-step maturation process represents the development needed for mTECs to activate PGE, mTECs have recently been shown to lose their AIRE+ phenotype during a fourth, and final, maturation stage. This loss of AIRE expression is coupled with a loss of TSA expression in these mature mTECs (Laan & Peterson, 2013; Yano et al., 2008). Therefore, while the paucity of TSA expression by the total mTEC population likely hinges on many factors, one possible contributor may be the finite timeframe in which AIRE is expressed by maturing mTECs.
While AIRE propels mTEC maturation, disruptions to early mTEC development stunt AIRE expression. AIRE-deficient mice produce malformed thymi, which illustrates the important role AIRE plays in thymic formation and mTEC development (Yano et al., 2008). However, failure of immature mTECs to pass proper developmental stages can have a substantial impact on AIRE expression and PGE in the thymus. For example, Rossi et al. (2007) show that RANK signaling from CD4+, CD3- cells facilitates mTEC development and promotes AIRE+ phenotypes. Absence of RANK signaling was shown to incite autoimmunity (Rossi et al., 2007). Later experiments in vivo uncovered that RANK signaling regulates AIRE function by promoting its accumulation within chromatin-associated nuclear bodies (Ferguson et al., 2008).
Failure to express AIRE may also stem from improper epigenetic markers at the AIRE locus. One epigenetic marker that influences gene expression is DNA methylation. Hypermethylation of DNA can occur in contiguous regions, such as CpG islands, and is associated with low expression rates. Bisulfite sequencing of CpG islands near the AIRE promoter revealed hypomethylation surrounding the AIRE promoter in AIRE+ mTECs. However, these hypomethylation markers were also found in AIRE- immature (MHC II^low) mTECs and in cTECs, illustrating that AIRE promoter methylation likely has little effect on AIRE expression (Kont et al., 2011; Ucar & Rattay, 2015).
Alterations in the packing and chemical modification of chromatin can also dictate levels of gene expression. DNA is packed into nucleosomes, which contain DNA looped around octamers of histone proteins, similar to beads on a string. Tightly packed chromatin (heterochromatin) can sterically hinder transcriptional machinery from accessing promoters and other DNA sequences, making heterochromatin states unreceptive to gene expression. Conversely, loosely packed chromatin (euchromatin) is permissive of transcriptional machinery and gene expression. Furthermore, methylation of lysine residues on individual histone proteins can also promote or repress transcription. Histone profiling at the AIRE promoter in AIRE+ mTECs showed increased amounts of transcriptionally active histone marks (H3K4me3) and lower amounts of repressive histone marks (H3K27me3) than other cell types (Kont et al., 2011). These epigenetic patterns illustrate that AIRE expression is correlated with histone modifications at the promoter region. Failure to properly mark specific histone residues at the AIRE promoter may cause epigenetic silencing of AIRE, leading to decreased PGE and autoimmune phenotypes.
Expression of AIRE may be further regulated by miRNA interactions. Research by Ucar, Tykocinski, Dooley, Liston, and Kyewski (2013) revealed that miRNAs are tightly regulated in developing mTECs. In addition, mice lacking Dicer function showed loss of AIRE expression and reduced PGE, which demonstrates that miRNA regulates AIRE expression and function (Ucar et al., 2013). Therefore, loss of AIRE function may stem from an inability of miRNA to regulate AIRE activity.
Disruptions of AIRE or Other Promiscuous Gene Expression Mediators Yield Autoimmunity
AIRE contributes to central tolerance by enabling the expression of self-antigens within mTECs through PGE. Promoting TSA expression is a complex process, and AIRE is able to facilitate PGE through its unique protein domains, which allow for subcellular localization and interaction with other proteins that assist in the transcription and processing of TSAs (Abramson et al., 2010; De Martino et al., 2013; Gallo et al., 2013; Ramsey, Bukrinsky, & Peltonen, 2002). Taken together, mutations in the AIRE locus compromise the function of AIRE protein domains and lead to nonfunctional PGE.
In order to facilitate the transcription of TSAs not canonically expressed in the thymus, AIRE must localize to genes that are epigenetically repressed. AIRE protein domains allow it to access repressive chromatin states and transactivate TSA expression. For example, a dominant missense mutation in the SAND domain inhibited PGE in heterozygous mice by impeding localization of AIRE proteins encoded by both alleles to nuclear bodies. This mutation was sufficient to prompt an autoimmune phenotype (Su et al., 2008). Mutations in the CARD domain limited AIRE homodimerization and nuclear localization in vitro (Ferguson et al., 2008; Metzger & Anderson, 2011). Mutations leading to elimination of the AIRE C-terminus barred TSA expression by preventing AIRE from interacting with positive transcription elongation factor b (P-TEFb; Žumer, Plemenitaš, Saksela, & Peterlin, 2011). Synthetic mutations in the PHD domain revealed that the BHC80 region of AIRE’s PHD1 domain is vital for localization to nucleosomes. The PHD1 domain is a protein-binding zinc finger that can bind hypomethylated H3K4, a traditionally repressive histone mark, in order to allow transcription within regions of heterochromatin (Anderson & Su, 2016). While AIRE binding of hypomethylated histone H3 tails was necessary for PGE, overexpression of H3K4-demethylase did not increase PGE, indicating that AIRE also targets other epigenetic modifications (Koh, Kingston, Benoist, & Mathis, 2010). This hypothesis was supported by Waterfield et al. (2014), who used a screening approach to demonstrate that AIRE interacts with MBD1 through its SAND domain. MBD1 is able to bind methylated CpG dinucleotides, which allows AIRE to localize to genes located within hypermethylated CpG islands (Waterfield et al., 2014).
Subcellular localization of AIRE to epigenetically-repressed sites via its protein domains is necessary for AIRE to facilitate the transcription of TSA genes. However, further protein-protein interactions also contribute to TSA transactivation. Because of AIRE’s integral role in facilitating a process that breaks conventional guidelines of gene regulation, it may be assumed that AIRE acts as a “pioneer protein,” which recruits RNA Polymerase II to TSA loci amidst a jumble of heterochromatin and other repressive epigenetic marks. However, Giraud et al. (2012) showed that the absence of AIRE did not inhibit expression of the first exon in AIRE-targeted genes. This illustrates that AIRE is not necessary for RNA Polymerase II to access epigenetically-repressed loci. Instead, RNA Polymerase II can be recruited to these sites by DNA-Dependent Protein Kinase (DNA-PK) in response to double stranded breaks caused by Topoisomerase II activity. RNA Polymerase II is then able to begin transcription of the first exon, but elongation is halted by negative elongation factors. AIRE also interacts with DNA-PK, which allows it to co-localize with RNA Polymerase II. After co-localization, AIRE’s interaction with P-TEFb prompts RNA Polymerase II phosphorylation and transcriptional elongation (Žumer et al., 2013). Therefore, instead of initiating transcription at TSA loci, AIRE promotes TSA transcription by unleashing RNA Polymerase II to transcribe downstream exons (Giraud et al., 2012). AIRE localization to double stranded break repair sites via DNA-PK provides a viable explanation of how AIRE accesses epigenetically-repressed TSA loci. However, as illustrated above, mutations to multiple AIRE protein domains have also been shown to inhibit subcellular localization and provoke autoimmune phenotypes. Further research will need to define whether these various methods of TSA localization work in tandem or in isolation to induce PGE.
AIRE also regulates TSA output via post-transcriptional mRNA splicing (Kyewski & Derbinski, 2004; Žumer et al., 2011). mTECs display the greatest number of alternatively spliced isoforms of any cell type examined (Keane, Ceredig, & Seoighe, 2015). AIRE is thought to recruit splicing machinery in multiple ways. For example, the splice factor snRNP is known to localize to nuclear bodies (Sleeman & Lamond, 1999), where AIRE is also recruited via its SAND domain (Ramsey et al., 2002). Furthermore, Žumer et al. (2011) showed that snRNP subunit U5 was recruited by AIRE to the 3’ end of TSA transcripts. Therefore, AIRE utilizes co-localization with RNA Polymerase II to promote mRNA splicing (Žumer et al., 2011). AIRE is thought to perform mRNA splicing in order to tolerize autoreactive T cells specific to particular TSA isoforms, thus increasing the breadth of clonal deletion in the thymus (Keane et al., 2015; Kyewski & Derbinski, 2004). Therefore, disruption of mRNA splicing mechanics may limit the breadth of PGE, leading to autoimmunity against specific self-peptide isoforms.
mTECs and Thymic Dendritic Cells Present Tissue-Specific Antigens to Induce Tolerance
While TSA expression in mTECs is necessary for negative selection, central tolerance can be accomplished only if those TSAs are presented to T cells via MHC molecules. Therefore, failure to regulate TSA presentation on thymic cell types may undermine AIRE function and cause autoimmune phenotypes. Although mTECs have the proper surface molecules to initiate activation-induced apoptosis in both CD4+ and CD8+ T cells (Laan & Peterson, 2013), mTECs share these presenting responsibilities with thymic dendritic cells. These dendritic cells can be recruited to the thymus through the XC-chemokine ligand 1 (XCL1), a protein that is expressed by AIRE+ mTECs (Anderson & Su, 2016). It should be noted that thymic dendritic cells do not express AIRE and do not perform PGE (Derbinski et al., 2001). Therefore, mTECs serve as TSA reservoirs, and can selectively pass off PGE products to thymic dendritic cells for presentation (Gallegos & Bevan, 2004; Hubert et al., 2011; Metzger & Anderson, 2011).
By regulating thymic and bone marrow expression of ovalbumin (OVA) peptide and MHC I/II, respectively, in mouse models, researchers have investigated whether mTECs are self-sufficient at inducing autoreactive CD4+ and/or CD8+ T cell deletion through TSA presentation. TSAs produced by mTECs are intracellular proteins, and should therefore be canonically presented by MHC I to CD8+ T cells; mTEC presentation to CD4+ T cells would require cross-presentation of intracellular TSAs on MHC II. Gallegos and Bevan hypothesized that because mTECs were insufficient at antigen presentation, thymic dendritic cells were responsible for presentation to CD4+ and CD8+ T cells. Their results indicated that mTECs self-sufficiently induced CD8+ T cell tolerance to mOVA, but bone marrow-derived thymic dendritic cells were necessary for tolerance of mOVA-specific CD4+ T cells (Gallegos & Bevan, 2004). However, recent evidence has qualified those findings, asserting that mTECs are responsible for some TSA presentation via MHC II, but induction of CD4+ OVA tolerance is greatly diminished in mice with MHC II-deficient bone marrow (Hubert et al., 2011).
While inducing thymic expression of OVA through knock-in experiments spotlights how presentation responsibilities for one non-self peptide are shared between mTECs and thymic dendritic cells, it remains unknown how presentation of the numerous specific TSAs is delineated between these cell types for comprehensive tolerance induction. Research by Zhang et al. (2003) showed that soluble hen egg lysozyme (HEL) expression in the thymus produced more efficient negative selection of CD4+ thymocytes than membrane-bound HEL, suggesting that mTEC secretion of peptides to thymic dendritic cells is important for tolerance induction. However, because autoreactive T cells are prone to interact with membrane-bound molecules on the surface of tissues, uncovering how shared presentation responsibilities ensure full tolerance to all self-peptides is an important step toward understanding negative selection mechanics.
Peripheral Tolerance Drives Variability in APECED Symptomatology
The disparities found in APECED symptomatology stem from the limited power of peripheral tolerance. Mechanisms of peripheral tolerance inactivate autoreactive lymphocytes that have escaped central tolerance during T lymphocyte development. For example, immature dendritic cells in the periphery are responsible for induction of tolerance to self-antigens under steady-state conditions (Hawiger et al., 2001; Mueller, 2010). Dendritic cells in both lymph nodes and the spleen can process, load, and present self-antigens from the periphery to T cells. Thus, expression of certain antigens in the periphery is sufficient to induce tolerance of those antigens (Derbinski et al., 2001). Additionally, certain dendritic cells express limited amounts of AIRE. These extra-thymic AIRE-expressing cells (eTACs) may provide additional tolerance in basal conditions by presenting AIRE-dependent self-antigens in the periphery (Metzger & Anderson, 2011; Mueller, 2010). eTACs lack costimulatory molecules CD80/86, which may induce anergy in T cells that recognize eTAC-presented peptides (Metzger & Anderson, 2011). However, it is likely that eTAC levels are minimal in APECED patients.
Peripheral tolerance is also enforced by regulatory T cells (CD4+, FOXP3+, CD25+), which induce anergy in helper and cytotoxic T cells through direct interaction, release of anti-inflammatory signals, and consumption of cytokines that potentiate T cell activation and proliferation. T cells may be pushed toward the thymic regulatory T cell lineage if they bind MHC/self-peptide complexes with strong affinity during negative selection in the thymus (Jordan et al., 2001). Induced regulatory T cells, in contrast, undergo lineage commitment in the periphery through receptor activation and epigenetic change at the FOXP3 locus (Ohkura et al., 2012). Because AIRE deficiency hinders negative selection through dysfunctional PGE, APECED patients possess limited regulatory T cell populations, likely reflecting the inability to facilitate thymic regulatory T cell lineage commitment (Kekäläinen et al., 2007; Perry et al., 2014). However, induced regulatory T cells may still play a role in muffling the autoimmune response in some tissues.
The reliance on peripheral tolerance mechanisms to silence autoimmunity in host tissues underlies the variability of APECED phenotypes. The stochastic nature of T cell receptor gene rearrangement produces a diverse repertoire of potential autoreactive T cells in the periphery of AIRE-deficient individuals (Kisand & Peterson, 2015). Peripheral tolerance serves to filter out those autoreactive T cell responses, but because the peripheral filter is imperfect, the list of specific autoreactive T cell clones left unconstrained is unpredictable (Figure 1). For example, the self-peptides available for dendritic cells to uptake and present may depend on random circumstance, leaving the peripheral tolerance of specific tissues up to chance. Furthermore, variability in the activation and recruitment of specific induced regulatory T cell clones further confounds which autoreactive helper and cytotoxic T cells will cause host tissue damage. Still other factors, such as the amount of costimulatory molecules and activation-inducing cytokines present in a given tissue, play further roles (Klein & Kyewski, 2000). Therefore, while dysfunctional PGE in the thymus is sufficient to promote a myriad of autoreactive T cells in the periphery of APECED patients, the variable phenotypes associated with the disorder result from the stochastic mechanisms of peripheral tolerance used to neutralize autoreactive activity.
Conclusions and Future Directions
Because of AIRE’s central role in facilitating PGE, lack of central tolerance is intrinsic to every APECED phenotype (Figure 1). However, current treatment options do not remedy issues with central tolerance. Instead, treatments of APECED focus on maintaining tissue function and suppressing immune responses through anti-inflammatory drugs. These treatments are often ineffective in limiting the autoimmune responses (Kisand & Peterson, 2015). While successfully establishing central tolerance in APECED patients would cure their symptoms, the complexity of the mechanisms involved in AIRE expression, PGE, and TSA presentation poses significant obstacles to targeting central tolerance therapeutically. Therefore, treatments for APECED patients could instead utilize the suppressive mechanisms of peripheral tolerance.
Peripheral tolerance is an effective suppressor of autoimmune responses. Despite the diversity of autoreactive T cells in the periphery of APECED patients, typical patients experience autoimmune responses to only a limited number of tissues (Figure 1; Kisand & Peterson, 2015). This is because peripheral tolerance is responsible for suppressing the activation, proliferation, and activity of autoreactive T cells. As such, an autoimmune response to any self-antigen can be thought of as a failure of peripheral tolerance to protect that antigen from immune targeting.
Identifying the autoimmune responses in each APECED patient inherently identifies the limits of peripheral tolerance in that individual. Therefore, new therapeutic efforts for APECED could address breaches in peripheral tolerance in a symptom-specific manner: patients would receive therapy that would induce peripheral tolerance to the tissues under attack. This treatment might be accomplished by introducing the self-antigens of interest to secondary lymphoid organs, where immature dendritic cells may tolerize peripheral T cells specific to those antigens; a 2012 study showed that mice injected with microparticles decorated with a specific antigen induced long-term tolerance of T cells specific to that antigen (Getts et al., 2012). Additionally, transplanting tissue-specific regulatory T cells into the periphery may promote anergy to a given tissue. Many regulatory T cell-based therapeutic studies are currently in clinical trials, and future studies may utilize specific MHC/peptide combinations to isolate and expand antigen-specific regulatory T cells (Khor, 2016). While these methods of inducing tissue-specific peripheral tolerance are far from developed, they provide the potential to overcome the variability associated with both the causes and symptoms of APECED.
Abramson, J., Giraud, M., Benoist, C., & Mathis, D. (2010). Aire’s Partners in the Molecular Control of Immunological Tolerance. Cell, 140(1), 123–135. doi:10.1016/j.cell.2009.12.030
Anderson, M. S., & Su, M. A. (2016). AIRE expands: new roles in immune tolerance and beyond. Nature Reviews Immunology, 16(4), 247–258. doi:10.1038/nri.2016.9
De Martino, L., Capalbo, D., Improda, N., D’Elia, F., Di Mase, R., D’Assante, R., . . . Salerno, M. (2013). APECED: A Paradigm of Complex Interactions between Genetic Background and Susceptibility Factors. Frontiers in Immunology, 4. doi:10.3389/fimmu.2013.00331
Derbinski, J., Schulte, A., Kyewski, B., & Klein, L. (2001). Promiscuous gene expression in medullary thymic epithelial cells mirrors the peripheral self. Nature Immunology, 2(11), 1032.
Ferguson, B. J., Alexander, C., Rossi, S. W., Liiv, I., Rebane, A., Worth, C. L., . . . Rich, T. (2008). AIRE’s CARD Revealed, a New Structure for Central Tolerance Provokes Transcriptional Plasticity. Journal of Biological Chemistry, 283(3), 1723–1731. doi:10.1074/jbc.M707211200
Gallegos, A. M., & Bevan, M. J. (2004). Central Tolerance to Tissue-specific Antigens Mediated by Direct and Indirect Antigen Presentation. The Journal of Experimental Medicine, 200(8), 1039–1049. doi:10.1084/jem.20041457
Gallo, V., Giardino, G., Capalbo, D., Palamaro, L., Romano, R., Santamaria, F., . . . Pignata, C. (2013). Alterations of the autoimmune regulator transcription factor and failure of central tolerance: APECED as a model. Expert Review of Clinical Immunology, 9(1), 43–51. doi:10.1586/eci.12.88
Getts, D. R., Martin, A. J., McCarthy, D. P., Terry, R. L., Hunter, Z. N., Yap, W. T., . . . Miller, S. D. (2012). Microparticles bearing encephalitogenic peptides induce T-cell tolerance and ameliorate experimental autoimmune encephalomyelitis. Nature Biotechnology, 30(12), 1217–1224. doi:10.1038/nbt.2434
Giraud, M., Yoshida, H., Abramson, J., Rahl, P. B., Young, R. A., Mathis, D., & Benoist, C. (2012). Aire unleashes stalled RNA polymerase to induce ectopic gene expression in thymic epithelial cells. Proceedings of the National Academy of Sciences, 109(2), 535–540. doi:10.1073/pnas.1119351109
Hawiger, D., Inaba, K., Dorsett, Y., Guo, M., Mahnke, K., Rivera, M., . . . Nussenzweig, M. C. (2001). Dendritic Cells Induce Peripheral T Cell Unresponsiveness under Steady State Conditions in Vivo. The Journal of Experimental Medicine, 194(6), 769–780. doi:10.1084/jem.194.6.769
Hubert, F.-X., Kinkel, S. A., Davey, G. M., Phipson, B., Mueller, S. N., Liston, A., . . . Heath, W. R. (2011). Aire regulates the transfer of antigen from mTECs to dendritic cells for induction of thymic tolerance. Blood, 118(9), 2462–2472. doi:10.1182/blood-2010-06-286393
Jordan, M. S., Boesteanu, A., Reed, A. J., Petrone, A. L., Holenbeck, A. E., Lerman, M. A., . . . Caton, A. J. (2001). Thymic selection of CD4+CD25+ regulatory T cells induced by an agonist self-peptide. Nature Immunology, 2(4), 301.
Keane, P., Ceredig, R., & Seoighe, C. (2015). Promiscuous mRNA splicing under the control of AIRE in medullary thymic epithelial cells. Bioinformatics, 31(7), 986–990. doi:10.1093/bioinformatics/btu785
Kekäläinen, E., Tuovinen, H., Joensuu, J., Gylling, M., Franssila, R., Pöntynen, N., . . . Arstila, T. P. (2007). A Defect of Regulatory T Cells in Patients with Autoimmune Polyendocrinopathy-Candidiasis-Ectodermal Dystrophy. The Journal of Immunology, 178(2), 1208–1215. doi:10.4049/jimmunol.178.2.1208
Khor, B. (2016). Regulatory T cells: Central concepts from ontogeny to therapy. Transfusion Medicine Reviews, 0(0). doi:10.1016/j.tmrv.2016.07.003
Kisand, K., & Peterson, P. (2015). Autoimmune Polyendocrinopathy Candidiasis Ectodermal Dystrophy. Journal of Clinical Immunology, 35(5), 463–478. doi:10.1007/s10875-015-0176-y
Klein, L., & Kyewski, B. (2000). Promiscuous expression of tissue antigens in the thymus: a key to T-cell tolerance and autoimmunity? Journal of Molecular Medicine, 78(9), 483–494.
Koh, A. S., Kingston, R. E., Benoist, C., & Mathis, D. (2010). Global relevance of Aire binding to hypomethylated lysine-4 of histone-3. Proceedings of the National Academy of Sciences, 107(29), 13016–13021. doi:10.1073/pnas.1004436107
Kont, V., Murumägi, A., Tykocinski, L.-O., Kinkel, S. A., Webster, K. E., Kisand, K., . . . Peterson, P. (2011). DNA methylation signatures of the AIRE promoter in thymic epithelial cells, thymomas and normal tissues. Molecular Immunology, 49(3), 518–526.
Kyewski, B., & Derbinski, J. (2004). Self-representation in the thymus: an extended view. Nature Reviews Immunology, 4(9), 688–698. doi:10.1038/nri1436
Laan, M., & Peterson, P. (2013). The Many Faces of Aire in Central Tolerance. Frontiers in Immunology, 4. doi:10.3389/fimmu.2013.00326
Marrack, P., McCormack, J., & Kappler, J. (1989). Presentation of antigen, foreign major histocompatibility complex proteins and self by thymus cortical epithelium. Nature, 338(6215), 503–505. doi:10.1038/338503a0
Metzger, T. C., & Anderson, M. S. (2011). Control of central and peripheral tolerance by Aire. Immunological Reviews, 241(1), 89–103. doi:10.1111/j.1600-065X.2011.01008.x
Mueller, D. L. (2010). Mechanisms maintaining peripheral tolerance. Nature Immunology, 11(1), 21–27. doi:10.1038/ni.1817
Ohkura, N., Hamaguchi, M., Morikawa, H., Sugimura, K., Tanaka, A., Ito, Y., . . . Sakaguchi, S. (2012). T Cell Receptor Stimulation-Induced Epigenetic Changes and Foxp3 Expression Are Independent and Complementary Events Required for Treg Cell Development. Immunity, 37(5), 785–799. doi:10.1016/j.immuni.2012.09.010
Perry, J. S. A., Lio, C.-W. J., Kau, A. L., Nutsch, K., Yang, Z., Gordon, J. I., . . . Hsieh, C.-S. (2014). Distinct contributions of Aire and antigen presenting cell subsets to the generation of self-tolerance in the thymus. Immunity, 41(3), 414–426. doi:10.1016/j.immuni.2014.08.007
Ramsey, C., Bukrinsky, A., & Peltonen, L. (2002). Systematic mutagenesis of the functional domains of AIRE reveals their role in intracellular targeting. Human Molecular Genetics, 11(26), 3299–3308. doi:10.1093/hmg/11.26.3299
Rossi, S. W., Kim, M.-Y., Leibbrandt, A., Parnell, S. M., Jenkinson, W. E., Glanville, S. H., . . . Anderson, G. (2007). RANK signals from CD4+3− inducer cells regulate development of Aire-expressing epithelial cells in the thymic medulla. The Journal of Experimental Medicine, 204(6), 1267–1272. doi:10.1084/jem.20062497
Sleeman, J. E., & Lamond, A. I. (1999). Newly assembled snRNPs associate with coiled bodies before speckles, suggesting a nuclear snRNP maturation pathway. Current Biology, 9(19), 1065–1074. doi:10.1016/S0960-9822(99)80475-8
Su, M. A., Giang, K., Žumer, K., Jiang, H., Oven, I., Rinn, J. L., . . . Anderson, M. S. (2008). Mechanisms of an autoimmunity syndrome in mice caused by a dominant mutation in Aire. The Journal of Clinical Investigation, 118(5), 1712–1726. doi:10.1172/JCI34523
Tykocinski, L.-O., Sinemus, A., & Kyewski, B. (2008). The Thymus Medulla Slowly Yields Its Secrets. Annals of the New York Academy of Sciences, 1143(1), 105–122. doi:10.1196/annals.1443.018
Tykocinski, L.-O., Sinemus, A., Rezavandy, E., Weiland, Y., Baddeley, D., Cremer, C., . . . Kyewski, B. (2010). Epigenetic regulation of promiscuous gene expression in thymic medullary epithelial cells. Proceedings of the National Academy of Sciences, 107(45), 19426–19431. doi:10.1073/pnas.1009265107
Ucar, O., & Rattay, K. (2015). Promiscuous Gene Expression in the Thymus: A Matter of Epigenetics, miRNA, and More? Frontiers in Immunology, 6. doi:10.3389/fimmu.2015.00093
Ucar, O., Tykocinski, L.-O., Dooley, J., Liston, A., & Kyewski, B. (2013). An evolutionarily conserved mutual interdependence between Aire and microRNAs in promiscuous gene expression. European Journal of Immunology, 43(7), 1769–1778. doi:10.1002/eji.201343343
Waterfield, M., Khan, I. S., Cortez, J. T., Fan, U., Metzger, T., Greer, A., . . . Anderson, M. S. (2014). The transcriptional regulator Aire coopts the repressive ATF7ip-MBD1 complex for the induction of immunotolerance. Nature Immunology, 15(3), 258–265. doi:10.1038/ni.2820
Yano, M., Kuroda, N., Han, H., Meguro-Horike, M., Nishikawa, Y., Kiyonari, H., . . . Matsumoto, M. (2008). Aire controls the differentiation program of thymic epithelial cells in the medulla for the establishment of self-tolerance. The Journal of Experimental Medicine, 205(12), 2827–2838. doi:10.1084/jem.20080046
Zhang, M., Vacchio, M. S., Vistica, B. P., Lesage, S., Egwuagu, C. E., Yu, C.-R., . . . Gery, I. (2003). T Cell Tolerance to a Neo-Self Antigen Expressed by Thymic Epithelial Cells: The Soluble Form is More Effective Than the Membrane-Bound Form. The Journal of Immunology, 170(8), 3954–3962. doi:10.4049/jimmunol.170.8.3954
Žumer, K., Plemenitaš, A., Saksela, K., & Peterlin, B. M. (2011). Patient mutation in AIRE disrupts P-TEFb binding and target gene transcription. Nucleic Acids Research, 39(18), 7908–7919. doi:10.1093/nar/gkr527
Žumer, K., Saksela, K., & Peterlin, B. M. (2013). The Mechanism of Tissue-Restricted Antigen Gene Expression by AIRE. The Journal of Immunology, 190(6), 2479–2482. doi:10.4049/jimmunol.1203210
Exploring Carry-Over Effects to Elucidate Attention Bias Modification’s Mixed Results
Attention bias modification (ABM) has been shown to decrease self-reported symptom severity for those with social anxiety disorder (SAD). ABM may also decrease the attention bias towards threat present in SAD. Currently, the most prominent form of ABM is a modified dot-probe paradigm that uses two affective/emotional faces to measure or train attention bias. Previous studies report mixed results regarding the ability of ABM to alter attention bias. Carry-over effects from trial to trial may help determine the origin of these variations, as the presence of carry-over effects may alter the component of attention measured in each dot-probe trial. In this study, 85 subjects were randomly assigned to one of three ABM training conditions: attend positive, attend threat, and a control training. Attention bias scores were measured before and after ABM. No carry-over effects were found for any group or ABM condition. In addition, attention bias scores did not differ between groups or training conditions at initial assessment, and ABM training did not alter attention bias scores. Although our findings do not support a role for carry-over effects, it is possible that carry-over effects may help elucidate the role of attention bias both in the dot-probe paradigm and in SAD. Thus, carry-over effects should be considered in dot-probe data analysis.
Social Anxiety Disorder (SAD) is characterized by excessive worry and self-consciousness surrounding social situations due to significant discomfort, negative internal evaluations, or anticipated embarrassment (American Psychiatric Association, 2013). SAD is a debilitating disorder that leads to significant impairment in work and social situations. Annually, about 7% of the population meets DSM-5 criteria for SAD, and it accounts for 10% to 20% of people with anxiety disorders in outpatient settings (American Psychiatric Association, 2013; Ruscio et al., 2008). Not surprisingly, treatment for this disorder is in high demand.
In an effort to reduce anxiety symptoms, attention bias modification (ABM) has been explored as a potential computer-based treatment for SAD designed to alter how people process and pay attention to social information (Amir et al., 2008; Heeren, Lievens, & Philippot, 2011; Klumpp & Amir, 2010). ABM is a modified dot-probe paradigm using affective/emotional stimuli such as facial expressions or words. During the modified dot-probe task, two stimuli appear on the screen for less than 1000 milliseconds, after which a target (letter or symbol) replaces one of the stimuli. The participant is asked to respond to the target as quickly and accurately as possible. The trial is named for the type of stimulus being replaced by the target (i.e., if a threat stimulus is replaced, the trial is a threat trial).
In the literature, three ABM conditions have been tested: attend positive (positive stimulus is more often replaced by a target), attend threat (negative stimulus is more often replaced by a target), and a no-training control (stimulus types are replaced by a target with equal probability; Frewen, Dozois, Joanisse, & Neufeld, 2008; Klumpp & Amir, 2010). An attention bias, the tendency to attend to a certain type of information over other types of information, can be determined by calculating the difference in reaction times between threat trials and positive trials (Amir et al., 2009; Izetelny, 2006). This calculation is most commonly carried out using data from an assessment task, which is often a dot-probe task similar to the control training (Boettcher et al., 2013; White, Suway, Pine, Bar-Haim, & Fox, 2011).
Individuals with SAD often demonstrate an attention bias to threat in these tasks (Bogels & Mansell, 2004). ABM has been developed in an effort to reduce this attention bias. In the dot-probe paradigm, attention bias manifests in reaction times. For example, a subject with SAD who has an attention bias towards threat is, on average, quicker to respond to threat trials than to positive trials. This occurs because the subject’s attention is immediately drawn to the threatening stimulus, so when that stimulus is replaced with an arrow, the subject responds more quickly.
However, previous findings on the subject are mixed, with some studies reporting no bias among SAD subjects (Boettcher et al., 2013; Bradley et al., 1997) or even a bias away from threat (Pittig, Pawlikowski, Craske, & Alpers, 2014; Harrison & Turpin, 2003). This complicates the literature on ABM to date, as individual attention bias may interact with ABM tasks to produce inconsistent results.
A question in the literature that has not yet been adequately addressed is how ABM modifies attention bias. Studies have shown that it is possible to shift attention bias in the direction of the training (Li, Tan, Qian, & Liu, 2008; Amir et al., 2009; White et al., 2011). In contrast, Heeren et al. (2011) and Boettcher et al. (2013) report that neither type of ABM, toward or away from threat, was able to modify attention bias. With all of these conflicting studies, it is difficult to identify a clear mechanism underlying ABM effects.
One way to further explore the underlying mechanism of ABM is to investigate carry-over effects. A carry-over effect is the tendency of one stimulus to influence the evaluation of the next stimulus (Nonyane & Theobald, 2007). Two important carry-over effects are assimilation and contrast. Assimilation occurs when the previous stimulus is congruent with the current stimulus (e.g., a threat trial follows another threat trial). Contrast occurs when the trials are incongruent (e.g., a threat trial follows a positive trial).
The majority of carry-over effects are studied in the emotional Stroop task due to its multiple formats (Holle, Neely, & Heimberg, 1997; Waters, Sayette, Franken, & Schwartz, 2005). Similar to the modified dot-probe task, the emotional Stroop task measures attention through reaction time. One type of Stroop task, blocked, involves presenting the same type of stimuli in a series or block and represents assimilation carry-over effects. The blocked emotional Stroop task has been linked to attention maintenance (maintaining focused attention) in SAD and healthy controls through assimilation effects (Holle et al., 1997; Jones-Chester, Monsell, & Cooper, 1998; Waters & Feyerabend, 2000; Waters et al., 2005). The second type of Stroop task, mixed, involves presenting different stimulus types in a random order and represents contrast carry-over effects. The mixed emotional Stroop task has been linked to orienting of attention (i.e., initially focusing one’s attention) in SAD and healthy controls through contrast effects (Holle et al., 1997; Jones-Chester et al., 1998; Waters & Feyerabend, 2000; Waters et al., 2005). This task may generalize to social experiences, in that a blocked format Stroop task represents a cue-rich environment such as a party, while a randomized format Stroop task represents a mix of social and non-social cues more similar to a single interpersonal interaction (Waters et al., 2005).
Although the dot-probe task uses a mixed format, assimilation and contrast effects may still be measured. This is possible because carry-over effects may be simplified to a single previous trial rather than multiple previous trials (Aguirre, 2007). Thus, assimilation effects in the ABM paradigm occur on congruent trials, where the stimulus replaced by the target is the same emotion as the stimulus replaced during the previous trial. Contrast effects occur during incongruent trials, where the stimulus replaced by the target differs from that replaced in the previous trial. These effects have not previously been studied in the ABM paradigm. In addition, behavioral biases from carry-over effects are sensitive to task context, so a generalization between task and stimulus may not always be made (Aguirre, 2007). Accordingly, carry-over effects must be studied not only in the Stroop task but in ABM as well.
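The congruent/incongruent labeling described above is straightforward to operationalize. The sketch below is a hypothetical illustration, not the authors' analysis code; the "threat"/"positive" trial labels and the function name are assumptions.

```python
# Label each dot-probe trial as congruent (assimilation) or incongruent
# (contrast) based on the trial that immediately precedes it, following
# the single-previous-trial simplification attributed to Aguirre (2007).
def label_carry_over(trial_types):
    """Return a carry-over label for every trial after the first."""
    return ["congruent" if prev == curr else "incongruent"
            for prev, curr in zip(trial_types, trial_types[1:])]

print(label_carry_over(["threat", "threat", "positive", "positive", "threat"]))
# ['congruent', 'incongruent', 'congruent', 'incongruent']
```

Note that the first trial receives no label, since it has no predecessor against which congruence can be defined.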
The purpose of this study was to investigate why ABM alters attention bias in some cases but not in others. This research will help determine the role of attention bias in the dot-probe paradigm and ABM training. It is possible that the modified dot-probe can be linked to the two components of attention in the same way as the emotional Stroop task, since both tasks contain the same carry-over effects. Klumpp and Amir (2010) and O’Toole and Dennis (2012) were among the first to introduce and test the idea that attention switching/disengaging from threat is the effective component of ABM. This same disengagement from threat stimuli in those with SAD is also key to good performance (faster and more accurate responses) on the Stroop task (Cisler, Bacon, & Williams, 2009). Although blocked and mixed formats exist only in the Stroop task, dot-probe tasks simulate them with the congruent and incongruent trials explained above. Therefore, we examined carry-over effects during dot-probe assessments, before and after ABM, to determine whether carry-over effects were linked to attention bias scores. If carry-over effects are linked to attention bias scores, the preceding stimulus will affect the current stimulus, and reaction times will differ for congruent and incongruent trial sequences ending in the same trial type (threat or positive).
Participants (N=85) were recruited using flyers and online advertisements posted at a large university medical center and the surrounding community. Members of the surrounding community and patients seeking treatment at the university-affiliated outpatient anxiety clinic responded to the advertisements to express their interest in participating. The study protocol was approved by the Institutional Review Board at the University of Michigan Medical School. Participants with SAD and healthy controls (HC) were recruited through telephone and in-person diagnostic screenings with the Mini International Neuropsychiatric Interview (MINI; Sheehan et al., 1998), the Liebowitz Social Anxiety Scale (LSAS; Liebowitz, 1987), the Social Interaction Anxiety Scale (SIAS; Brown, Turvosky, Heimburg, Juster, Brown, & Barlow, 1997), and the Beck Depression Inventory (BDI-II; Beck, Steer, Ball, & Ranieri 1996). A licensed clinical psychologist performed all diagnostic assessments. All SAD participants had a current and primary SAD diagnosis based on the diagnostic assessment. HC were free of current or past diagnosis of Axis I or II disorders. All participants were at least 18 years of age and provided written informed consent.
Participants with the following were excluded from the study: a) clinically significant medical or neurologic condition; b) primary psychiatric disorder other than SAD; c) life history of schizophrenia or bipolar disorder; d) current major depressive disorder; e) alcohol/drug abuse or dependence in the past year; f) current suicidal ideation; g) presence of an organic mental syndrome, mental retardation, or pervasive developmental disorder; h) unwilling/unable to sign the informed consent document; and i) current psychiatric medication use other than selective serotonin reuptake inhibitor (SSRI). SSRI medication at a stable dose for three months was permitted in the SAD group. Medications for minor conditions that would not directly affect measures related to the study (birth control, supplements, etc.) were also permitted. Beta blockers, stimulants, opioids, and other medications with known effects on sympathetic/parasympathetic functioning and/or attentional processing were not permitted. Current unipolar major depression or drug/alcohol abuse/dependence was an exclusion criterion for this study, but a past history of these conditions was not. Since these conditions are frequently present in SAD participants, exclusion of such participants would yield a biased and unrepresentative sample of the SAD population.
After written informed consent was obtained, each participant was randomly assigned to one of three conditions (attend positive, attend threat, or control). Participants completed all tasks and self-report measures in the lab with researcher supervision over two visits. During the first visit, participants were screened for participation criteria and completed all self-report paper measures (LSAS, SIAS, BDI). During the second visit, participants completed three dot-probe tasks, including pre-assessment, ABM training, and post-assessment. Although not included in these analyses, self-report paper measures were repeated following ABM training.
Attention Bias Assessment
Attention bias was assessed with the same task before and after ABM. The assessment task was a modified dot-probe task consisting of 94 trials. Participants were presented with a fixation cross for 500ms followed by two affective faces, one positive (happy) and one negative (angry), for 500ms. After the faces disappeared, one face was replaced by an arrow. Participants were instructed to respond to the direction of the arrow (up or down), by pressing the arrow keys on a standard keyboard, as “quickly and accurately as possible”. The next trial began after a response was made. During this task, the positive and negative faces were replaced by the arrow with equal probability (50%). Reaction time and accuracy, used to determine the location of the participant’s attention, were recorded on each trial. Incorrect trials, and trials with a reaction time more than three standard deviations from the participant’s own mean, were excluded.
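The trial-exclusion rule above can be sketched as follows. This is an assumed illustration of the stated rule, not the study's actual data pipeline; the data layout (reaction time, correctness) is hypothetical.

```python
# Drop incorrect trials, then drop trials whose reaction time lies more
# than three standard deviations from the participant's own mean RT.
from statistics import mean, stdev

def clean_trials(trials):
    """trials: list of (reaction_time_ms, correct) tuples for one participant."""
    correct_rts = [rt for rt, ok in trials if ok]
    m, sd = mean(correct_rts), stdev(correct_rts)
    return [rt for rt in correct_rts if abs(rt - m) <= 3 * sd]

# Twenty typical trials plus one extreme RT and one incorrect response:
print(clean_trials([(500, True)] * 20 + [(2000, True), (450, False)]))
# the 2000 ms outlier and the incorrect 450 ms trial are both removed
```

One caveat of this common design: the mean and standard deviation are computed with the outlier included, so with very few trials an extreme value can inflate its own exclusion threshold and survive the cut.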
Attention Bias Modification (ABM)
Prior to training, participants were told researchers were testing a computer-based attention training program that could alter the way people process and pay attention to social information, which may help reduce SAD symptoms. ABM followed the same format as the assessment task. The only differences between the assessment and ABM were the face replacement ratio and the length. We employed three ABM training conditions: attend positive (arrow replaced the positive face 80% of the time), attend threat (arrow replaced the negative face 80% of the time), and control (both face types were replaced with equal probability). Each training session consisted of 744 trials. Note that the control ABM training was identical to the assessment task, excluding the number of trials, in order to control for the impact of viewing faces and engaging in the task without manipulating attentional direction. All faces used in both ABM and the assessments were from the Pictures of Facial Affect (Ekman & Friesen, 1976), a widely used standardized face set.
All analyses were conducted in IBM SPSS Statistics (version 21). The threshold for significance was set at an alpha level of 0.05. Attention bias was calculated by subtracting the average reaction time on threat trials (arrow replaced the threat face) from the average reaction time on positive trials (arrow replaced the positive face). To examine carry-over effects, each trial was categorized based on both the current trial and the trial that preceded it. As seen in Figure 1, a threat to positive trial was a positive trial preceded by a threat trial.
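The two computations just described can be illustrated concretely. The sketch below uses assumed variable names and data layout; it is not the authors' SPSS syntax, and the sign convention follows the Methods as written (threat-trial mean subtracted from positive-trial mean).

```python
# bias_score: mean positive-trial RT minus mean threat-trial RT.
# carry_over_types: tag each trial (after the first) with one of the four
# categories used in the ANOVA, e.g. "threat to positive".
from statistics import mean

def bias_score(trials):
    """trials: list of (trial_type, rt_ms) with trial_type 'threat' or 'positive'."""
    pos = [rt for t, rt in trials if t == "positive"]
    thr = [rt for t, rt in trials if t == "threat"]
    return mean(pos) - mean(thr)

def carry_over_types(trial_types):
    """Label each trial by its predecessor and itself."""
    return [f"{prev} to {curr}" for prev, curr in zip(trial_types, trial_types[1:])]

trials = [("threat", 480), ("positive", 510), ("positive", 505), ("threat", 490)]
print(bias_score(trials))                        # 507.5 - 485.0 = 22.5
print(carry_over_types([t for t, _ in trials]))
# ['threat to positive', 'positive to positive', 'positive to threat']
```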
A 2 (time: pre-ABM, post-ABM) × 4 (trial type: positive to positive, threat to threat, positive to threat, threat to positive) × 2 (group: HC, SAD) × 3 (ABM condition: attend threat, attend positive, control) mixed ANOVA, with time and trial type as within-subjects variables and group and condition as between-subjects variables, was run with reaction time as the dependent variable. This analysis examined how the congruence of the arrow replacement on the previous trial affected reaction time on the next trial for different face types in HC and SAD.
To examine initial differences between the HC and SAD groups, a t-test comparing mean pre-ABM bias scores in SAD versus HC was conducted. Finally, to determine if attention bias changed as a function of group or condition, we conducted a 2 (time: pre-ABM, post-ABM) × 2 (group: HC, SAD) × 3 (ABM condition: attend threat, attend positive, control) mixed ANOVA with time as a within-subjects variable, group and condition as between-subjects variables, and attention bias score as the dependent variable.
Participants were similar across condition and group in age [F(2,79) = 0.859, p = .428] and gender [F(2,78) = 0.205, p = .815]. Participants were also similar within group across ABM conditions on all self-report measures: SIAS [F(2,79) = 0.282, p = .755], LSAS [F(2,79) = 2.105, p = .129], and BDI [F(2,79) = 0.142, p = .868]. There were significant differences between the HC and SAD groups on SIAS [F(1,79) = 356.4, p < .001], LSAS [F(1,79) = 365.5, p < .001], and BDI [F(1,79) = 49.76, p < .001], with SAD subjects reporting significantly more symptoms on all measures than HC. Demographics and self-report measures are summarized in Table 1.
Attention Biases to Affective Faces
Mean biases and standard deviations for all groups and conditions are shown in Table 2. The average pre-ABM attention bias was 0.856ms (standard deviation: 32.7ms) for HC and -6.05ms (standard deviation: 32.4ms) for SAD. A negative bias indicates a bias towards threat, with a score of zero indicating no bias in either direction. One-sample t-tests confirmed that SAD and HC subjects’ bias scores did not differ significantly from zero, suggesting no attention bias prior to training in either group (p > .05). Attention bias did not change as a function of group or condition: no main effects or interactions were significant (p > .05). A one-sample t-test confirmed that SAD and HC subjects’ bias scores did not differ significantly from zero after ABM, suggesting no attentional bias prior to or after ABM, regardless of condition (p > .05).
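The one-sample tests reported above compare each group's mean bias score against zero. A minimal sketch of the underlying t statistic follows (the standard textbook formula, not the SPSS output; the sample scores are made up for illustration).

```python
# One-sample t statistic for H0: population mean bias = 0.
# t = mean / (sd / sqrt(n)), evaluated against a t distribution
# with df = n - 1 to obtain the p-value.
from math import sqrt
from statistics import mean, stdev

def t_against_zero(scores):
    """scores: list of per-participant bias scores (ms)."""
    n = len(scores)
    return mean(scores) / (stdev(scores) / sqrt(n))

print(t_against_zero([1.0, 1.0, 1.0, 3.0]))  # 3.0
```

A t value near zero, as with the bias scores reported here, corresponds to a large p-value and thus no evidence of a bias in either direction.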
The Effect of Carry-Over on Reaction Times
In the analysis of carry-over effects across groups and ABM conditions, there was a significant effect of time [F(1,77) = 33.41, p < .001] but not of trial type [F(3,75) = 0.378, p = .798]. There was also no interaction between time, trial type, and either group [F(3,75) = 0.876, p = .457] or condition [F(6,150) = 1.058, p = .391]. This demonstrates that participants’ responses became faster over the course of training independent of trial type, group, or condition. No other main effects or interactions were significant (p > .05).
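A carry-over analysis conditions each reaction time on the type of the trial that preceded it. The sketch below illustrates that coding step only; the function name, trial labels, and data are hypothetical, not the study's.

```python
from collections import defaultdict
from statistics import mean

def carryover_means(trials):
    """Group reaction times by (previous trial type, current trial type).

    `trials` is a list of (trial_type, rt) pairs in presentation order.
    A carry-over analysis compares cells ending in the same trial type,
    e.g. ('threat', 'positive') versus ('positive', 'positive'): if the
    preceding trial matters, those cell means should differ.
    """
    cells = defaultdict(list)
    for (prev_type, _), (curr_type, rt) in zip(trials, trials[1:]):
        cells[(prev_type, curr_type)].append(rt)
    return {pair: mean(rts) for pair, rts in cells.items()}

seq = [("threat", 520), ("positive", 480), ("positive", 470),
       ("threat", 530), ("positive", 490)]
print(carryover_means(seq))
```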
The purpose of this study was to determine why ABM alters attention bias in some cases but not in others, with specific focus on carry-over effects, in an effort to better understand inconsistencies in the ABM literature. To our knowledge, this is the first study examining carry-over effects in a modified dot-probe paradigm. Our findings did not replicate previous reports that ABM alters attention bias (Li et al., 2008; Amir et al., 2009; White et al., 2011). However, there are numerous possible interpretations of these findings, including: 1) congruent and incongruent trials are measuring different components of attention; 2) two types of SAD biases exist, which impact response to training; and 3) trial order randomization may have impacted carry-over effects. Because of these possibilities, future studies should consider analyzing congruent and incongruent trials separately, as well as matching individual participants to different training protocols based on pre-training bias scores.
The majority of previous literature reports that participants with SAD show an attention bias towards threat faces relative to positive faces, whereas HC show little to no attention bias either towards or away from threat faces (Bogels & Mansell, 2004). This study did not support these findings. Our results are consistent with those of Boettcher et al. (2013), who also reported that SAD subjects’ bias scores did not differ significantly from zero and that neither type of ABM, towards or away from threat, modified attention bias. Despite the lack of significant ABM effects, we did obtain the expected effect of time, such that participants’ responses became faster from pre- to post-training. Had carry-over effects been present, we would have expected differences in average reaction time between trial types ending in the same trial type (e.g., threat-to-positive versus positive-to-positive). Since these differences were absent from our data, carry-over effects were not detected in this dot-probe paradigm. However, there may have been other contributing or conflicting factors.
One possible factor influencing our findings is that congruent and incongruent trials measure different components of attention. Carry-over effects in the emotional Stroop task allude to these two components of attention: orienting (to initially focus one’s attention) and maintenance (to maintain that focus; Waters et al., 2005). Similar to Stroop, it is possible that in the dot-probe task congruent trials measure maintenance of attention, while incongruent trials measure orienting attention. Dot-probe task data from subjects who display a bias in one direction should be analyzed in order to determine if different components of attention are being measured. This would also help determine how an initial attention bias in SAD affects the outcome of ABM and how initial bias can be utilized to improve current treatment. Future studies should aim to better understand how different components of attention impact performance on the dot-probe task.
A bias must exist in order to use carry-over effects to elucidate whether dot-probe measures two components or one component of attention. Because subjects in this study did not initially show a bias, their bias was not shifted as a result of ABM (O’Toole & Dennis, 2012). Thus, we were not able to further explore relationships between the different components of attention and trial types on the dot-probe task.
It is also possible that two types of SAD biases exist. There is some evidence in the literature that SAD subjects can be either “attenders” or “avoiders” of threat (O’Toole & Dennis, 2012): some subjects with SAD have a bias toward threat while others have a bias away from threat. Most studies (Amir et al., 2008; White et al., 2011; Heeren et al., 2011) assume that all SAD subjects have the same bias. If both subtypes are present in a sample, within-group variability increases, making effects of ABM difficult to detect; this could explain why some studies show a bias while others do not. In our study, we did not have enough participants to properly investigate this theory, but it is possible that individual differences in attention bias interact with training type, such that certain people benefit more from specific types of training. In order to optimize training effects, it is important to consider interactions between initial bias and training condition. Heeren et al. (2011) and Klumpp and Amir (2010) propose that the process of disengaging attention may be the mechanism underlying ABM’s effectiveness. Therefore, to facilitate the most disengagement, it may be important to match initial bias with training condition. An attender (someone with a bias towards threat) may benefit most from the attend-positive condition, which requires them to disengage from the threat face, whereas an avoider (someone with a bias away from threat) may benefit most from training towards threat. Separating these two groups and training them differently may be the key to providing the ideal treatment for SAD.
The absence of carry-over effects could also be explained by the trial randomization procedure we used in the dot-probe paradigm, which was intended to eliminate carry-over effects.
When interpreting these findings, there are a number of limitations to consider. First, this was a pilot study aiming to investigate effects of ABM on SAD symptoms, so the null findings may be due to lack of power. Statistical power analysis suggests that our sample size of approximately 14 participants per cell yields low to moderate power (65%) to detect differences between groups and ABM conditions. Another limitation stems from the intervention length. Although participants spent about 40 minutes completing 750 ABM trials, only a single session was completed. To properly simulate a future treatment, participants should engage in multiple sessions over a longer period of time; it is possible that additional sessions are necessary to produce alterations in attention bias (Li et al., 2008). Finally, motivation may have been a limitation: patients receiving a treatment, compared to paid participants, may be more motivated and vigilant during training and thus might produce more robust results. In order to ameliorate some of these limitations, future studies should further investigate the role carry-over effects play in ABM in larger samples with additional assessment tools.
SAD is a debilitating disorder which leads to significant impairment in work and social situations. As a result of ongoing research in this field, computer-based ABM is a possible alternative to current SAD treatments. Although further research is needed to make this type of ABM effective, a great deal is being learned about attention and SAD through this paradigm. Ample evidence suggests that attention bias plays an integral role in SAD and its treatment (Amir et al., 2009; Frewen, Dozois, Joanisse, & Neufeld, 2008). Consequently, we can begin manipulating treatment to best serve patients. In an effort to further refine computer-based ABM protocols, this study examined carry-over effects as a possible explanation for inconsistent findings in the existing literature. Based on the findings of this and other studies, future studies should consider analyzing congruent and incongruent trials separately as well as grouping participants based on pre-training attention bias. Since ABM is more cost-effective and easier to distribute than current therapies for SAD, it has the potential to relieve many SAD patients of their symptoms.
All of the data collection and analysis took place in Dr. Israel Liberzon’s Human Neuroimaging Lab at the University of Michigan. I would like to thank Elizabeth Duval, Ph.D. for allowing me to work on her study, use the results in my analysis, and for reviewing and editing this paper. Her time and effort in teaching me about ABM and how to analyze and discuss the data is greatly appreciated.
I would also like to thank those who assisted in this research and subsequent paper: Daniel Sheridan Ph.D., RN, FAAN for reviewing and editing this paper; Sonalee Joshi for reviewing and editing this paper, data collection, and scoring; and Kelsey Krus for data collection and scoring. This research and my time at the University of Michigan Psychiatry Department would not be possible without contributions from the Undergraduate Research Opportunities Program (UROP) and Biomedical & Life Sciences Summer Fellowship donors.
Aguirre, G. K. (2007). Continuous carry-over designs for fMRI. NeuroImage, 35(4), 1480–1494. doi:10.1016/j.neuroimage.2007.02.005
Amir, N., Weber, G., Beard, C., Bomyea, J., & Taylor, C. T. (2008). The effect of a single-session attention modification program on response to a public-speaking challenge in socially anxious individuals. Journal of Abnormal Psychology, 117(4), 860-868. doi:10.1037/a0013445
Amir, N., Beard, C., Taylor, C. T., Klumpp, H., Elias, J., Burns, M., & Chen, X. (2009). Attention training in individuals with generalized social phobia: A randomized controlled trial. Journal of Consulting and Clinical Psychology, 77(5), 961–973. doi:10.1037/a0016685
American Psychiatric Association. (2013). Diagnostic and Statistical Manual of Mental Disorders, (5th ed.). Washington, DC: American Psychiatric Association.
Beck, A.T., Steer, R.A., Ball, R., & Ranieri, W.F. (1996). Comparison of Beck Depression Inventories -IA and -II in psychiatric outpatients. Journal of Personality Assessment, 67(3) 588-597. doi:10.1207/s15327752jpa6703_13
Boettcher, J., Leek, L., Matson, L., Holmes, E. A., Browning, M., MacLeod, C., … Carlbring, P. (2013). Internet-based attention bias modification for social anxiety: A randomised controlled comparison of training towards negative and training towards positive cues. PLoS ONE, 8(9). doi:10.1371/journal.pone.0071760
Bogels, S. M., & Mansell, W. (2004). Attention processes in the maintenance and treatment of social phobia: Hypervigilance, avoidance and self focused attention. Clinical Psychology Review, 24, 827-856. doi:10.1016/j.cpr.2004.06.005
Bradley, B. P., Mogg, K., Millar, N., Bonham-Carter, C., Fergusson, E., Jenkins, J., & Parr, M. (1997). Attentional biases for emotional faces. Cognition & Emotion, 11(1), 25–42. doi:10.1080/026999397380014
Brown, E. J., Turovsky, J., Heimberg, R. G., Juster, H. R., Brown, T. A., & Barlow, D. H. (1997). Validation of the social interaction anxiety scale and the social phobia scale across anxiety disorders. Psychological Assessment, 9(1), 21-27. doi:10.1037/1040-3590.9.1.21
Carlbring, P., Apelstrand, M., Sehlin, H., Amir, N., Rousseau, A., Hofmann, S. G., & Andersson, G. (2012). Internet-delivered attention bias modification training in individuals with social anxiety disorder – a double blind randomized controlled trial. BMC Psychiatry, 12(1), 66. doi:10.1186/1471-244X-12-66
Cisler, J. M., Bacon, A. K., & Williams, N. L. (2009). Phenomenological characteristics of attentional biases towards threat: A critical review. Cognitive Therapy and Research, 33(2), 221–234. doi:10.1007/s10608-007-9161-y
Ekman, P., & Friesen, W. V. (1976). Pictures of Facial Affect. Consulting Psychologists Press, Palo Alto, CA.
Frewen, P. A., Dozois, D. J. A., Joanisse, M. F., & Neufeld, R. W. J. (2008). Selective attention to threat versus reward: Meta-analysis and neural-network modeling of the dot-probe task. Clinical Psychology Review 28, 307–337. doi:10.1016/j.cpr.2007.05.006
Harrison, L. K., & Turpin, G. (2003). Implicit memory bias and trait anxiety: A psychophysiological analysis. Biological Psychology, 62(2), 97–114. doi:10.1016/S0301-0511(02)00129-1.
Heeren, A., Lievens, L., & Philippot, P. (2011). How does attention training work in social phobia: Disengagement from threat or re-engagement to non-threat? Journal of Anxiety Disorders, 25(8), 1108–1115. doi:10.1016/j.janxdis.2011.08.001
Holle, C., Neely, J., & Heimberg, R. (1997). Blocked versus random presentation and semantic relatedness of stimulus words on response. Cognitive Therapy and Research, 21(6), 681–697. doi:10.1023/A:1021860324879
Izetelny, A. (2006). Attentional biases for negative and positive attachment words associated with individual differences in adult attachment and defensiveness: Effects of stimulus content, intensity, and valence. Carbondale, IL: ProQuest.
Jones-Chester, M. H., Monsell, S., & Cooper, P. J. (1998). The disordered-salient Stroop effect as a measure of psychopathology in eating disorders. International Journal of Eating Disorders, 24, 65-82. doi:10.1002/(SICI)1098-108X(199807)24:1<65::AID-EAT6>3.0.CO;2-Z
Klumpp, H., & Amir, N. (2010). Preliminary study of attention training to threat and neutral faces on anxious reactivity to a social stressor in social anxiety. Cognitive Therapy and Research, 34(3), 263–271. doi:10.1007/s10608-009-9251-0
Li, S., Tan, J., Qian, M., & Liu, X. (2008). Continual training of attentional bias in social anxiety. Behaviour Research and Therapy, 46(8), 905–912. doi:10.1016/j.brat.2008.04.005
Liebowitz, M. R. (1987). Social phobia. Modern Problems in Pharmacopsychiatry, 22, 141-173.
Nonyane, B. A., & Theobald, C. M. (2007). Design sequences for sensory studies: achieving balance for carry-over and position effects. The British Journal of Mathematical and Statistical Psychology, 60(Pt 2), 339–349. doi:10.1348/000711006X114568
O’Toole, L., & Dennis, T. A. (2012). Attention training and the threat bias: An ERP study. Brain and Cognition, 78(1), 63–73. doi:10.1016/j.bandc.2011.10.007
Pittig, A., Pawlikowski, M., Craske, M. G., & Alpers, G. W. (2014). Avoidant decision making in social anxiety: The interaction of angry faces and emotional responses. Frontiers in Psychology, 5(September), 1050. doi:10.3389/fpsyg.2014.01050
Ruscio, A. M., Brown, T. A., Chiu, W. T., Sareen, J., Stein, M. B., & Kessler, R. C. (2008). Social fears and social phobia in the USA: Results from the National Comorbidity Survey Replication. Psychological Medicine, 38, 15-28. doi:10.1017/S0033291707001699
Sheehan, D., Lecrubier, Y., Harnett-Sheehan, K., Amorim, P., Janavs, J., Weiller, E., et al. (1998). The mini international neuropsychiatric interview (MINI): The development and validation of a structured diagnostic psychiatric interview for DSM-IV and ICD-10. The Journal of Clinical Psychiatry, 59 Suppl 20, 22-33; quiz 34-57.
Waters, A. J., & Feyerabend, C. (2000). Determinants and effects of attentional bias in smokers. Psychology Of Addictive Behaviors, 14(2), 111-120. doi:10.1037/0893-164X.14.2.111
Waters, A. J., Sayette, M. A., Franken, I. H. A., & Schwartz, J. E. (2005). Generalizability of carry-over effects in the emotional Stroop task. Behaviour Research and Therapy, 43(6), 715–732. doi:10.1016/j.brat.2004.06.003
White, L. K., Suway, J. G., Pine, D. S., Bar-Haim, Y., & Fox, N. A. (2011). Cascading effects: The influence of attention bias to threat on the interpretation of ambiguous information. Behaviour Research and Therapy, 49(4), 244–251. doi:10.1016/j.brat.2011.01.004
Mutational Analysis of Transcriptional Initiation in Bacteria
Gene expression in bacteria begins with transcription, the process by which the base sequence of DNA is copied into RNA. Promoters contain the DNA structural and chemical information for the initiation of transcription by RNA polymerase. Many bacterial promoters have conserved DNA sequences for this purpose called the -35 region and the -10 region. We used a new system for promoter research called pClone Red to carry out mutational analysis of the -35 region of a high-strength promoter. We constructed a library of 4,096 mutant promoters by varying the six bases in the -35 region of the Ptac promoter. We isolated and sequenced 81 functional promoters from the library, and measured the strength of transcription initiation for each of them using a red fluorescent protein reporter gene. We developed a weighted consensus method to determine a pClone -35 consensus sequence formula for the 81 promoters. Consensus sequence variants functioned with a strength similar to that of the published natural consensus for the -35 region and functioned in two other promoters with different sequence contexts. Our results contribute to a basic research understanding of transcriptional initiation in bacteria, and our set of 81 promoters with varying strengths provides new tools for applied research in synthetic biology and biotechnology.
Gene expression is the process by which gene information is used to direct the function of cells. It is regulated in all cells because not all genes are required all the time or under all circumstances. For example, human brain cells need to express certain genes that are not needed in muscle cells, and vice versa (Gurdon & Melton, 2008). In a similar sense, bacteria must express different genes depending on temperature, pH, osmotic pressure, or the availability of food (Beales, 2004). Knowledge of gene regulation is important for understanding the differentiation and function of eukaryotic cells, the development of tissues in multicellular organisms, and the relationships of bacteria to their environments throughout the biosphere. It helps us to understand genetic diseases, diseases impacted by genetics, and cellular disorders such as cancer (López-Bigas & Ouzounis, 2004). Gene regulation research can also be used to explore the contribution of bacterial microbiomes to human health and disease (Cho & Blaser, 2012). In the context of synthetic biology, research results on the regulation of gene expression are being applied to engineer bacterial cells that produce pharmaceuticals, attack cancer cells, neutralize environmental pollutants, and synthesize biofuels (Khalil & Collins, 2010).
Gene expression begins with transcription, the process by which DNA information, in the form of the base sequence of a gene, is converted into RNA base sequence information. For genes that encode proteins, the RNA product of transcription is used during translation to encode the sequence of amino acids in a protein. As the first step in gene expression, transcription is an important control point for gene regulation. Initiation of transcription in bacteria involves binding of an enzyme called RNA polymerase to a sequence of DNA called a transcriptional promoter. As illustrated in Figure 1, a common form of bacterial promoters includes two conserved sequence elements, a -35 region that is recognized during transcriptional initiation by RNA polymerase and its associated Sigma factor, and a -10 region that is involved in DNA melting (Ross, Aiyar, Salomon, & Gourse, 1998). The consensus sequence for the -35 region of E. coli promoters has been widely reported to be TTGACA (Harley & Reynolds, 1987; Lisser & Margalit, 1993). The consensus sequence of the -10 region is TATAAT (Waterman, Arratia, & Galas, 1984). The RNA polymerase attaches itself to one of the two DNA strands referred to as the template strand and begins to use it to make RNA. The RNA polymerase proceeds to slide along the template strand for the entire length of the gene, reading it in a 3’ to 5’ direction. Transcription ceases when the RNA polymerase encounters a transcriptional terminator. In bacteria, the resulting RNA transcript is used for translation as soon as it is available.
Mutational analysis of the -35 region of naturally occurring promoters showed that the -35 region is involved in the initial binding of the RNA polymerase to the promoter, and that it is an important contributor to the overall strength of a bacterial promoter. An in vitro study of the effect of mutations in the -35 region on the binding affinity of RNA polymerase and the rate of open promoter complex formation demonstrated that the consensus sequence of TTGACA exhibited the highest values for both parameters (Kobayashi, Ishihara & Nagata, 1990). An in vivo mutational analysis revealed that the strength of a promoter was halved by deletions that replace the -35 region with unrelated sequences (Bingham, Busby, Ponnambalam & Webster, 1986).
We chose to investigate the effect of mutations in the -35 region by 1) building a library of all 4,096 variants of the six base -35 region, 2) randomly selecting promoters from the library, 3) measuring the strengths of the promoters, 4) determining the DNA sequence of the promoters, and 5) compiling a -35 region consensus sequence from the promoters. For our study, we chose the Ptac bacterial promoter, which is a high-strength hybrid promoter derived from the Ptrp and Plac promoters (DeBoer, 1983; Figure 1). Starting from a high-strength promoter allowed mutations to generate a wide range of lower strengths. The sequence of the Ptac promoter is shown in Figure 1. The sequence of the -35 region is identical to the -35 region consensus of TTGACA. The method by which we introduced mutations into Ptac involves a system recently developed by our research group (Campbell et al., 2014). Figure 2 illustrates the structure of our pClone Red plasmid for cloning and measuring bacterial promoters. The pClone Red plasmid has a backwards-facing promoter that causes expression of Green Fluorescent Protein (GFP). A new promoter is cloned into pClone Red by Golden Gate Assembly, a new cloning method that employs DNA ligase and type IIs restriction endonucleases such as BsaI to conduct restriction digestion and ligation at the same time for seamless connection of DNA parts (Engler, Gruetzner, Marillonnet, Weber, & Werner, 2011). A new promoter cloned into pClone Red faces forward and expresses Red Fluorescent Protein (RFP). The amount of RFP expressed depends on the strength of the new promoter.
We used pClone Red to produce a library of mutant promoters by varying each of the six bases of the -35 region of the Ptac promoter. We collected 81 different clones with varying intensities of the RFP reporter to quantify the strength of transcription initiation for each one. Our results add to the understanding of transcriptional initiation in bacteria. They suggest that promoter sequence variation in the -35 region affects sigma factor binding. Our consensus sequence is an alternative testable hypothesis about the expected strengths of bacterial promoters. The 81 promoter variants have various strengths that can be used to build complicated genetic circuits that require specific ratios of gene expression. These genetic circuits have many applications in energy, medicine, bioremediation, or biotechnology.
Materials and Methods
We introduced mutations into the Ptac -35 region by designing DNA oligonucleotides (Integrated DNA Technologies, Inc., Coralville, Iowa). We used six Ns in the place of the TTGACA sequence that comprises the -35 region of the Ptac promoter. The sequence of the top strand oligonucleotide was
and the sequence of the bottom strand oligonucleotide was
For use in Golden Gate Assembly (GGA), the top strand oligonucleotide starts with a 5’ CGAC sticky-end and the bottom strand begins with a 5’ CCGC sticky-end. Prior to GGA, we annealed the top and bottom strand oligonucleotides at 5µM in the annealing reaction. We prepared a 20µL annealing reaction using 16µL of H2O, 2µL of 10X annealing buffer (1M NaCl, 100mM Tris-HCl pH 7.4), 1µL top strand promoter oligonucleotide, and 1µL bottom strand promoter oligonucleotide. We boiled the annealing reaction in 400mL of water for 4min and allowed it to cool slowly overnight in the same water bath.
The annealed oligonucleotides were diluted with water to the same concentration as the pClone Red destination vector (Campbell et al., 2014; http://parts.igem.org/Part:BBa_J119137 in the Registry of Standard Biological Parts) to provide a 1:1 molar ratio of promoter insert to pClone vector in the GGA reaction. For GGA, we used a vector concentration of 40nM (60ng/µL of pClone Red) and diluted the annealed and cooled oligonucleotides to 40nM. We diluted the 100µM oligonucleotides to 5µM (5000nM) in the annealing reaction and further diluted the annealed oligos 125-fold (5000nM/40nM) by adding 124µL of H2O to 1µL of the cooled oligonucleotides. We performed GGA reactions in a total volume of 10µL containing 6µL H2O, 1µL 10X T4 DNA Ligase Buffer (supplied by New England BioLabs, Ipswich, Massachusetts as 300mM Tris-HCl pH 7.8 at 25°C, 100mM MgCl2, 100mM DTT and 10mM ATP), 1µL of 40nM pClone Red destination vector, 1µL of 40nM annealed promoter oligonucleotides, 0.5µL HF (high fidelity; New England BioLabs, Ipswich, Massachusetts) BsaI (10 units), and 0.5µL T4 DNA Ligase (New England BioLabs, Ipswich, Massachusetts; 50 Weiss units). We placed the GGA reactions in a thermal cycler set for 20 cycles of 1min at 37°C followed by 1min at 16°C. The 37°C temperature favors digestion by BsaI-HF while the 16°C favors ligation by T4 DNA ligase. The GGA reactions ended with a 15min incubation at 37°C to cleave the remaining BsaI sites. We transformed the completed GGA reactions into Z-competent E. coli JM109 cells (Zymo Research, Irvine, California) by adding competent cells to the completed GGA reaction, placing the mixture on ice for 5min, and pipetting it directly onto a LB agar plate with 50µg/mL ampicillin. We incubated the plates overnight at 37°C.
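The dilution arithmetic above (5000nM stock diluted 125-fold to the 40nM working concentration) can be captured in a small helper. The function name is ours, not part of any protocol, but the numbers reproduce the dilution described in the text.

```python
def dilution(stock_nM, target_nM, stock_volume_uL=1.0):
    """Return (fold dilution, µL of water to add) to reach target_nM.

    fold = stock / target; for each µL of stock, add (fold - 1) µL
    of water so the total volume is fold × the stock volume.
    """
    fold = stock_nM / target_nM
    water_uL = stock_volume_uL * (fold - 1)
    return fold, water_uL

# 5µM (5000nM) annealed oligos diluted to 40nM, as in the text:
fold, water = dilution(stock_nM=5000, target_nM=40)
print(fold, water)  # 125.0 124.0 -> add 124µL H2O to 1µL of oligos
```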
We photographed pClone mutant promoter library plates to document the colors of the colonies and chose those that had a wide range of RFP expression due to different promoter strengths. We transferred 81 colonies to LB + ampicillin liquid media with a sterile toothpick for overnight growth. We made a glycerol stock for each of them using 300µL sterile glycerol and 700µL of overnight bacterial culture. We determined the strength of the promoters by quantifying the RFP output of clones with a fluorometer (BioTek, Winooski, Vermont) set for 585nm excitation and 615nm emission. We measured cell density with absorbance at 600nm and expressed RFP fluorescence as a ratio with cell density values in the denominator. We isolated plasmid DNA from 8mL cultures of each clone grown overnight in LB with ampicillin using the miniprep kit from Zymo Research, Irvine, California. We sent miniprep aliquots for Sanger DNA sequencing to the Iowa State University DNA Facility, Ames, Iowa.
Consensus Sequence Determination and Testing
Analysis of the sequencing data led to the development of a consensus sequence, called the pClone -35 Consensus. The consensus sequence formula of GA(T/C)A(A/T)A includes the four possible sequence variants GATAAA, GATATA, GACAAA, and GACATA. To test their functions, we incorporated them into the sequence of Ptac and cloned them into pClone Red. We sought to test the effect of promoter sequences surrounding the -35 region by embedding the four consensus sequence variants into two new promoter contexts that are very different from Ptac and from each other. As a starting point for designing the two new promoters, we used a consensus sequence developed from a mutational analysis of the sequences before and after the -35 region, reported as
(5’- WTWNWNNNNAWWWNNTTCTTGACATNTTNNNNNNNWTRTGRTATAATDNNNNNRTWW -3’)
(De Mey, Lequeux, Maertens, Soetaert, & Vandamme, 2007a). Our first set of rules to specify the bases was W=A, N=G, R=G, and D=G, while avoiding runs of 4 or more Gs or Cs. This resulted in Psimp1, whose sequence is
(5’- CGACGAGGCCGAAAAGGTTCTTGACATGTTGGCGGCGATGTGGTATAATGGCGGCGTAA -3’).
The second set of rules we applied was W=T, N=C, R=A, and D=C resulting in Psimp2, with the sequence
(5’- CGACCTCCGCATTTCCTTCTTGACATCTTCCGCCGCTTATGATATAATCCGCCCATTT -3’).
We cloned the four pClone -35 consensus sequence variants GATAAA, GATATA, GACAAA, and GACATA, along with the original Ptac consensus of TTGACA, into Psimp1 and Psimp2 to produce a collection of ten new promoter variants. We quantified the strengths of the promoter variants by using RFP expression and compared the mean values for each of them with an unpaired t-test using a threshold for statistical significance of p = .05.
Mutant Promoter Library Construction and Exploration
Our primary research goal was to explore the effects of mutations in the -35 region of a bacterial promoter. We mutated each of the six bases in this region, producing a library of 4⁶ or 4,096 mutant promoters. Figure 3 shows a picture of the plated library of clones on a UV light box. Unsuccessful GGA resulted in colonies that expressed the GFP gene, driven by the original backwards-facing promoter of the pClone Red plasmid. Successful GGA produced all of the colonies that do not show GFP expression, but have varying levels of RFP expression. As expected, mutations in the -35 region of the Ptac promoter yielded diverse promoter strengths, causing the varying RFP brightness of E. coli colonies on the plate.
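The 4,096 count follows directly from enumerating every six-base sequence over the four DNA bases. A short sketch (illustrative only) makes this concrete:

```python
from itertools import product

# Every possible six-base -35 region: 4 bases at 6 positions = 4^6 sequences.
variants = ["".join(bases) for bases in product("ACGT", repeat=6)]

print(len(variants))              # 4096
print(variants[0], variants[-1])  # AAAAAA TTTTTT
print("TTGACA" in variants)       # True: the Ptac -35 consensus is one of them
```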
Our collection of 81 promoters with their -35 region sequences is listed in Figure 4, along with their strengths, expressed as a ratio of the measured RFP production of each clone to that of Ptac. The 81 promoters covered a wide range of strengths, from 128% of the strength of Ptac to 0.9%. The mean RFP expression ratio of the 81 clones was 19.7 +/- 28.3, and the median RFP expression ratio was 7.3. Of the 81 clones, three have a promoter strength higher than Ptac, 12 have a strength at least 50% of Ptac, and 44 have a promoter strength less than 10% of Ptac.
Consensus Sequence Determination
Our examination of the sequences of the -35 regions from the 81 promoters listed in Figure 4 revealed patterns of bases required for high promoter strength. We captured these patterns by determining a consensus sequence. Consensus sequences are usually determined by analyzing the top 5-20% of the promoters (Estrem, Gaal, Gourse, & Ross, 1998). We used this method to determine a consensus sequence from the 12 mutant promoters that had a strength of 50% or higher compared to Ptac. The resulting Preliminary pClone -35 Consensus is shown in Figure 5B. It differs considerably from the published -35 consensus shown in Figure 5A, which was derived from 263 E. coli promoters (Harley & Reynolds, 1987). Specifically, the first, second, third, and fifth bases differ between the two consensus sequences.
To use the information collected for all of the 81 promoter sequences, we developed a new method of consensus building. Our weighted consensus method allows each base in each position to make contributions to the consensus that are proportional to the strength of the promoter in which it is found. Our method assigns a score to each base in each position based on the RFP expression of the mutant promoter that contained it. For example, for mutant promoter number 65, which had a promoter strength of 128, the G in the first position received 128 points for that promoter. The T in the second position of promoter number 65 also received 128 points, as did the bases in the remaining four positions. For promoter number 50, which had a promoter strength of 118.9, the G in the first position received 118.9 points, as did the C in the second position and the other bases in their respective positions.
We totaled the points received by each of the four bases for all six positions and used the totals to produce a consensus sequence. For the first position, the totals for G, T, C, and A were 1008, 512, 84, and 76, respectively. The weighted consensus method resulted in the pClone -35 Consensus shown in Figure 5C. The pClone -35 Consensus sequence is similar to the Preliminary pClone -35 Consensus sequence but differs considerably from the Published -35 Consensus sequence.
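The weighted consensus procedure described above is straightforward to express in code. The sketch below is our own illustration with toy data, not the study's 81-promoter dataset: each base earns points equal to the strength of the promoter that contains it, and the highest-scoring base at each position forms the consensus.

```python
from collections import defaultdict

def weighted_consensus(promoters):
    """Weighted consensus: each base at each position earns points equal
    to the strength of the promoter containing it; the consensus base
    per position is the top scorer.

    `promoters` is a list of (sequence, strength) pairs of equal length.
    """
    length = len(promoters[0][0])
    scores = [defaultdict(float) for _ in range(length)]
    for seq, strength in promoters:
        for pos, base in enumerate(seq):
            scores[pos][base] += strength
    return "".join(max(s, key=s.get) for s in scores)

# Toy data (not the study's measurements): strong promoters dominate
# the per-position totals, so weak variants barely shift the consensus.
toy = [("GATAAA", 128.0), ("GCTAAA", 118.9),
       ("TTGACA", 100.0), ("ACGTAC", 5.0)]
print(weighted_consensus(toy))
```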
Testing the pClone Consensus Sequence
Because our synthetic pClone -35 consensus sequence differed from the natural published -35 consensus sequence, we designed experiments to test it. We used the pClone -35 Consensus presented in Figure 5C to develop the consensus formula of GA(T/C)A(A/T)A. This consensus formula has the four consensus sequence variants of GATAAA, GATATA, GACAAA, and GACATA. We incorporated these four variants into the original Ptac promoter for testing. To test the effect of promoter sequences surrounding the -35 region, we also incorporated the four consensus sequence variants into two additional promoters designed on the basis of a structural blueprint derived from mutational analysis in E. coli of the sequences before and after the -35 region (De Mey et al., 2007a). As described in Materials and Methods, we used the consensus sequence from De Mey et al. (2007a) to design two new promoters called Psimp1 and Psimp2. Figure 6 shows the RFP results after cloning the original Ptac -35 region sequence of TTGACA and the four pClone -35 consensus sequence variants GATAAA, GATATA, GACAAA, and GACATA into Ptac, Psimp1, and Psimp2. In the context of Ptac, the four consensus sequence variants have promoter strengths relative to Ptac that range from 21.6 to 167.1. The mean strength of the four sequence variants in Ptac was 99.0 +/- 66.4. In the context of Psimp1, the four variants had a mean promoter strength of 85.8 +/- 63.9. An unpaired t-test showed that the difference between this mean and the mean for Ptac is not statistically significant (p = .785). The range of the promoter strengths in Psimp1 was 28.6 to 174.6. In Psimp2, the mean promoter strength of the four variants was 20.0 +/- 8.5. This mean is lower than the mean for Ptac, but an unpaired t-test showed that the difference is not statistically significant (p = .056). The range for Psimp2 was from 10.9 to 27.6.
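The unpaired t-tests above compare mean promoter strengths between two contexts. As a minimal sketch (with made-up strength values, not the measured ones), the equal-variance t statistic can be computed as follows; converting it to a p value additionally requires the t distribution.

```python
import math

def unpaired_t(a, b):
    """Student's unpaired (equal-variance) t statistic for two samples."""
    na, nb = len(a), len(b)
    mean_a, mean_b = sum(a) / na, sum(b) / nb
    var_a = sum((x - mean_a) ** 2 for x in a) / (na - 1)  # sample variance
    var_b = sum((x - mean_b) ** 2 for x in b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (mean_a - mean_b) / (pooled_sd * math.sqrt(1 / na + 1 / nb))

# Hypothetical relative strengths for four variants in two promoter contexts
ptac = [21.6, 167.1, 100.0, 107.3]
psimp1 = [28.6, 174.6, 87.2, 52.8]
print(round(unpaired_t(ptac, psimp1), 3))
```

With only four variants per context and large spreads, the resulting t statistic is small, consistent with the non-significant p values reported.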
When we consider the -35 region consensus variants cloned into each promoter as populations, unpaired t-tests fail to reveal statistically significant differences among the three promoter contexts.
The alternative approach of considering each consensus variant in the three promoter contexts captures important differences among them. For instance, GATAAA produces strengths of 21.6 and 26.9 in Ptac and Psimp2, respectively, but a higher strength of 87.2 in Psimp1. GACAAA produces strengths of 167.1 and 174.6 in Ptac and Psimp1, respectively, but only 10.9 in Psimp2. The order of the sequence variants was different for each of the promoters. For Ptac, we found that GACAAA > GACATA > GATATA > GATAAA. It was GACAAA > GATAAA > GATATA > GACATA for Psimp1, and lastly, GACATA > GATAAA > GATATA > GACAAA for Psimp2. These results suggest that bacterial promoter strength is a function of the sequence of the -35 region and the sequence context surrounding it.
Implications for Understanding the Initiation of Bacterial Transcription
Although the core RNA polymerase with its five subunits is responsible for the production of RNA during transcription in E. coli and most bacteria, initiation of transcription requires its association with a subunit called the sigma factor (Gross et al., 1998). The primary sigma factor, sigma 70, initiates transcription of genes needed during exponential growth. The sigma factor enables binding of the RNA polymerase to the -10 region and -35 region of promoters. A conserved domain of sigma factor proteins, region 4.2, is a helix-turn-helix structural motif that contains amino acids that make contact with bases and phosphates on both strands of the -35 region (Gruber & Gross, 2003). When the nascent RNA is five to ten nucleotides long, the sigma factor dissociates from the RNA polymerase. The core RNA polymerase continues with the production of an RNA transcript in the absence of sigma factor. Initiation frequency is responsive to the promoter sequence and can vary over three orders of magnitude (Record et al., 1996). The prevailing view of how promoter strength varies in bacterial genomes is that the strength of sigma factor binding to a particular promoter depends on how closely it matches the consensus sequence. For the -35 region, this would be interpreted in terms of matches to the consensus sequence of TTGACA for E. coli promoters. The variety of -35 region sequences also is likely to be affected by the repeated use of promoter elements during evolution of the E. coli genome. Our study can be viewed as having removed this evolutionary historical bias, revealing the true E. coli consensus sequence by which promoter strength is determined. Removing the historical constraints leads to an alternative testable hypothesis in which the extent to which a particular promoter matches the pClone -35 consensus of GA(T/C)A(A/T)A determines its strength.
E. coli has six sigma factors in addition to sigma 70 that recognize -35 region sequences. Alternative sigma factors enable coordinate transcription from genes needed for responses to heat shock, starvation, and nitrogen limitation (Gross et al., 1998). The seven different sigma subunits in E. coli are used in a global regulatory capacity. Each of the E. coli sigma factors binds to a slightly different -35 region sequence. For example, the consensus sequence is TTGACA for sigma 70, CCCTTGAA for sigma 32, CTGGNA for sigma 54, and CTAAA for sigma F (Gruber & Gross, 2003). The E. coli genome contains more copies of the TTGACA -35 region sequence, since it is used for housekeeping genes. This biases the frequency distribution of -35 region sequences in favor of TTGACA (Harley & Reynolds, 1987; Lisser & Margalit, 1993). Our experimental approach may have removed the bias for TTGACA and allowed increases in the frequency of binding sites for one or more of the other six sigma factors. The result of alternative sigma factor binding would be a change in the consensus sequence away from TTGACA and toward the sequences required for alternative sigma factor binding. The in vivo binding sites for several alternative sigma factors have not been determined. If the RFP production differences between our pClone -35 consensus and the published one are due to interactions between our mutant promoters and alternative sigma factors, they will provide insight into the binding sites for alternative sigma factors. Since our pClone -35 consensus is GA(T/C)A(A/T)A compared to the sigma 70 binding site of TTGACA, the first, second, third, and fifth positions may be interacting with alternative sigma factors.
Contributing a New Promoter Set to Databases
In synthetic biology, a collection of promoter sequences of varying strengths can be a very valuable asset. To share our collection of promoters with the worldwide synthetic biology community of researchers, we contributed three pages to the Registry of Standard Biological Parts, which contains the largest existing collection of synthetic biology DNA parts, with over 7,000 parts (MIT Working Group, 2005). The first page is part number J119375 (http://parts.igem.org/Part:BBa_J119375). The Psimp1 and Psimp2 promoters with 4 consensus variants each are presented in the Registry as part numbers J119376 (http://parts.igem.org/Part:BBa_J119376) and J119377 (http://parts.igem.org/Part:BBa_J119377), respectively. We also contributed information about our promoters to a new database called the Registry of Functional Promoters (RFP; http://gcat.davidson.edu/RFP/). RFP is a database that allows high school and undergraduate students to share information about promoters that they have cloned and studied using the pClone Red system. A third database to which we submitted information about the promoters is the GCAT-alog (http://gcat.davidson.edu/GCATalog/). This database contains information about the freezer location of clones so that synthetic biologists can share DNA parts and devices with each other.
Applications Using Promoters with Varying Strengths
The results of our study provided sequences and promoter strengths for a collection of 81 newly cloned promoters. An additional 19 promoters with changes in their -35 and -10 regions have been part of the Registry of Standard Biological Parts since 2004 (MIT Working Group, 2005). The sequence of each mutant promoter is presented along with relative strengths determined by RFP expression data (http://parts.igem.org/Part:BBa_J23100). There is also a collection of 14 promoters of varying strengths in a paper published in 2011 (Davis, Rubin, & Sauer, 2011). In synthetic biology, users can pick and choose promoters with different strengths for use in a wide range of applications. In the field of medicine, synthetic biology has been used to improve diagnosis and make medical treatments more effective with new drug discovery and drug delivery (Khalil & Collins, 2010). These applications rely on well-characterized promoter functionality. The tuning of synthetic biology devices with promoters of various strengths is fundamental to the growing field of biological engineering. Synthetic biology is also being used to develop microbial systems for biofuel production (Tao, Guo, Zhang, Deng, & Liu, 2015). The production of biofuels using microbial cells requires promoters with various strengths to control the transcription of genes that encode enzymes in biofuel metabolic pathways. Metabolic engineers have also repurposed naturally occurring metabolism in order to convert inexpensive starting materials into valuable metabolites (Keasling, 2010). Promoters of various strengths can be used to balance the desired orthogonal metabolic output with the metabolic cost to the bacteria (Eckdahl et al., 2015).
Future Prospects for Mutational Analysis of Bacterial Gene Expression
The pClone Red system facilitates further investigation of the initiation of transcription in bacteria. Our results showing the effects of -35 region sequence context indicate that we do not fully understand the effects of sequences upstream and downstream of the -35 region. Production of pClone Red libraries with variations in nearby sequences would enable the collection of promoter strength data and the development of consensus sequences for the upstream and downstream elements. Our approach of producing a library of mutant promoters by mutation of the -35 region of Ptac could also be adapted to the -10 region of Ptac or another promoter. Comparison of the resulting consensus sequence to the published consensus of TATAAT would add insight into the role of the -10 region in the initiation of transcription. In addition, the pClone Red system could be used to study regulated promoters, in contrast to constitutive promoters such as Ptac. It is also possible for mutations to be introduced into Ptac or other promoters that reduce or increase the distance between the -35 region and the -10 region.
While constitutive promoters initiate transcription at a constant level, regulated promoters respond to internal or external cues. The regulation usually occurs by the binding of a regulatory protein to a DNA sequence found in or near a transcriptional promoter. Repressor regulatory proteins turn off the initiation of transcription while activators turn it on. Each of the two types of regulatory proteins can exist in either of two structural forms, one that binds DNA and one that does not. This mechanism occurs when small molecules bind to the regulatory proteins and affect which structural form they adopt. The pClone Red system could be used to interrogate the sequence requirements for repressor or activator binding or the effect of -35 region or the -10 region sequences on regulation.
We have extended our pClone Red approach to the design and construction of systems for mutational analysis of other aspects of the regulation of bacterial gene expression. The tClone Red plasmid employs GGA to easily clone transcriptional terminators found at the ends of bacterial genes. Terminators function by the formation of base-paired stem and loop structures in the RNA that destabilize its interaction with the DNA template, causing RNA polymerase to terminate transcription (Cambray et al., 2013). Terminators are found upstream of genes as part of riboswitches, which can control gene expression by adopting alternative RNA folding structures. Both terminators and riboswitches are useful in synthetic biology (Ceres, Garst, Marcano-Velázquez, & Batey, 2013). The tClone Red plasmid permits the study of structure/function relationships in terminators and riboswitches. We have also designed and built rClone Red, which allows the study of ribosome binding sequences, or RBSs. RBSs are found about 6 bases upstream of the start codon and control the initiation of translation by base pairing with the 5’ end of the 16S ribosomal RNA of the small ribosomal subunit (Laursen, Sørensen, Mortensen, & Sperling-Petersen, 2005). In addition, riboswitches can use RBSs, adopting a structure that sequesters them or makes them available. Collections of RBSs with various strengths and riboswitches that are controlled by various metabolites are very useful to synthetic biologists (Khalil & Collins, 2010). The rClone Red plasmid allows the production of these collections as well as mutational analysis of both RBSs and riboswitches.
We would like to thank Dr. Jay Meyers of Saint Joseph Central High School for his support and Dr. A. Malcolm Campbell of Davidson College for his advice on the experimental approach and helpful comments on the manuscript. Support from National Science Foundation (http://www.nsf.gov/) RUI grant MCB-1329350 to Missouri Western State University is gratefully acknowledged.
Beales, N. (2004). Adaptation of Microorganisms to Cold Temperatures, Weak Acid Preservatives, Low pH, and Osmotic Stress: A Review. Comprehensive Reviews in Food Science and Food Safety, 3(1), 1–20. http://doi.org/10.1111/j.1541-4337.2004.tb00057.x
Cambray, G., Guimaraes, J. C., Mutalik, V. K., Lam, C., Mai, Q.-A., Thimmaiah, T., . . . & Endy, D. (2013). Measurement and Modeling of Intrinsic Transcription Terminators. Nucleic Acids Research, 41(9), 5139–5148. http://doi.org/10.1093/nar/gkt163
Campbell, A. M., Eckdahl, T., Cronk, B., Andresen, C., Frederick, P., Huckuntod, S., . . . & Yuan, J. (2014). pClone: Synthetic Biology Tool Makes Promoter Research Accessible to Beginning Biology Students. CBE Life Sciences Education, 13(2), 285–296. http://doi.org/10.1187/cbe.13-09-0189
Ceres, P., Garst, A. D., Marcano-Velázquez, J. G., & Batey, R. T. (2013). Modularity of select riboswitch expression platforms enables facile engineering of novel genetic regulatory devices. ACS Synthetic Biology, 2(8), 463–472. http://doi.org/10.1021/sb4000096
Cho, I., & Blaser, M. J. (2012). The human microbiome: at the interface of health and disease. Nature Reviews Genetics, 13(4), 260–270. http://doi.org/10.1038/nrg3182
Davis, J. H., Rubin, A. J., & Sauer, R. T. (2011). Design, construction and characterization of a set of insulated bacterial promoters. Nucleic Acids Research, 39(3), 1131–1141. http://doi.org/10.1093/nar/gkq810
De Boer, H. A., Comstock, L. J., & Vasser, M. (1983). The tac promoter: A functional hybrid derived from the trp and lac promoters. Proceedings of the National Academy of Sciences of the United States of America, 80(1), 21–25.
De Mey, M., Maertens, J., Lequeux, G. J., Soetaert, W. K., & Vandamme, E. J. (2007a). Construction and model-based analysis of a promoter library for E. coli: an indispensable tool for metabolic engineering. BMC Biotechnology, 7, 34. http://doi.org/10.1186/1472-6750-7-34
Eckdahl, T. T., Campbell, A. M., Heyer, L. J., Poet, J. L., Blauch, D. N., Snyder, . . . Whitesides, E. T. (2015). Programmed evolution for optimization of orthogonal metabolic output in bacteria. PLoS One, 10(2), e0118322. http://doi.org/10.1371/journal.pone.0118322
Estrem, S. T., Gaal, T., Ross, W., & Gourse, R. L. (1998). Identification of an UP element consensus sequence for bacterial promoters. Proceedings of the National Academy of Sciences of the United States of America, 95(17), 9761–9766.
Gross, C. A., Chan, C., Dombroski, A., Gruber, T., Sharp, M., Tupy, J., & Young, B. (1998). The Functional and Regulatory Roles of Sigma Factors in Transcription. Cold Spring Harbor Symposia on Quantitative Biology, 63, 141–156. http://doi.org/10.1101/sqb.1998.63.141
Gruber, T. M., & Gross, C. A. (2003). Multiple Sigma Subunits and the Partitioning of Bacterial Transcription Space. Annual Review of Microbiology, 57(1), 441–466. http://doi.org/10.1146/annurev.micro.57.030502.090913
Gurdon, J. B., & Melton, D. A. (2008). Nuclear Reprogramming in Cells. Science, 322(5909), 1811–1815. http://doi.org/10.1126/science.1160810
Harley, C. B., & Reynolds, R. P. (1987). Analysis of E. coli promoter sequences. Nucleic Acids Research, 15(5), 2343–2361.
Keasling, J. D. (2010). Manufacturing molecules through metabolic engineering. Science, 330(6009), 1355–1358. http://doi.org/10.1126/science.1193990
Khalil, A. S., & Collins, J. J. (2010). Synthetic biology: applications come of age. Nature Reviews Genetics, 11(5), 367–379. http://doi.org/10.1038/nrg2775
Kobayashi, M., Nagata, K., & Ishihama, A. (1990). Promoter selectivity of Escherichia coli RNA polymerase: effect of base substitutions in the promoter −35 region on promoter strength. Nucleic Acids Research, 18(24), 7367–7372. http://doi.org/10.1093/nar/18.24.7367
Laursen, B. S., Sørensen, H. P., Mortensen, K. K., & Sperling-Petersen, H. U. (2005). Initiation of Protein Synthesis in Bacteria. Microbiology and Molecular Biology Reviews, 69(1), 101–123. http://doi.org/10.1128/MMBR.69.1.101-123.2005
Lisser, S., & Margalit, H. (1993). Compilation of E.coli mRNA promoter sequences. Nucleic Acids Research, 21(7), 1507–1516. http://doi.org/10.1093/nar/21.7.1507
López-Bigas, N., & Ouzounis, C. A. (2004). Genome-wide identification of genes likely to be involved in human genetic disease. Nucleic Acids Research, 32(10), 3108–3114. http://doi.org/10.1093/nar/gkh605
MIT Working Group. (2005). Registry of Standard Biological Parts. Retrieved September 2, 2015, from http://partsregistry.org/Main_Page
Ponnambalam, S., Webster, C., Bingham, A., & Busby, S. (1986). Transcription initiation at the Escherichia coli galactose operon promoters in the absence of the normal -35 region sequences. Journal of Biological Chemistry, 261(34), 16043–16048.
Ross, W., Aiyar, S. E., Salomon, J., & Gourse, R. L. (1998). Escherichia coli Promoters with UP Elements of Different Strengths: Modular Structure of Bacterial Promoters. Journal of Bacteriology, 180(20), 5375–5383.
Tao, H., Guo, D., Zhang, Y., Deng, Z., & Liu, T. (2015). Metabolic engineering of microbes for branched-chain biodiesel production with low-temperature property. Biotechnology for Biofuels, 8, 92. http://doi.org/10.1186/s13068-015-0270-7
Waterman, M. S., Arratia, R., & Galas, D. J. (1984). Pattern recognition in several sequences: Consensus and alignment. Bulletin of Mathematical Biology, 46(4), 515–527. http://doi.org/10.1007/BF02459500
Weber, E., Gruetzner, R., Werner, S., Engler, C., & Marillonnet, S. (2011). Assembly of designer TAL effectors by Golden Gate cloning. PLoS One, 6(5), e19722. http://doi.org/10.1371/journal.pone.0019722
A Meta-Analysis of the Effectiveness of Trauma-Focused Cognitive-Behavioral Therapy and Play Therapy for Child Victims of Abuse
Because of their widespread use, trauma-focused cognitive-behavioral therapy (TF-CBT) and play therapy (PT) were selected as treatments to be included in a meta-analytic study examining their effectiveness in aiding victims of child abuse. Studies analyzing the effectiveness of TF-CBT or PT were found using Academic Search Premier, PsycARTICLES, Psychological and Behavioral Sciences Collection, PsychINFO, Google Scholar, and Digital Dissertation. Ten studies (N = 762) were combined into two separate meta-analyses, one for TF-CBT and one for PT. These were coded, and four general outcome categories were identified: externalizing, internalizing, sexual, and parent report. Results showed an overall effect size of d = .212 for TF-CBT and d = .095 for PT. TF-CBT was more effective than PT on all outcome measures, except for externalizing behavior (PT d = .255 and TF-CBT d = .187). These results indicate that both TF-CBT and PT can be beneficial for victims of child abuse, though TF-CBT usually has more positive outcomes. Future studies should use a wait-list control group and random assignment.
Child maltreatment is a serious societal problem. In spite of this fact, researchers and practitioners do not agree on a single definition for child maltreatment, or even what the various forms of child maltreatment consist of (Barnett, Perrin, & Perrin, 2011; Dubowitz, Papas, Black, & Starr, 2002). Typically, the phrase “child maltreatment” is used to group the various forms of child abuse and neglect into one convenient term, with the most common forms of child abuse labels being physical, emotional/psychological, neglect, and sexual abuse. Each year millions of children are victims of this societal problem (Barnett et al., 2011; Trickett, Negriff, Ji, & Peckins, 2011).
While there are no agreed-upon definitions for each of the types of maltreatment, there are common themes that characterize each. Child physical abuse (CPA) typically involves two standards: (a) the harm standard and (b) the endangerment standard. The harm standard recognizes a child as a victim of CPA if the child has observable injuries that last for at least 48 hours (Barnett et al., 2011). The endangerment standard requires victims to be deemed at substantial risk for injury or endangerment (Barnett et al., 2011). Emotional or psychological abuse damages the child by verbally or emotionally tearing the child down and objectifying the child. It can take many forms, such as spurning, isolating, exploiting, rejecting, denying affection, and corrupting the child (Barnett et al., 2011).
Sexual abuse is very common in America, with one in four females and one in seven males being victims of sexual abuse prior to turning 18 (Misurell, Springer, & Tryon, 2011). Child sexual abuse (CSA) may involve fondling, masturbating, penetrating, vaginal/anal intercourse, taking nude pictures of the child, or even giving pornographic materials to a minor. While we will use this as the definition for CSA, it should be noted that there is no universal definition among experts (Barnett et al., 2011).
Child neglect is the most commonly reported form of child maltreatment (Barnett et al., 2011; Dubowitz et al., 2002). Neglect varies from the other forms of maltreatment because rather than committing an act against a child (as in abuse), an important need of the child is not being met. Dubowitz et al. (2002) identified three fundamental needs that must be met for children: (a) perceived support from mothers, (b) lack of exposure to family conflict, and (c) early affection from mothers. Treatment for child maltreatment is vital because of the many long-term negative effects for many of the victims (e.g., suicidal ideation, sexual acting out, attachment issues, aggression, and many others; Easton, Coohey, O’Leary, Zhang, & Hua, 2011; Hetzel-Riggin, Brausch, & Montgomery, 2007; Nolan et al., 2002; Saywitz, Mannarino, Berliner, & Cohen, 2000; Scott, Burlingame, Starling, Porter, & Lilly, 2003).
Types of Therapy
Millions of children each year are victims of these various forms of child maltreatment. As a result, there have been many forms of treatment developed to help victims of child maltreatment overcome or cope with the negative outcomes of that maltreatment. The two most commonly used types of therapies are play therapy (PT) and cognitive behavioral therapy (CBT; Kaduson, 2011; Misurell et al., 2011; Scott et al., 2003).
Play therapy involves mastering traumatic events, a process known as abreaction. Abreaction requires a reliving and mastering of the traumatic event, resulting in catharsis, defined as a release of affect that brings about relief (Nash & Schaefer, 2011).
Mechanics of Play Therapy
PT tends to be a long therapy, lasting around 30 sessions (Scott et al., 2003). The usual age for a client is three to 12 years, but using PT to help adolescents and adults has been gaining traction in recent years (Nash & Schaefer, 2011). Therapists can practice PT in almost any setting. According to Nash and Schaefer (2011), the two most important aspects of a play space are predictability and consistency. This enables the children to focus on playing instead of exploring a new setting.
There are three main stages in PT: (a) rapport building, (b) working through, and (c) termination. Rapport building takes place during the first sessions. These sessions focus on building an appropriate therapist-client relationship by learning about the child, learning about his or her experiences, having the child learn about the playroom, and, depending on the therapist’s theoretical orientation, teaching the child about the therapy (Nash & Schaefer, 2011). The next phase, working through, is the longest. This is the phase in which most of the therapeutic change occurs. In this phase, play themes become apparent and give clues about the child’s inner world. How these themes are utilized in therapy will depend on the theoretical orientation of the therapist; for example, a Jungian therapist will relate the themes to archetypes and a Freudian therapist will relate them to cathected libidinal energies. The final stage, termination, is for the child and family to recognize the changes, make the changes endemic to their family culture, and organize a path for continued improvements (Nash & Schaefer, 2011).
Types of Play Therapy
PT can be divided into four broad categories based on the theoretical orientation of the therapist: psychodynamic, humanistic, systemic, and emerging models (Gil, 2011; Green, 2011; Kottman, 2011; Levy, 2011; Munns, 2011; Sweeney & Landreth, 2011). Each of these broad categories has many specific manifestations and practices. The different theoretical orientations may impact how therapy is conducted, what therapeutic tools are used, and how the therapist relates to the client (Gil, 2011; Ginsberg, 2011; Green, 2011; Kottman, 2011; Levy, 2011; Munns, 2011; Norton & Norton, 2011; Oaklander, 2011; O’Connor, 2011; Sweeney, 2011; Sweeney & Landreth, 2011; VanFleet, 2011).
It should be noted that many parents are trained to incorporate PT at home. Parents who have received the training to perform PT sessions in their own home have reported an increase in empathy and acceptance toward their children, as well as a reduction in parental stress levels (Nash & Schaefer, 2011). It has been noted that when parents use PT at home, children’s behavior, emotional adjustment, and self-concept improve, while anxiety levels decrease (Nolan et al., 2002; Scott et al., 2003).
Cognitive-Behavioral Therapy (CBT) is the other most common form of psychotherapy used today, and is supported by a wealth of empirical evidence (Arnkoff & Glass, 1992; Grasso, Marquez, Joselow, & Webb, 2011; Saywitz et al., 2000). Because of the widespread use of CBT, many sub-forms of therapy have developed. Trauma-focused CBT (TF-CBT), which we examine in this meta-analysis, is one of the preferred methods of therapy for victims of trauma (Grasso et al., 2011).
CBT typically requires 12-16 sessions divided into three phases. In the first phase, the child and parent undergo gradual exposure to traumatic content, and the discussion becomes increasingly focused on the child’s personal experience in reaction to the stimulus. In the second phase the child begins to develop a detailed account of the traumatic event, called a “trauma narrative,” that functions as a means of therapeutic exposure and helps to facilitate emotional and cognitive processing. The parent meets with the therapist separately until the trauma narrative is complete. If the child is comfortable and the parent ready, the therapist encourages the child to share the narrative with the parent in a conjoined session. In the third phase, treatment focuses on safety skills and future development (Grasso et al., 2011). These sessions are designed to reduce behavioral and emotional problems resulting from the child’s trauma exposure, to empower children and parents with a knowledge base essential for therapy, and to provide a skillset that includes education about trauma and posttraumatic stress, treatment rationale, relaxation techniques, emotion identification and regulation skills, and cognitive coping strategies.
History of CBT
CBT is a blending of cognitive and behavioral therapy techniques used to treat a wide variety of psychopathologies. Its cognitive component was fully developed by the 1950s with Albert Ellis’s work on cognitive theory and the cognitive revolution, while its behavioral roots reach much further back, through behavior modification techniques that have existed for most of recorded history (e.g., putting a spider in alcohol to deter alcoholics from drinking, giving candy to reward positive behaviors, or spanking a child who hits someone) and that were later investigated systematically by Pavlov, Watson, and Skinner (Glass & Arnkoff, 1992).
Trauma-focused CBT (TF-CBT) is one of the most preferred subtypes of CBT for helping victims of trauma; yet many feel that the trauma narrative is very helpful for victims of all forms of abuse (Classen et al., 2011; Misurell et al., 2011). TF-CBT adds a trauma narrative to the therapeutic repertoire of CBT. This narrative helps the victim process the trauma and work through the emotions that arise. Additionally, TF-CBT focuses on the trauma and gears all education to the trauma and to similar traumas (Feather & Ronan, 2006).
To find articles for the meta-analysis, we searched Academic Search Premier, PsycARTICLES, Psychological and Behavioral Sciences Collection, PsychINFO, Google Scholar, and Digital Dissertation for “child,” “abuse,” “treatment,” “outcomes,” “play therapy,” “trauma focused,” “cognitive behavior therapy,” “CBT,” and “PT,” in various combinations using Boolean operators and truncation (e.g., AND, OR, *). Articles included in the meta-analysis needed to (a) be published after 1980, (b) have a maximum participant age of 18 for those undergoing therapy, (c) contain participants who were victims of physical or sexual abuse, (d) be a quantitative article that either reported effect sizes or contained enough data for effect sizes to be calculated, and (e) report on treatment that was either PT or TF-CBT. All articles that met the inclusion criteria were used in the data analysis. We found 31 effect sizes in ten peer-reviewed articles and two doctoral dissertations that met the inclusion criteria; however, we noticed that the doctoral dissertations had samples identical to some of the peer-reviewed articles. Because they were based on the same studies, and in an effort to limit duplicate data, we included only the information from the peer-reviewed articles (Reams, 1987; Zion, 1999).
Table 1 displays some of the important variables that were coded. PT was coded as a zero and TF-CBT was coded as a one. Sexual abuse was coded as a zero and physical abuse was coded as a one.
We performed a meta-analysis of effect sizes to combine the various outcome measures used in the articles into five different outcome measures: global, internalizing, externalizing, sexual, and parent report. We decided on these measures because symptoms most often fall under these general categories according to the assessments the researchers gave participants (Carpentier, Silovsky, & Chaffin, 2006; Cohen, Deblinger, Mannarino, & Steer, 2004; Cohen & Mannarino 1998; Cohen, Mannarino, & Knudsen, 2005; Deblinger, Mannarino, Cohen, Runyon, & Steer, 2011; Feather & Ronan 2006; Grasso, Marquez, Joselow, & Webb 2011; Reams & Friedrich, 1994; Reyes & Asbrand, 2005; Scott, Burlingame et al., 2003). A meta-analysis is a procedure in which a set of studies can be combined into one larger study through statistical procedures (Cooper, 2010; Lipsey & Wilson, 2001). This permits the calculation of an overall effect size using all of the studies’ data, resulting in both a more convenient and more objective synopsis of the body of research. In its classical form, researchers conducting a meta-analysis calculate an average effect size weighted by the sample size of each study. This gives greater influence to studies with larger sample sizes because larger samples tend to produce more generalizable results. A researcher conducting a meta-analysis typically tests the previous studies for heterogeneity in their effect sizes in order to determine if there is a systematic relationship between study characteristics (e.g., age of sample, type of intervention, sample size) and the strength of each study’s effect size.
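The sample-size weighting just described can be sketched in a few lines of Python. The effect sizes and sample sizes below are hypothetical, chosen only to illustrate the calculation, not values drawn from the studies reviewed here:

```python
def weighted_mean_d(effect_sizes, sample_sizes):
    """Sample-size-weighted average effect size, as in a classical meta-analysis."""
    total_n = sum(sample_sizes)
    return sum(d * n for d, n in zip(effect_sizes, sample_sizes)) / total_n

# Hypothetical studies: Cohen's d paired with each study's sample size
ds = [0.30, 0.15, 0.25]
ns = [40, 120, 60]

print(round(weighted_mean_d(ds, ns), 3))  # 0.205 -- the large-n study pulls the mean down
```

Note how the study with n = 120 receives three times the influence of the study with n = 40, which is exactly the sense in which larger samples dominate the pooled estimate.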
When there was a choice between pre- and post-test comparisons and another therapy as the comparison group, the other-therapy comparison group was included in the meta-analysis rather than the pre- and post-test comparisons. When several outcome measures in the same study ascertained the same type of outcome, their effect sizes were averaged into one effect size for that measure. For example, three effect sizes were calculated from data provided by Reyes and Asbrand (2005); those effect sizes were then averaged into a single sexual measure. This process was applied to all overlapping measures in each study prior to performing any calculations in the meta-analysis, so that no study would have a disproportionate weight in the final analysis. Separate meta-analyses were conducted for PT and TF-CBT because of their theoretical and practical differences.
Ten articles were found to meet the inclusion criteria and therefore were included in the meta-analysis. These articles consisted of three articles about PT, six about TF-CBT, and one that contained both. Authors of these studies reported 31 effect sizes: 12 effect sizes were for PT outcomes and 23 effect sizes were for TF-CBT outcomes. There were 11 effect sizes for internalizing outcomes (four PT and seven TF-CBT), 11 for externalizing outcomes (four PT and seven TF-CBT), seven for sexual outcomes (two PT and five TF-CBT), and six for parent-reported measures (two PT and four TF-CBT). Table 2 displays the average effect sizes.
We found that TF-CBT had a global average of d = .212 (95% CI = [0.175, .250]), an internalizing average of d = .229 (95% CI = [0.152, .306]), an externalizing average of d = .187 (95% CI = [0.118, .257]), a sexual average of d = .162 (95% CI = [0.272, .442]), and a parent report average of d = .357 (95% CI = [0.272, .442]). PT had a global average of d = .095 (95% CI = [0.032, .157]), an internalizing average of d = .096 (95% CI = [-0.013, .206]), an externalizing average of d = .255 (95% CI = [0.144, .366]), a sexual average of d = .042 (95% CI = [-0.098, .182]), and a parent report average of d = -0.15 (95% CI = [-0.304, .003]). These average effect sizes are reported in Table 3. The table also shows that seven of the ten overall effect sizes were statistically significant (p < .05). All three effect sizes that were not statistically significant were for PT outcomes.
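The significance pattern can be read directly off the confidence intervals: an effect is statistically significant at the .05 level exactly when its 95% CI excludes zero. A minimal Python check against three of the intervals reported above:

```python
def significant_from_ci(lower, upper):
    """True when a 95% CI excludes zero, i.e., the effect is significant at p < .05."""
    return lower > 0 or upper < 0

print(significant_from_ci(0.175, 0.250))   # True  (TF-CBT global)
print(significant_from_ci(-0.013, 0.206))  # False (PT internalizing)
print(significant_from_ci(-0.304, 0.003))  # False (PT parent report)
```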
Analysis of Homogeneity
In accordance with standard practice (Cooper, 2010; Lipsey & Wilson, 2001), we performed a test of homogeneity of effect sizes. The effect sizes for the TF-CBT analysis were all homogeneous, with Qglobal = 11.173 (df = 22), Qinternalizing = 1.01 (df = 6, p = .985), Qexternalizing = 2.476 (df = 6, p = .871), Qsexual = .362 (df = 4, p = .959), and Qparent = 3.746 (df = 3, p = .290). This shows that all of the effect sizes are statistically equal; we therefore deemed it unnecessary to examine whether any study characteristics had an impact on the effect sizes. Figures 1 and 2 provide a visual representation of this homogeneity by displaying the effect sizes and CIs for all outcome measures for both PT and TF-CBT.
The PT results were more heterogeneous, with Qglobal = 38.789 (df = 11, p < .001), Qinternalizing = 6.664 (df = 3, p = .083), Qexternalizing = 7.903 (df = 3, p = .048), Qsexual = 1.23 (df = 1, p = .267), and Qparent = 3.505 (df = 1, p = .061). These results indicate that the effect sizes are not statistically similar, but this is due entirely to the Carpentier et al. (2006) study. Carpentier and colleagues compared PT to TF-CBT and found that TF-CBT was more effective in helping the participants. This caused the effect sizes for PT to be negative, skewing the distribution of effect sizes in the PT results. When the Carpentier et al. (2006) article was eliminated from the results, all of the PT effect sizes were homogeneous. Because all heterogeneity was due to a single article, we did not find it necessary to conduct any analysis of the causes of the heterogeneity. Supplemental figures that display the effect sizes for each of the outcome variables for both TF-CBT and PT are available from the journal’s online archive.
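For readers unfamiliar with the Q statistic used here, Cochran's Q can be computed as below. The effect sizes and weights are hypothetical, chosen only to illustrate the computation, and Q is compared with a chi-square critical value rather than an exact p-value:

```python
def cochran_q(effect_sizes, weights):
    """Cochran's Q: sum of w_i * (d_i - d_bar)^2, where d_bar is the weighted
    mean effect size. Under homogeneity, Q ~ chi-square with k - 1 df."""
    d_bar = sum(w * d for d, w in zip(effect_sizes, weights)) / sum(weights)
    return sum(w * (d - d_bar) ** 2 for d, w in zip(effect_sizes, weights))

# Hypothetical effect sizes with sample-size weights -- illustrative only
ds = [0.10, 0.25, 0.20, 0.15]
ws = [50, 80, 60, 70]

q = cochran_q(ds, ws)
df = len(ds) - 1
# 7.815 is the chi-square critical value for df = 3 at alpha = .05;
# a Q below it means the null hypothesis of homogeneity is retained.
print(round(q, 3), q < 7.815)
```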
In this study we used meta-analytic techniques to calculate and analyze the effect sizes of studies pertaining to the use of TF-CBT and PT as treatments for child victims of abuse. We found that participants in both PT and TF-CBT groups had more positive outcomes at the end of therapy than the control or comparison groups (e.g., other-therapy comparisons, pretest/posttest). Additionally, we found that those who went through TF-CBT treatment had more improvement across all measures except externalizing outcomes than those who went through a PT treatment. This is supported by Carpentier et al. (2006), who showed that TF-CBT was more effective than PT, producing more positive outcomes for participants.
We were not surprised that clients in PT treatments showed more improvement in externalizing outcomes than clients in TF-CBT treatments. This is because PT relies on play as the medium for what is addressed in therapy (Nash & Schaefer, 2011). External behaviors may be more accessible to the therapist conducting PT because the PT therapist can observe maladaptive and abnormal behaviors in a play setting. For example, difficulty sharing, violent behaviors, or withdrawal may manifest as a child plays and may be easier for a therapist to target. Another factor that may contribute to this improvement is that the play therapist sets limits on the child in response to externalizing behaviors. According to Nash and Schaefer (2011), these behaviors include unsafe behaviors, physical aggression, and socially unacceptable behaviors, all of which were classified as externalizing behaviors in this meta-analysis.
Homogeneity of Effect Sizes
A surprising finding was the homogeneity of the effect sizes, especially because in many research studies pretest/posttest designs tend to show larger effect sizes than studies with control groups (Kosters, Burlingame, Nachtigall, & Strauss, 2006). The null hypothesis of homogeneous effect sizes was retained for the TF-CBT studies, indicating that the differences among effect sizes could be due solely to sampling error. There was heterogeneity among the effect sizes for PT, but this was due to a single study (Carpentier et al., 2006). Because heterogeneity was largely absent, it was not necessary to conduct analyses of potential moderator variables. This finding is unusual because heterogeneity among effect sizes is often found in meta-analyses (Cooper, 2010).
The homogeneous effect sizes in this meta-analysis have three possible explanations. First, sampling error alone may explain the variation among the effect sizes, in which case both therapies are effective across different age groups, populations, genders, and ethnicities. Second, homogeneity may stem from our decision to average effect sizes within studies when they measured the same type of outcome (e.g., externalizing outcomes, sexual outcomes). This may reduce the variability of effect sizes and make the null hypothesis of homogeneity harder to reject. Third, a methodological artifact may arise from how the original studies were designed. Most, if not all, of the studies in this meta-analysis were designed and conducted by researchers who are practitioners or proponents of the therapies under investigation (Cohen & Mannarino, 2008). This may lead the original authors to inadvertently design studies that are simply more likely to show benefits of TF-CBT and PT. This would be a form of an expectancy effect (Neuman, 2012).
As with all meta-analyses, the validity of this study is only as strong as the studies included for analysis; “we may not like all of the ingredients that go into the sausage, but the [meta-analytic] chef can only work with the ingredients provided by the literature” (Thompson & Vacha-Haase, 2000). The total number of articles included for meta-analysis is small. This is due to the focused nature of this study. Many articles are available about PT, TF-CBT, and child abuse victims, but there are few empirical articles available about the effectiveness of these therapies with child survivors of abuse. The small number of studies was further reduced because TF-CBT and PT were analyzed in separate meta-analyses.
Because studies performed with a wait-list control group were completely absent from the literature, we were missing an ingredient in this meta-analytic “sausage.” The original authors of the articles in this meta-analysis always compared the TF-CBT or PT groups to either the same clients’ baseline scores of the outcomes or another group in a different therapy. This precludes the possibility of studying the absolute effectiveness of either therapy. Because sometimes, as the old proverb says, “time heals all wounds,” it is possible that the outcomes observed in this meta-analysis are not any more positive than what would be observed if the clients were on a wait-list. Conducting a true experiment (an experiment that requires a treatment and random assignment) with a pre- and posttest and a wait-list control group would provide the most rigorous results. However, not providing therapy to those who are suffering may be considered unethical for many therapists because early treatment is associated with greater positive outcomes for the client (Hetzel-Riggin et al., 2007).
Child victims of any form of trauma deserve the best treatments available, but the research on these therapies is not rigorous enough. TF-CBT and PT are the most widely used therapies in aiding this group of clients (Kaduson, 2011; Misurell et al., 2011; Scott et al., 2003), but we were surprised by the lack of empirically based studies supporting treatments for child victims of abuse. Additionally, few studies reported the average age at which abuse occurred, the length of abuse, the severity of abuse, or even the relationship to the abuser. All of these variables may influence the client’s recovery. If researchers reported these variables, we might be able to examine the relationship between abuse length and treatment success, or whether abuse length impacted the effectiveness of a treatment in aiding the victim’s recovery. Logically, one would hypothesize that the more severe the abuse, the more severe the damage to the victim, and therefore the greater the impact on the treatment path. But if this information is not reported, such relationships cannot be discovered.
Based on the results of this meta-analysis, the idea that neither TF-CBT nor PT, when administered correctly, harms clients is strongly supported. Both forms of therapy produce more favorable results than the comparison groups. Because PT seems to have more effect on externalizing problems, it may be of more use when dealing with children who have more severe externalizing behavioral problems. However, if the child is having additional problems, then TF-CBT may be more effective. This is important information for those who regularly work with abused children.
We recommend several changes to how research on this sensitive population is conducted and reported in the future. First, researchers should collect and report data about child clients’ trauma etiology and any past history of abuse, including length and relationship to the abuser. Second, when ethically possible, researchers should conduct true experiments with a wait-list control group. Finally, given the relatively small combined sample size in this meta-analysis (n = 762), we urge other researchers to consider replication studies (Makel & Plucker, 2014; Warne, 2014). This allows researchers to mitigate any methodological artifacts in previous studies and show the impact of treatment across several groups.
This research was funded by a grant awarded by Utah Valley University’s Student Undergraduate Research Fellowship (SURF). This article is based on the first author’s senior honors thesis. We acknowledge the feedback and support of Matthew Draper, Steven Clark, and Kate McPherson.
References marked with an asterisk indicate studies included in the meta-analysis.
Arnkoff, D. B., & Glass, C. R. (1992). Cognitive therapy and psychotherapy integration. In D. K. Freedheim (Ed.), History of psychotherapy (pp. 657-694). Washington, DC: American Psychological Association.
Barnett, O. W., Miller-Perrin, C. L., & Perrin, R. D. (2011). Family violence across the lifespan: An introduction (3rd ed.). Thousand Oaks, CA: Sage.
*Carpentier, M. Y., Silovsky, J. F., & Chaffin, M. (2006). Randomized trial of treatment for children with sexual behavior problems: Ten-year follow-up. Journal of Consulting and Clinical Psychology, 74(3), 482-488.
Classen, C. C., Cavanaugh, C. E., Kaupp, J. W., Aggarwal, R., Palesh, O. G., Koopman, C., Kraemer, H. C., & Spiegel, D. (2011). A comparison of trauma-focused and present-focused group therapy for survivors of childhood sexual abuse: A randomized controlled trial. Psychological Trauma: Theory, Research, Practice, and Policy, 3(1), 84-93.
*Cohen, J. A., Deblinger, E., Mannarino, A. P., & Steer, R. A. (2004). A multisite, randomized controlled trial for children with sexual abuse-related PTSD symptoms. Journal of the American Academy of Child & Adolescent Psychiatry, 43(4), 393-402.
Cohen, J. A., & Mannarino, A. P. (2008). Trauma-focused cognitive behavioral therapy for children and parents. Child and Adolescent Mental Health, 13(4), 158-162.
*Cohen, J.A., & Mannarino, A. P. (1998). Interventions for sexually abused children: Initial treatment outcome findings. Child Maltreatment, 3(1), 17-26.
*Cohen, J. A., Mannarino, A. P., & Knudsen, K. (2005). Treating sexually abused children: 1 year follow-up controlled trial. Child Abuse & Neglect, 29(2), 135-145.
Cooper, H. (2010). Research synthesis and meta-analysis. Thousand Oaks, CA: Sage.
*Deblinger, E., Mannarino, A. P., Cohen, J. A., Runyon, M. K., & Steer, R. A. (2011). Trauma-focused cognitive behavioral therapy for children: Impact of trauma narrative and treatment length. Depression and Anxiety, 28(1), 67-75.
Dubowitz, H., Newton, R. R., Litrownik, A. J., Lewis, T., Briggs, E. C., Thompson, R., English, D., Lee, L., & Feerick, M. M. (2005). Examination of a conceptual model of child neglect. Child Maltreatment, 10(2), 173-189.
Dubowitz, H., Papas, M. A., Black, M. M., & Starr, R. H. (2002). Child neglect: Outcomes in high-risk urban preschoolers. Pediatrics, 109(6), 1100-1107.
Easton, S. D., Coohey, C., O’Leary, P., Zhang, Y., & Hua, L. (2011). The effect of sexual abuse on psychosexual functioning during adulthood. Journal of Family Violence, 26(1), 41-50.
*Feather, J. S., & Ronan, K. R. (2006). Trauma-focused cognitive behavioural therapy for abused children with posttraumatic stress disorder: A pilot study. New Zealand Journal of Psychology, 35(3), 132-145.
Gil, E. (2011). Family play therapy: Igniting creative energy, valuing metaphors, and making changes from the inside out. In C. Schaefer (Ed.), Foundations of play therapy (pp. 207-225). Hoboken, NJ: John Wiley & Sons.
Ginsberg, B. G. (2011). Congruence in nondirective play and filial therapy: Response to Ryan and Courtney. International Journal of Play Therapy, 20(3), 109-123.
Glass, C. R., & Arnkoff, D. B. (1992). Behavior therapy. In D. K. Freedheim (Ed.), History of psychotherapy (pp. 587-656). Washington, DC: American Psychological Association.
*Grasso, D. J., Marquez, Y., Joselow, B., & Webb, C. (2011). Evidence-based case study: Trauma-focused cognitive behavioral therapy of a child with posttraumatic stress disorder. Psychotherapy, 48(2), 188-197.
Green, E. J. (2008). Reenvisioning Jungian analytical play therapy with child sexual assault survivors. International Journal of Play Therapy, 17(2), 102-121.
Green, E. J. (2011). Jungian analytical play therapy. In C. Schaefer (Ed.), Foundations of play therapy (pp. 61-85). Hoboken, NJ: John Wiley & Sons.
Hetzel-Riggin, M. D., Brausch, A. M., & Montgomery, B. S. (2007). A meta-analytic investigation of therapy modality outcomes for sexually abused children and adolescents: An exploratory study. Child Abuse & Neglect, 31(2), 125-141.
Kaduson, H. G. (2011). Release play therapy. In C. Schaefer (Ed.), Foundations of play therapy (pp. 105-126). Hoboken, NJ: John Wiley & Sons.
Kosters, M., Burlingame, G. M., Nachtigall, C., & Strauss, B. (2006). A meta-analytic review of the effectiveness of inpatient group psychotherapy. Group Dynamics: Theory, Research, and Practice, 10(2), 146-163.
Levy, A. J. (2011). Psychoanalytic approaches to play therapy. In C. Schaefer (Ed.), Foundations of play therapy (pp. 43-59). Hoboken, NJ: John Wiley & Sons.
Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. Thousand Oaks, CA: Sage.
Makel, M. C., & Plucker, J. A. (2014). Facts are more important than novelty: Replication in the education sciences. Educational Researcher, 43(6), 304-316.
Misurell, J. R., Springer, C., & Tryon, W. W. (2011). Game-based cognitive-behavioral therapy (GB-CBT) group program for children who have experienced sexual abuse: A preliminary investigation. Journal of Child Sexual Abuse, 20(1), 14-36.
Munns, E. (2011). Theraplay: Attachment-enhancing play therapy. In C. Schaefer (Ed.), Foundations of play therapy (pp. 275-296). Hoboken, NJ: John Wiley & Sons.
Nash, J. B., & Schaefer, C. E. (2011). Play therapy: Basic concepts and practices. In C. Schaefer (Ed.), Foundations of play therapy (pp. 3-13). Hoboken, NJ: John Wiley & Sons.
Neuman, W. L. (2012). Basics of social research: Qualitative and quantitative approaches (3rd ed.). Upper Saddle River, NJ: Pearson.
Nolan, M., Carr, A., Fitzpatrick, C., O’Flaherty, A., Keary, K., Turner, R., O’Shea, D., Smyth, P., & Tobin, G. (2002). A comparison of two programmes for victims of child sexual abuse: A treatment outcome study. Child Abuse Review, 11(2), 103-123.
Norton, C. C., & Norton, B. E. (2011). Experiential play therapy. In C. Schaefer (Ed.), Foundations of play therapy (pp. 187-203). Hoboken, NJ: John Wiley & Sons.
Oaklander, V. (2011). Gestalt play therapy. In C. Schaefer (Ed.), Foundations of play therapy (pp. 171-186). Hoboken, NJ: John Wiley & Sons.
O’Connor, K. O. (2011). Ecosystemic play therapy. In C. Schaefer (Ed.), Foundations of play therapy (pp. 253-272). Hoboken, NJ: John Wiley & Sons.
Reams, R. (1987). The efficacy of time-limited play therapy with maltreated preschoolers (Unpublished doctoral dissertation). University of Washington, Seattle, WA.
*Reams, R., & Friedrich, W. (1994). The efficacy of time-limited play therapy with maltreated preschoolers. Journal of Clinical Psychology, 50(6), 889-899.
*Reyes, C. J., & Asbrand, J. P. (2005). A longitudinal study assessing trauma symptoms in sexually abused children engaged in play therapy. International Journal of Play Therapy, 14(2), 25-47.
Saywitz, K. J., Mannarino, A. P., Berliner, L., & Cohen, J. A. (2000). Treatment for sexually abused children and adolescents. American Psychologist, 55(9), 1040-1049.
*Scott, T. A., Burlingame, G., Starling, M., Porter, C., & Lilly, J. P. (2003). Effects of individual client-centered play therapy on sexually abused children’s mood, self-concept, and social competence. International Journal of Play Therapy, 12(1), 7-30.
Sweeney, D. S. (2011). Group play therapy. In C. Schaefer (Ed.), Foundations of play therapy (pp. 227-252). Hoboken, NJ: John Wiley & Sons.
Sweeney, D. S., & Landreth, G. L. (2011). Child-centered play therapy. In C. Schaefer (Ed.), Foundations of play therapy (pp. 129-152). Hoboken, NJ: John Wiley & Sons.
Swenson, C. C., Schaeffer, C. M., Henggeler, S. W., Faldowski, R., & Mayhew, A. M. (2010). Multisystemic therapy for child abuse and neglect: A randomized effectiveness trial. Journal of Family Psychology, 24(4), 497-507.
Tighe, A., Pistrang, N., Baruch, G., Casdagli, L., & Butler, S. (2012). Multisystemic therapy for young offenders: Families’ experiences of therapeutic processes and outcomes. Journal of Family Psychology, 26(2), 187-197.
Thompson, B., & Vacha-Haase, T. (2000). Psychometrics is datametrics: The test is not reliable. Educational and Psychological Measurement, 60(2), 174-195.
Trickett, P. K., Negriff, S., Ji, J., & Peckins, M. (2011). Child maltreatment and adolescent development. Journal of Research on Adolescence, 21(1), 3-20.
United States Department of Health and Human Services. (2006). Child maltreatment 2004: Reports from the states to the National Child Abuse and Neglect Data Systems—National statistics on child abuse and neglect.
VanFleet, R. (2011). Filial therapy: Strengthening family relationships with the power of play. In C. Schaefer (Ed.), Foundations of play therapy (pp. 153-169). Hoboken, NJ: John Wiley & Sons.
Warne, R. T. (2014). Two additional suggested reforms to encourage replication studies in educational research. Educational Researcher, 43(9), 465.
Zion, T. (1999). Effects of individual client-centered play therapy on sexually-abused children’s mood, self-concept, and social competence (Unpublished doctoral dissertation). Brigham Young University, Provo, UT.
Bohr Revisited: Model and Spectral Lines of Helium
Quantum and atomic physics can often be an abstract and difficult subject for students to learn. Derived here is a simple model for helium and two-electron systems that may make some concepts, such as screening, easier for students to understand. We extend Bohr’s derivation, assuming a natural electron configuration, to explain some of the spectral lines of helium (He). We assume that the two electrons are 180° apart in a circular orbit due to their mutual repulsion. The spectral lines calculated, namely the specific wavelengths of light emitted by elements when they are energized, correspond to an assumed possible scenario where the two electrons make orbital transitions in unison. In this model we calculate an energy prediction for the neutral (un-ionized) helium atom. We then predict the first and second ionization energies. Error between experimental and predicted ionization energies is about five percent, indicating that our model is reasonably accurate within the aforementioned parameters.
In 1913, Niels Bohr introduced his Nobel Prize-winning quantized version of the Rutherford model of hydrogen. The Bohr model supposed that electrons orbit the nucleus in rigid orbitals or paths. As energy was added to the atom, the electrons made transitions between orbitals. Energy would be released from the atom as light, and the wavelength of that light differed depending on the orbitals involved in the electron’s quantum transition. Bohr showed that this model gave quantitatively correct results by deriving the Rydberg Formula, which gives the wavelength of light emitted from the atom, again depending on the orbital transitions made by the electron (Bohr, 1913). Though the Bohr model of the atom accurately predicts the quantum nature of single-electron atoms or ions, it is of little use in explaining more complex multi-electron atoms. Consequently, the Bohr model was superseded by modern quantum mechanics, where electrons in atoms are described using complicated wave functions. Here, we re-examine the helium (He) atom through the lens of the Bohr model in a way that may help students more easily understand multi-electron atoms and ions without resorting to complex wave functions. Effects such as screening, the reduction of attraction to the nucleus via mutual repulsion, can then be simply accounted for when the electrons are considered as particles. Subsequently, we explore the model’s use in conjunction with a scenario in which the two electrons transition between orbitals in unison to give roughly quantitative predictions.
Helium Plus (He+)
For hydrogen-like ions, Bohr demonstrated that the Rydberg Formula for the wavelength of emitted light, namely

1/λ = R_H Z² (1/n₁² − 1/n₂²),     (1)

where R_H is the Rydberg constant, Z is the atomic number, λ is the wavelength of the light emitted, and n₁ and n₂ are the principal quantum numbers of the lower and upper orbitals, could be explained by the energy differences between the quantized electron energies. Since the Bohr model applies to hydrogen-like atoms, i.e., single-electron atoms, for the case of He+, Z = 2 and R_H Z² = 4.38949264 × 10⁷ m⁻¹. We can use this equation to calculate the ionization potential of He+, the amount of energy required to remove the remaining electron from orbit, by combining [Eq. (1)] with the equation for the energy of a photon,

E = hc/λ,     (2)

where h is Planck’s constant and c is the speed of light. We will use the case where n₂ = ∞, with n₂ being the orbital infinitely far from the nucleus and its energy being approximately equal to zero, in order to ionize the electron completely. Substituting [Eq. (1)] into [Eq. (2)] under this condition gives

E = hcR_H Z²/n₁².     (3)

If the ion is originally in its ground state, n₁ = 1, the result is well known and agrees reasonably well with experimental measurements, with an error of 0.02% (Table 3; Charlotte E. M., 1949).
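As a quick numerical check, the ionization formula E = hcR_H Z²/n² can be evaluated for the He+ ground state. The Python sketch below uses CODATA constants and is an illustration, not part of the original derivation:

```python
# Physical constants (CODATA values, SI units)
h = 6.62607015e-34      # Planck's constant, J s
c = 2.99792458e8        # speed of light, m/s
R_H = 1.0973731568e7    # Rydberg constant, 1/m
eV = 1.602176634e-19    # joules per electron volt

Z = 2   # helium nucleus
n = 1   # ground state
# Ionization energy: E = h c R_H Z^2 / n^2, converted to eV
E_ion = h * c * R_H * Z**2 / n**2 / eV
print(round(E_ion, 2))  # ~54.42 eV, close to the measured 54.418 eV
```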
Orbital Energy of Helium
The purpose of this brief note is to show what happens if one assumes the two electrons in neutral helium are diametrically opposite in a single circular orbit. First, then, due to screening, the net centripetal force on each of the two electrons is equal to the attraction force, 2K_c e²/r², minus the repulsion force, K_c e²/(2r)², where K_c is Coulomb’s constant, e is the charge on an electron, r is the radius of the circular orbit, the factor of two in the attraction force (2K_c e²/r²) is due to the two protons in the nucleus, and the factor of two in the repulsion force (K_c e²/(2r)²) is due to the fact that the electrons are separated by the length of two radii. This gives the net electric force,

F_E = 2K_c e²/r² − K_c e²/(2r)² = (7/4) K_c e²/r².     (4)
Kinetic energy then derives from the centripetal force,

F_c = mv²/r,     (5)

with m being the mass of an electron and v being its velocity. For each electron, the kinetic energy may be expressed as KE = ½mv² = ½F_c r. The electric force [Eq. (4)] can then be substituted for F_c in [Eq. (5)] to give the kinetic energy as

KE = (7/8) K_c e²/r     (6)
for each electron. The net kinetic energy of the two-electron system must then be 2 × (7/8) K_c e²/r = (7/4) K_c e²/r. Potential energy likewise results from the two attractive forces and the one repulsive force, yielding U = −4K_c e²/r + K_c e²/(2r) = −(7/2) K_c e²/r. The total energy of the atom is the sum of the potential and kinetic energies,

E = (7/4) K_c e²/r − (7/2) K_c e²/r = −(7/4) K_c e²/r.     (7)

The next step is to derive the radius of the electrons’ orbit using Bohr’s angular momentum postulate with the principal quantum number n, namely mvr = nħ. We solve first for v, and then substitute it into mv²/r = (7/4) K_c e²/r², which is the combination of [Eq. (4)] and [Eq. (5)], to yield the quantized relation. The radius becomes

r = 4n²ħ²/(7mK_c e²),     (8)

and substituting [Eq. (8)] back into [Eq. (7)] gives

E_n = −(49/16) mK_c²e⁴/(n²ħ²),     (9)

as the quantized, two-electron energy, assuming the two electrons circulate together in a diametrically opposite manner.
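The ground-state predictions of [Eq. (8)] and [Eq. (9)] can be checked numerically. The Python sketch below evaluates both expressions at n = 1 using CODATA constants; it is an illustration, not part of the paper’s analysis:

```python
# Physical constants (CODATA values, SI units)
m_e = 9.1093837015e-31   # electron mass, kg
K_c = 8.9875517923e9     # Coulomb constant, N m^2 / C^2
e = 1.602176634e-19      # elementary charge, C
hbar = 1.054571817e-34   # reduced Planck constant, J s
eV = 1.602176634e-19     # joules per electron volt

# Quantized two-electron energy at n = 1: E = -(49/16) m K_c^2 e^4 / hbar^2
E1 = -(49 / 16) * m_e * K_c**2 * e**4 / hbar**2 / eV
# Orbital radius at n = 1: r = 4 hbar^2 / (7 m K_c e^2)
r1 = 4 * hbar**2 / (7 * m_e * K_c * e**2)

print(round(E1, 1))         # ~ -83.3 eV (measured total: about -79.0 eV, ~5% error)
print(round(r1 * 1e12, 2))  # ~ 30.24 pm (accepted value: ~31 pm)
```

These two numbers are the five-percent energy agreement mentioned in the abstract and the 30.24 pm radius quoted in the Results.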
Spectral Equation for Helium’s S Orbitals
To find the wavelength λ of light emitted when the two electrons make a simultaneous transition, we can use [Eq. (2)] and [Eq. (9)], along with the difference in orbital energies, ΔE = E_n₂ − E_n₁, which, when expanded, comes to the form

hc/λ = (49/16)(mK_c²e⁴/ħ²)(1/n₁² − 1/n₂²),

and then

1/λ = (49/8) R_H (1/n₁² − 1/n₂²).     (10)

The constant here, (49/8)R_H = 49mK_c²e⁴/(16ħ²hc), has a numerical value of approximately 6.72 × 10⁷ m⁻¹ in SI units. As shown in Table 1, [Eq. (10)] gives approximately 5.5% error when compared to experimental values. However, one can use the form described in [Eq. (10)] to fit a series of spectral lines that could appear in helium’s spectrum. This fit can be used to find a Rydberg-type constant, as in [Eq. (1)], that matches the experimental data more accurately, as demonstrated in Figure 1. We can think of this fit as correcting for an adjustment to the Coulombic forces involved, because the forces may vary slightly over time.
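Equation (10) can be evaluated directly. In the illustrative Python sketch below, the Rydberg-type constant is written as (49/8)R_H; the 5 → 4 transition is chosen only to show the formula at work and is not taken from Table 1:

```python
R_H = 1.0973731568e7  # Rydberg constant, 1/m

# Rydberg-type constant of Eq. (10): 1/lambda = (49/8) R_H (1/n1^2 - 1/n2^2)
C = (49 / 8) * R_H  # ~6.72e7 1/m

def wavelength_nm(n1, n2):
    """Wavelength (nm) emitted when both electrons drop from n2 to n1 in unison."""
    return 1e9 / (C * (1 / n1**2 - 1 / n2**2))

print(round(wavelength_nm(4, 5), 1))  # ~661.2 nm for a 5 -> 4 simultaneous transition
```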
Ionization Energy of Helium
In this section we derive the equations that predict the ionization energy of neutral helium to helium plus (He+). The ground-state energy from [Eq. (9)] is E₁ = −(49/16) mK_c²e⁴/ħ². We can calculate the energy required to ionize the first electron by starting with the kinetic energy of one electron [Eq. (6)]: KE = (7/8) K_c e²/r. The potential energy for one electron in the system can be given as U = −2K_c e²/r + K_c e²/(4r) = −(7/4) K_c e²/r, where half of the electron-electron repulsion energy is assigned to each electron. The total energy of one electron is then E = KE + U = −(7/8) K_c e²/r. The radius from [Eq. (8)] is then substituted for the radius of the electron’s circular orbit to give

E = −(49/32) mK_c²e⁴/(n²ħ²).
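One way to extract a first ionization energy from this model is to subtract the model’s neutral-helium ground-state energy [Eq. (9)] from the Bohr energy of He+. The Python sketch below follows that reading (an illustrative interpretation, not a calculation taken from the paper) and shows the overestimate relative to the measured 24.59 eV, consistent with the significant error noted in the Discussion:

```python
# Physical constants (CODATA values, SI units)
m_e = 9.1093837015e-31   # electron mass, kg
K_c = 8.9875517923e9     # Coulomb constant, N m^2 / C^2
e = 1.602176634e-19      # elementary charge, C
hbar = 1.054571817e-34   # reduced Planck constant, J s
eV = 1.602176634e-19     # joules per electron volt

base = m_e * K_c**2 * e**4 / hbar**2 / eV  # m K_c^2 e^4 / hbar^2, ~27.211 eV

E_He = -(49 / 16) * base        # model's neutral helium ground state, ~ -83.3 eV
E_He_plus = -(2**2 / 2) * base  # Bohr ground state of He+, ~ -54.4 eV

# Energy to remove the first electron: E(He+) - E(He)
first_ionization = E_He_plus - E_He
print(round(first_ionization, 2))  # ~28.91 eV vs. the measured 24.59 eV
```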
We use [Eq. (10)], which follows this Bohr-like model’s assumptions, to calculate a series of lines in the visible portion of helium’s emission spectrum. The series results from various two-electron simultaneous transitions to the state with principal quantum number n = 5. Percent error is calculated based on the experimental lines that these could represent, shown in Table 1 (Richard & Peter, 2000). We then determine an effective constant consistent with experimental results, which may be thought of as an adjustment for electron correlation and screening (Figure 1). Subsequently, we show the extent of agreement between the experimental data and [Eq. (10)] using the modified constant derived from Figure 1 (Table 2). The radius of helium in its ground state, i.e., the lowest attainable energy and orbit, calculated from [Eq. (8)] to be 30.24 pm, agrees with the accepted experimental value of 31 pm to within a reasonable error of about two percent (Charlotte E. M., 1949).
In this Bohr-like model of the atom, the two electrons in helium are assumed to be 180° from each other and to make quantum transitions concurrently. Though this quasi-classical model cannot explain all of the spectral lines, it does present one possibility for a quantum transition scenario of the S orbitals, i.e., the spherical electron orbitals in an atom. Another possible transition scenario is a model of excited helium in which one electron is in the ground state and the other is excited; some shielding then takes effect, changing the energy emitted as the excited electron transitions to a lower state. One of the main reasons the Bohr and Bohr-like models were abandoned was the discrepancy between experimental data and theoretical predictions. The predicted ionization energy of helium to He+ has a significant percent error when compared to measured results (Charlotte E. M., 1949). This may be because the Bohr model does not account for the quantum spin of the electrons. Deeney and O’Leary (2012) discuss in detail how, in the modern theory, electrons with aligned spins, i.e., the triplet state, would have the lowest attainable energy. However, the Pauli Exclusion Principle forbids this possibility, resulting in a state of higher energy with anti-parallel electrons, which in turn yields lower ionization energies, as shown in Table 3. Although the Bohr model has since been superseded by the Schrödinger theory, it is hoped that this modified model can help students understand and conceptualize the quantum effects of screening in multi-electron atoms and ions.
The author would like to thank his mentors, Dennis Symalla and Dr. William Schwalm, for their indispensable ideas and support.
Bohr, N. (1913). On the Constitution of Atoms and Molecules. Philosophical Magazine, 26, 1.
Charlotte E. M. (1949). Atomic Energy Levels Vol. 1. Circular of the National Bureau of Standards.
Deeney, F. A., & O’Leary, J. P. (2012). The Effects of the Pauli Exclusion Principle in Determining the Ionization Energies of the Helium Atom and Helium-like Ions. European Journal of Physics, 33, 667-675.
Gasiorowicz, S. (2003). Quantum Physics (3rd ed.). New York, NY: Wiley.
Richard, P., & Peter, L. (2000). Optical Emission Lines of the Elements. England: Wiley & Sons Ltd.
Zettili, N. (2009). Quantum Mechanics: Concepts and Applications (2nd ed.). New York, NY: Wiley.
Gene Therapy: A Toolkit for Targeting Cancer
Cancer causes one in seven deaths worldwide, making it one of the most important issues in the world of biotechnology today. Current cancer therapies, including chemotherapy and radiotherapy, have severe side effects and often prove ineffective at completely eradicating malignant cells. Therefore, a more selective method of targeting tumour cells must be designed. Gene therapy holds great potential to selectively target cancer cells, allowing the treatment to effectively destroy the cancer while leaving healthy tissues intact. In order to develop a gene therapy treatment, two main obstacles must be overcome: a therapeutic agent must be developed to facilitate genetic changes, and a delivery method must be optimized to insert the therapeutic agent into target cells. Recent advancements in both the design of the therapeutic agent and the delivery method allow changes in both the genome and in gene expression to be performed in the target cells with a high degree of accuracy and efficiency. This review highlights several evolving technologies currently being developed for gene therapy, as well as strategies that could be employed using these technologies to treat cancer. Although not currently in widespread use, gene therapy is extremely promising as a treatment for cancer.
Cancer is estimated to cause about one in seven deaths worldwide (American Cancer Society, 2016). This disease is characterized by the transformation of healthy tissue into malignant and invasive tumour tissue that rapidly divides and takes up resources needed by other cells (Hanahan & Weinberg, 2011). Although effective treatments for some forms of cancer are available, losses are astounding; in 2012 there were over 14 million new cases of cancer, and 8.2 million cancer-related deaths (Siegel, Miller, & Jemal, 2015).
There are hallmarks common to cancer tissue which can be used to differentiate malignant cells from healthy ones. These include dysregulation of the cell cycle, cell growth and differentiation, and apoptosis (Hanahan & Weinberg, 2011). Beyond its earliest stages, cancer spreads between different parts of the body in a process known as metastasis (Hanahan & Weinberg, 2011). Metastasis adds a layer of complexity to treatment because metastatic cancers are both mobile and heterogeneous. One of the greatest challenges in treating cancer is the extreme heterogeneity of the disease; different types of tumours often display drastically different genetic and phenotypic characteristics. This heterogeneity ultimately makes cancers highly resistant to almost all forms of treatment (Hanahan & Weinberg, 2011).
The goal of cancer therapy is to specifically eliminate cancerous tissue while minimally impacting healthy tissue, and to maintain the effects of therapy over time without harmful side effects. Currently, common treatment options for cancer include chemotherapy, radiotherapy, and surgery. Chemotherapy uses chemical agents to target cells undergoing DNA replication, a strategy which effectively kills rapidly proliferating cells (Urruticoechea et al., 2010). Radiotherapy also targets rapidly growing cells, but uses radiation rather than chemical agents to kill tumour cells. However, tumour cells often prove resistant to chemotherapy and radiotherapy due to their increased viability and growth rate relative to healthy cells (Kaliberov & Buchsbaum, 2012). Surgery involves the removal of malignant tissues, an approach which is very effective at removing large solid tumours but is often limited in treating small or metastatic tumours (Urruticoechea et al., 2010). Clearly, there is a need for new cancer treatments which are highly effective at targeting cancer cells, while being selective enough to leave healthy cells untouched.
Unlike chemotherapy and radiotherapy, gene therapy has the potential to target tumour cells with a high degree of accuracy. However, current gene therapies to treat cancer are still in experimental stages, and with many technologies available it can be difficult to distinguish between them and evaluate the different strategies. This article will review progress in the gene therapy field along with potential applications to cancer.
Genetic Basis of Cancer
The abnormal regulation of cancer cells is caused by mutations in two classes of genes: proto-oncogenes and tumour suppressors. Proto-oncogenes become cancer-causing oncogenes through ‘gain-of-function’ mutations, which amplify the impact of the gene to harmful levels by, for example, creating a constitutively active protein product. ‘Loss-of-function’ mutations in tumour suppressor genes also facilitate cancer development, as they prevent the gene from producing enough of a protein product that contributes to growth regulation. Tumour suppressor genes perform functions such as arresting cell division or initiating apoptosis. As such, loss-of-function mutations in tumour suppressor genes contribute to the uncontrolled proliferation observed in cancer (Hanahan & Weinberg, 2011).
Gene therapy can be used to target both oncogenes and tumour suppressor genes. Treatments for the former seek to counter or inactivate gene expression and to lower gene product formation, while treatments for the latter seek to restore gene expression. Gene therapy has been successful in treating genetic diseases with single gene defects, including immune disorders (e.g. Wiskott-Aldrich syndrome), blood cell disorders (e.g. beta-thalassaemia) and metabolism disorders (e.g. X-linked adrenoleukodystrophy) (Braun et al., 2014; Cartier et al., 2012; Malik & Arumugam, 2005). Gene therapy can be targeted to both germline and somatic cells; only alterations made to germline cells can be passed on to children.
Gene Therapy Strategy
In order to modulate the expression of oncogenes and tumour suppressors, two main challenges must be overcome. The first challenge is to create a therapeutic agent which alters gene expression, and the second challenge is to deliver the desired therapeutic agent to the target cells. In the early years of gene therapy, both of these challenges were met in a trial that tested the use of a retroviral delivery method of a BRCA1 tumour suppressor gene splice variant to treat breast cancer (Tait, Obermiller, Hatmaker, Redlin-Frazier, & Holt, 1999). During Phase I clinical trials, patients showed tumour suppression with highly effective gene transfer and little immune response, particularly in small tumours. However, Phase II patients showed no response to this form of therapy, because they had developed antibodies against the retroviral envelope (Tait et al., 1999). This immune response is a common shortfall of retroviral vectors (Wu & Dunbar, 2011).
Another well-known study used bone marrow cells transduced with a retrovirus containing a therapeutic gene to treat X-linked severe combined immunodeficiency (SCID-X1). Although immune cell count and function seemed to have been normalized in the young patients, four of the nine treated patients developed leukemia in later stages of the treatment (Hacein-Bey-Abina et al., 2008). The cause of this drastic side effect was found to be retroviral-mediated insertion of the therapeutic γC cytokine receptor gene at an unintended site, which dysregulated a proto-oncogene involved in T cell self-renewal and differentiation (McCormack et al., 2010). Since regulation of T-cell fate was disrupted, the treated cells started to multiply uncontrollably, ultimately leading to cancer.
In order to avoid unintended effects, new methods of gene therapy must target pathogenic mutations in a highly specific manner. Unlike the retroviral insertion strategy used in the SCID-X1 trials, therapeutic genes must be delivered to a specific area of the genome or transcriptome. As gene therapy progresses, safer and more effective choices of therapeutic agents and delivery methods must be explored to avoid the shortfalls of viral vectors as outlined above. Determining the appropriate gene targets will require a more comprehensive understanding of different tumours at a molecular level, a challenge which is outside of the scope of this review. This article will focus on possible approaches to combat cancer once an important proto-oncogene or tumour suppressor gene has been identified. This review covers several promising methods of introducing or silencing target genes once they have been identified, as well as how these methods may be used in combination to optimize the effectiveness of the treatment.
Although the field of gene therapy has advanced rapidly in the past few decades, there are several issues still hindering the development of effective gene therapy treatments. These problems include low efficiency of gene transfer, failure to deliver genes larger than 5 kb in size, regulation of transgene expression in the host cells, and ineffectiveness against autosomal disorders (Yang & Walsh, 2005). This review aims to explain some of the most useful components of the gene therapy toolkit that are applicable towards treating cancers, and catalogue their strengths along with their shortfalls.
Tools for Gene Therapy
Part 1: Designing the Therapeutic Agent
The first challenge after identifying the target gene to be used for gene therapy is to construct a therapeutic agent to alter the expression of the gene of interest. This therapeutic agent must be designed with three main concerns in mind: specificity, efficiency, and transiency. Specificity refers to the ability of the therapeutic agent to alter the expression of the target gene in the target cells without causing undesirable mutations in other parts of the genome or in healthy tissues. Efficiency is primarily concerned with the ability of the therapeutic agent to modify the genome or gene expression at a low dosage. Another important factor is the transiency of treatment, i.e. how long the alterations in gene expression will last in the patient. There are advantages and disadvantages to both transient and lasting variations of gene therapy, and the choice of therapeutic strategy should be informed by the individual case; specificity and efficacy, on the other hand, should always be maximized in order to give the best possible outcome.
SmaRT: Gene Silencing through Pre-mRNA
Spliceosome-mediated RNA trans-splicing (SmaRT) is a technique which corrects mRNA after transcription. Trans-splicing occurs through ligation of exons from different transcripts. Cis-splicing, on the other hand, refers to the conventional eukaryotic mechanism of splicing of a single transcript. Pre-mRNA trans-splicing molecules (PTMs) can be designed to carry a binding domain that targets a specific intron in the normal pre-mRNA (Yang & Walsh, 2005). Together with the spliceosome, the PTM can cause trans-splicing of pre-mRNA with up to 80% efficiency as compared to cis-splicing (Yang & Walsh, 2005). It can be used to repair the mRNA transcripts of aberrant genes, with the added benefit that the effects are relatively reversible. However, designing the PTM is not easy. One approach is to randomly generate a collection of sequences, then use fluorescence-activated cell sorting (FACS) of cells containing these sequences to identify the optimal construct (Yang & Walsh, 2005). In this approach, each PTM carries a portion of a gene encoding the green fluorescent protein, and FACS allows successful trans-splicing events to be selected.
TALENs and ZFNs: Site-Specific Modification
While SmaRT targets pre-mRNA or mRNA and is therefore reversible, other methods of gene therapy cause more permanent alterations by modifying the genome itself. Chimeric nucleases are molecules composed of a sequence-specific domain fused to a nonspecific DNA cleavage molecule, and are capable of inducing double-strand breaks (DSBs) at specific sites in the genome (Gaj, Gersbach, & Barbas, 2013). Unlike SmaRT systems, chimeric nucleases cause permanent changes to the DNA of the target cells, which persist long after expression of the nucleases themselves has ceased.
Zinc finger nucleases (ZFNs) and transcription activator-like effector nucleases (TALENs) are two types of chimeric nucleases, each with a distinct class of customizable sequence-specific domain (Gaj et al., 2013). ZFNs contain multiple zinc finger domains: structural motifs that coordinate a zinc ion within a conserved ββα fold. Residues on the surface of the α-helix bind to three base pairs in the major groove of DNA, so each zinc finger domain recognizes a nucleotide triplet, and an array of zinc finger domains together recognizes a specific sequence (Gaj et al., 2013). As a result, the zinc finger array can bind certain sequences with high specificity; however, only certain nucleotide triplets have corresponding zinc finger domains, so the number of sequences to which ZFNs can bind is limited (Osborn et al., 2013). TALENs, by contrast, are composed of an array of amino acid repeat domains, each of which recognizes only a single base pair. Therefore, TALENs can be engineered to recognize any DNA sequence (Osborn et al., 2013). However, TALENs are more expensive to make than ZFNs because the amino acid repeat domains are complex and difficult to synthesize (Gaj et al., 2013).
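The difference in recognition granularity can be made concrete with a toy calculation: because each zinc finger reads a 3-bp triplet while each TALE repeat reads a single base pair, the same recognition site requires three times as many TALE modules as zinc fingers. The 18-bp half-site length used below is an illustrative assumption, not a figure from the cited studies.

```python
# Toy comparison of recognition-module counts for the two chimeric nucleases
# discussed above: each zinc finger reads a 3-bp triplet, while each TALE
# repeat reads one base pair. (Illustrative arithmetic only; real designs also
# pair two monomers around a spacer so the nuclease domains can dimerize.)
import math

def modules_needed(site_len_bp, bp_per_module):
    """Number of recognition modules needed to cover a binding site."""
    return math.ceil(site_len_bp / bp_per_module)

site = 18  # hypothetical recognition half-site, in base pairs
print("zinc fingers needed:", modules_needed(site, 3))   # triplet readers
print("TALE repeats needed:", modules_needed(site, 1))   # single-bp readers
```

The trade-off the text describes follows directly: fewer, triplet-reading zinc fingers but a restricted target alphabet, versus more numerous single-bp TALE repeats that can cover any sequence.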
Both ZFNs and TALENs can be used to induce DSBs at specific loci with similar frequencies (Hockemeyer et al., 2011). The creation of DSBs can be used to excise sequences from the genome or to insert donor DNA. To remove a sequence (e.g. a harmful oncogene), DSBs can be induced at either end of the sequence and the free ends ligated, eliminating the DNA between the DSBs. To insert a sequence (e.g. a helpful tumour suppressor), the desired DNA sequence can be introduced to the cell along with a chimeric nuclease that makes a single DSB, allowing the donor sequence to be ligated into the DNA. These methods lead to specific and long-lasting genetic modification (Hockemeyer et al., 2011). However, TALENs and ZFNs are both extremely time- and resource-intensive.
Viral Vectors: High-Level Transgene Expression
Viruses proliferate by using the cellular machinery of host cells for the expression of viral proteins and replication of the viral genome. Engineered viruses present an efficient delivery system (see ‘Delivery Method’ below) as well as a high level of expression of transgenes (Waehler, Russell, & Curiel, 2007). Viral vectors can be designed by replacing portions of the viral DNA with the transgene of interest coupled with a promoter which will allow the transgene to be expressed once inside the host cell (Kootstra & Verma, 2003). To prevent the engineered virus from threatening healthy host cells, all portions of the viral genome which are not essential for infection of the host cells are deleted, including genes encoding the viral capsid and other virulence factors. Viral vectors can be used to induce short-term or long-term expression of transgenes, depending on the type of vector used (Kootstra & Verma, 2003). Retroviral vectors insert transgenes into the host genome, causing long-term expression of the genes. In contrast, adenoviral vectors do not integrate into host DNA, so the viral transgenes are expressed and replicated independently of the host genome. These non-chromosomal pieces of genetic material are referred to as ‘episomes’. Because these episomes degrade over a period of weeks to months (depending on the activity of DNA repair and destruction pathways), adenoviral vectors result in only short-term expression of transgenes (Kootstra & Verma, 2003). Short-term expression of transgenes may be useful in some cases, but cannot permanently correct harmful genetic defects. Long-term genetic modification of the host genome via retroviral vectors is one alternative, but poses significant risks: integration of the transgene into the genome can induce mutations at off-target sites in the genome, a process called ‘insertional mutagenesis’ (Wu & Dunbar, 2011). This can result in harmful side effects, including dysregulation of proto-oncogenes and tumour suppressors, as in the SCID-X1 trials.
CRISPR: Targeted Gene Regulation
Clustered Regularly Interspaced Short Palindromic Repeats (CRISPRs) are a bacterial defense mechanism for destroying foreign DNA (Marraffini & Sontheimer, 2010). Currently CRISPR is the most attractive therapeutic agent available for gene editing and silencing in mammalian cells. It remains relatively specific and simple to design compared to other genome editing technologies such as TALENs and zinc finger nucleases. This technology requires only two types of DNA molecules which can be encoded on the same construct and introduced into a variety of hosts: a molecule that encodes Cas (CRISPR-associated system) endonuclease, whose role is to induce a double stranded break, as well as a molecule to encode guide RNA (gRNA) which guides the nuclease to target a specific DNA or RNA element. gRNAs are customizable towards each target, allowing for high versatility. Cas9 is a variety of Cas from the bacteria Streptococcus pyogenes (Wilkinson & Wiedenheft, 2014). Cas9 combined with a small gRNA that targets the promoter region of a gene can cause up to 100-fold repression (Qi et al., 2013). Several recent innovations, including converting Cas9 into a nickase enzyme and using truncated gRNAs, have further increased the specificity of the CRISPR/Cas9 system (Gori et al., 2015). As a result, this technique minimizes the risk of unwanted mutations in the target cells. In addition, the CRISPR/Cas9 system can be used to create targeted modifications at multiple different loci using only one transgene (Dow et al., 2015). The specificity, efficacy and versatility of CRISPR all contribute to the vast potential of this technique in clinical applications.
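To illustrate how candidate gRNA target sites are typically identified computationally, the sketch below scans a DNA sequence for the NGG protospacer-adjacent motif (PAM) required by S. pyogenes Cas9 and reports each 20-nt protospacer immediately upstream. The PAM rule is standard for SpCas9; the demo sequence and function name are illustrative, and real gRNA design pipelines additionally score off-target risk and other features.

```python
# Illustrative sketch: find candidate S. pyogenes Cas9 target sites on the
# plus strand of a DNA sequence by locating NGG PAM motifs and taking the
# 20 nt immediately 5' of each PAM as the protospacer. Real gRNA design also
# checks the reverse strand and scores off-target matches, GC content, etc.

def find_protospacers(seq, spacer_len=20):
    """Return (position, protospacer, PAM) for every N-G-G PAM on the + strand."""
    seq = seq.upper()
    hits = []
    for i in range(spacer_len, len(seq) - 2):
        if seq[i + 1 : i + 3] == "GG":            # PAM pattern is N-G-G
            hits.append((i - spacer_len, seq[i - spacer_len : i], seq[i : i + 3]))
    return hits

# Hypothetical 31-nt sequence, for demonstration only.
demo = "ATGCGATCCGATTACGGATCAGTACCATGGA"
for pos, spacer, pam in find_protospacers(demo):
    print(pos, spacer, pam)
```

Each reported protospacer is a sequence a gRNA could be synthesized against, which is what makes the system "customizable for each target" as described above.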
Part 2: Customizing the Delivery Method
Equally important to selecting the best therapeutic agent out of those available is the task of finding an effective delivery method for introducing each agent. The ideal method must be able to deliver the therapeutic agent to specific tissues where therapy is needed, but must have minimal toxic side effects. The methods explored in this section are summarized in Figure 1.
Liposomes

Liposomes are enclosed phospholipid bilayer structures which can encapsulate oligonucleotides or drugs for delivery. Their size and contents are highly customizable, and they can be targeted to a specific cell type through receptor-mediated endocytosis. Leakage, specificity of targeting, half-life, toxicity versus efficacy (therapeutic index), and success of delivery across the cell membrane are key issues which still need to be overcome in this delivery method (Allen & Cullis, 2013). In order to overcome physiological barriers to access tumours, the ideal liposome would minimize leakage, release its contents only in malignant tissues, have a long half-life in vivo, be non-toxic to healthy cells, and transfect cancer cells with a high degree of efficiency.
Hydrogels

Hydrogels are synthetic pockets formed by the assembly or crosslinking of hydrophilic polymers whose pores may be used to carry therapeutic agents. As a delivery system hydrogels can improve vector transfer into specific cells in vivo because they can release their contents at a rate controlled by the speed of diffusion through the hydrogel (Caló & Khutoryanskiy, 2014). Hydrogel encasement increases vector stability, shields vectors from immune effects, and modulates the length and location of vector delivery (Seidlits, Gower, Shepard, & Shea, 2014). Though their versatility is a strength, the exact properties of the hydrogel system must be fine-tuned and tested before each application to ensure maximum efficiency. These features include but are not limited to pore size, hydrogel shape, charge, pH, biomimetic properties, architecture, and degradation time (Caló & Khutoryanskiy, 2014).
Viral Vector Delivery Systems
Pathogenic viruses have evolved to efficiently infect, or ‘transduce’, target cells (Kootstra & Verma, 2003). Viruses can be engineered to deliver therapeutic agents such as toxic genes or CRISPR, a strategy which is discussed later in this review. The transduction efficiency of viral delivery systems is very high compared to non-viral methods of gene delivery such as electroporation or tissue particle bombardment, but viruses also pose significant risks (Kootstra & Verma, 2003). To minimize these risks, it is necessary to target viral delivery systems to diseased cells; viruses infect cells according to their tropism, i.e. their affinity for a given cell type (Waehler et al., 2007). Viral envelopes can be engineered in a variety of different ways to target desired tissues. Gene transfer using viral vectors can occur in vivo, with the vectors either applied locally or introduced systemically. However, viruses may also be applied to stem cells which are then reintroduced to the patient (see ‘Cell-Based Delivery’ below; Waehler et al., 2007).
Electroporation

Electroporation is a method to introduce aqueous pores into cell membranes by exposing cells to pulsing electric fields (Yarmush, Golberg, Serša, Kotnik, & Miklavčič, 2014). Pores in the membrane bilayer typically remain open for milliseconds but may last for several minutes (Yarmush et al., 2014). These aqueous pores allow passage of genetic material through the membrane bilayer, transfecting the targeted tissues (Kalli, Teoh, & Leen, 2014). The electric fields required for electroporation can be produced by externally applied electrodes, and have been shown to cause no long-term side effects, though in the short term patients experience pain and muscle contractions. Another potential downside is that the transfection efficiency of electroporation varies depending on the type of tissue, and is generally lower than the transduction efficiency achieved by viral vectors (Yarmush et al., 2014).
Tissue Particle Bombardment
Another method of transfecting cells is tissue particle bombardment, also called the ‘gene gun’. Plasmids containing recombinant genes for transfection are coated onto inert gold particles 0.5-3µm in diameter, which are then accelerated to high speeds using either electrodes or pressurized helium (Kitagawa, Iwazawa, Robbins, Lotze, & Tahara, 2003; Yang, Burkholder, Roberts, Martinell, & McCabe, 1990). The particles are then fired at cells at high velocity, allowing them to penetrate the cell membranes and deliver the recombinant genes into the cytoplasm. The size, density and velocity of the particles can be adjusted to allow them to penetrate different types of tissue at different depths, ensuring that only the targeted cells will be transfected (Yang et al., 1990). The transfection efficiency of tissue particle bombardment is very low and usually leads to only transient expression (Kitagawa et al., 2003). This is because the genes carried by the gold particles are not incorporated by the genome but are instead expressed as episomes, which are degraded over time (Kitagawa et al., 2003).
Cell-Based Delivery

One method of ensuring lasting, specific expression of a therapeutic agent is to use stem cells as a method of vector delivery (Sorrentino, 2002). In stem cell gene therapy, adult or embryonic stem cells are isolated in vitro and transfected or transduced with the genes of interest, then introduced into the patient. The stem cells used can be autologous (i.e. isolated from and reintroduced to the same patient) or allogeneic (i.e. obtained from a different donor) (Wu & Dunbar, 2011). Engineering isolated cells in vitro is generally easier than attempting to transfect or transduce cells in vivo. Once introduced, all cell lineages that derive from the transgenic cells will carry the genes of interest. Because stem cells are capable of proliferating indefinitely, the transgenic cells will continue to deliver the gene therapy for the remainder of the patient’s life (Sorrentino, 2002). This makes it possible to permanently correct disease phenotypes; however, the permanent nature of the stem cells also poses significant risks. If the transgenic stem cells acquire unwanted mutations, they can proliferate unsustainably and form tumours (Wu & Dunbar, 2011). Once reintroduced to the patient, engineered stem cells could be very difficult to eliminate in case of harmful side effects. However, these risks can be mitigated by implementing improved protocols for the engineering and screening of stem cells in vitro (Wu & Dunbar, 2011). Rapid advancements in the field of induced pluripotent stem cells and tissue-specific stem cells are making it increasingly possible to engineer stem cells in vitro effectively and without causing unwanted side effects.
Gene Therapy to Target Cancer
Cancer is a difficult disease to treat for several reasons: malignant cells live amongst non-cancerous and often essential tissues; they can metastasize to relocate to other parts of the body; and most forms of cancer have some degree of resistance towards current treatments (Hanahan & Weinberg, 2011). In order to overcome these challenges, a therapy must be chosen which allows for the selective targeting of cancer cells over healthy cells. Ideally, this method could be administered systemically rather than locally, so as to counter metastases in addition to the primary tumour. In addition, it is important to use therapies to which cancer cells cannot easily develop a high degree of resistance. Notably, malignant cells exhibit a range of genetic and phenotypic abnormalities which can allow them to be selectively targeted using the gene therapy toolkit (McCormick, 2001). Several gene therapy techniques aimed at destroying cancer cells are under development. The next section of this review will draw upon the techniques described in the “Tools for Gene Therapy” section to outline potential combinations of therapeutic agents and delivery methods that have been successful or may be successful in the future. These methods can be grouped generally into two main categories: introducing genes toxic to cancer cells, such as tumour suppressor genes; and shutting off oncogene expression in malignant tissues, as shown in Figure 2.
Killing Tumour Cells
One method of killing cancer cells without harming healthy tissues is to use viral vectors, liposomes, or hydrogels to selectively deliver a toxic gene. Exciting results have been achieved using viral vectors to exploit different vulnerabilities in cancer cells. For instance, Gendicine, the first gene therapy in the world to be approved for clinical use, is an adenoviral vector which delivers a recombinant tumour suppressor gene (Chen et al., 2014; Pearson, Jia, & Kandachi, 2004). Inactivation of the tumour suppressor p53 is a critical step in the development of many types of cancers (Chen et al., 2014; Lang et al., 2003). Therefore, application of Gendicine can be used to reactivate p53 and induce programmed cell death in cancer cells (Chen et al., 2014). Additionally, researchers found that adenoviral vectors could be used to deliver fragments of the Shigatoxin1A1 gene, under the control of a promoter activated only in cancerous cells, which are trans-spliced together (using SmaRT technology) inside the cell (Nakayama, Pergolizzi, & Crystal, 2005). The trans-spliced mRNA encoded a toxin which induced apoptosis in cancer cells (Nakayama et al., 2005). Since the viral vector delivered two DNA fragments coding for the 5′ and 3′ fragments of the Shigatoxin1A1 pre-mRNA instead of one DNA fragment coding for the entire gene, vectors could be grown in cells without the toxin killing the hosts (Nakayama et al., 2005).
Liposomes have fewer side effects than viruses, but are also less stable. Suzuki et al. (2010) used liposomes to deliver the immunotherapeutic cytokine gene Interleukin-12 (IL-12) to tumour sites. The transgene was introduced systemically within ‘bubble liposomes’, which can be collapsed by ultrasound. Tumour cells were sonicated to collapse the liposomes in the area surrounding the malignant cells, ensuring that the therapeutic gene was introduced to the tumour specifically. The treatment inhibited protein production and growth in the cancerous tissue (Suzuki et al., 2010). This is an example of a ‘remote triggering system’ which allows liposomes to release their contents only when near the tumour; other methods include light-sensitive and magnetically-responsive liposomes (Suzuki et al., 2010). However, liposomes normally have low efficacy because they degrade easily and often release their contents before reaching the tumour cells (Mufamadi et al., 2011).
New gene therapy techniques have also been applied to the challenge of treating cancer and have produced promising results. For example, hydrogels have been used to deliver therapeutic agents: a thermo- and pH-sensitive chitosan hydrogel has been reported to successfully treat bone cancer in mice by delivering the small-molecule drug doxorubicin (Ta, Dass, Larson, Choong, & Dunstan, 2009). Doxorubicin can be toxic to healthy cells as well as malignant ones, but the advantage of the hydrogel was that different formulations could be composed to tailor the release of the drug at cancerous tissues (Ta et al., 2009). In future trials, hydrogels could represent an ideal replacement for viral delivery because of their low toxicity (Seidlits et al., 2014). One attractive proposal would be to incorporate liposomes inside a hydrogel; this formulation has been shown to allow a controlled yet stable drug release (Mufamadi et al., 2011).
Lastly, gene therapy can be used in conjunction with either chemotherapy or radiotherapy to improve their efficacy. An example of this is delivery of genetically modified hematopoietic stem cells (HSCs). HSCs in the bone marrow are often killed off by chemotherapy, causing severe side effects for the patient (Sorrentino, 2002). By introducing genetically modified, chemotherapy-resistant HSCs, the dose of chemotherapy can safely be increased. A recent study used this method to target glioblastomas, cancers which are often resistant to chemotherapy (Adair et al., 2012). Three patients were treated with autologous hematopoietic stem cells transduced with a retroviral vector to overexpress MGMT-P140K, a mutant DNA-repair gene which confers resistance to chemotherapy given in combination with O6-benzylguanine (6BG). The number of hematopoietic stem cells in all three patients dropped after each cycle of chemotherapy; however, each decrease was followed by an increase in the stem cell count, demonstrating that the population of hematopoietic stem cells was able to recover. An increase in P140K-modified cells was observed following each cycle of chemotherapy, suggesting that the introduction of MGMT-P140K was responsible for conferring chemoprotection and demonstrating the potential of this combinatorial approach (Adair et al., 2012).
Targeting Oncogene Expression
Another important aim in developing cancer therapy is to knock out mutant oncogenes by repressing their expression or removing them from the genome. Gene therapy treatments for correcting genetic diseases offer interesting insight.
Several of the gene therapy tools designed to alter harmful genes have only been tested in animal models but have significant therapeutic potential. The first demonstration of CRISPR as a therapeutic strategy in vivo corrected a diseased phenotype in a mouse model of hereditary tyrosinemia, a disease caused by an inborn error of metabolism (Yin et al., 2014). A gene coding for the Cas9 nuclease, a single guide RNA, and a donor oligonucleotide were injected into the tail vein of the mouse, resulting in reconstitution of the wild-type gene in mouse hepatocytes. Only about 0.4% of cells were corrected, a result likely due to low transfection efficiency (Yin et al., 2014). Preclinical trials involving other methods of genome editing have also been successful. A novel combination of techniques was recently used to tackle recessive dystrophic epidermolysis bullosa, a disease caused by a single point mutation in the COL7A1 gene on chromosome 3 (Osborn et al., 2013). This point mutation usually results in a lack of type VII collagen protein, which leads to fatal skin blistering. Osborn et al. (2013) showed that co-delivery of TALEN DNA and an oligonucleotide donor to patient fibroblasts was able to rescue type VII collagen production. Further, the gene-edited fibroblasts were induced to return to the pluripotent state and then injected into mice, where they went on to produce skin-like structures as healthy, differentiated cells would (Osborn et al., 2013).
The new genome editing methods using TALENs and the CRISPR/Cas9 system show promise in preclinical trials when combined with viral vectors for delivery. Adenoviral vectors can carry a large enough amount of genetic material ('genetic payload') to introduce DNA encoding TALENs or CRISPR components into target cells, allowing cells to be transduced with much greater efficiency. A recent study showed that adenoviral vectors can be used to transduce a range of human cells in vitro with genes coding for the Cas9 nuclease and single guide RNA molecules, resulting in genome modifications at the targeted loci (Maggio et al., 2014). These modifications were achieved with an efficiency of 18–65%, depending on the cell type. Adenoviral vectors have also been used to successfully achieve gene transfer of TALENs into human cells in vitro (Holkers et al., 2013).
Gene therapy has incredible potential for treating cancer because it can target cancer cells on the basis of genetic defects rather than rapid proliferation, making treatments much more specific to tumour cells. This means that gene therapy could be used to treat cancer without the drastically harmful side effects of chemotherapy and radiotherapy, and to destroy metastases and micro-tumours that cannot be removed surgically. However, emerging gene therapies must overcome significant challenges before they can be used to treat cancer patients on a large scale.
Early gene therapy trials such as the SCID-X1 trial in 2000 have called into question the safety of gene therapy, particularly the use of retroviral vectors (Wu & Dunbar, 2011). The failure of the SCID-X1 trial showed that retroviral vectors can cause off-target mutations in the host cells, leading to uncontrolled replication and tumour formation. This is a concern for some applications of gene therapy to treat cancer, particularly the use of drug-resistant stem cells to mitigate the harmful effects of chemotherapy: if the modified stem cells start proliferating at an uncontrolled rate, the patient will experience the formation of a new, drug-resistant tumour (Wu & Dunbar, 2011). The study by Adair et al. (2012) successfully used retroviral vectors to transduce hematopoietic stem cells and confer a drug-resistance gene. To reduce the risk of mutagenesis, retroviral vectors could be replaced by adenoviral vectors carrying a Cas9 nuclease and guide RNA. This approach would combine the high transduction efficiency of viral vectors with the specificity of CRISPR, thereby preventing unwanted mutations which could lead to harmful results.
Another risk of viral vectors is that genetically engineered viruses can trigger an immune response in the patient that destroys the therapeutic vectors. Tait et al. (1999) showed that viral vectors can be used to destroy tumours, but only if the vectors evade the immune system long enough to infect the cancer cells. Viral vectors may therefore be most useful for treating immunocompromised patients, such as those who have undergone many rounds of chemotherapy. Patients with healthy immune systems may benefit more from alternative methods of gene delivery, such as liposomes, tissue particle bombardment, or electroporation. These methods can deliver vectors without triggering an immune response, but are much less efficient than viral vectors. One promising way of increasing the efficiency of liposomes is to combine them with hydrogels to make the vectors more stable. Similarly, hydrogels could potentially be used in conjunction with electroporation to ensure that the vectors transfect as many cells as possible. Electroporation and tissue particle bombardment are most useful for local application of gene therapy, whereas liposomes could be used for systemic application.
Previous experiments with liposomes, electroporation and tissue particle bombardment have mainly been conducted with these delivery methods transferring DNA constructs which are expressed as episomes (Mufamadi et al., 2011; Yang et al., 1990; Yarmush et al., 2014). Therefore, these trials have encountered the issue of transiency: once transfected, the host cells only express the transgene for a short amount of time. This can be a problem if, for example, the aim is to permanently shut down expression of an oncogene. One solution to this problem could be to combine these delivery methods with CRISPR or TALENs, which edit the genomic DNA of the host cells and therefore can induce long-lasting gene expression or knockdown. Yin et al. (2014) demonstrated that while CRISPR can be used to correct gene expression, simply injecting the Cas9 nuclease, guide RNA, and donor oligonucleotide did not result in a high enough transfection efficiency to treat disease in humans. Delivering the necessary components with liposomes, however, could increase the transfection efficiency enough to effectively correct disease phenotypes. The combination of delivery methods such as liposomes and hydrogels with therapeutic agents such as CRISPR and TALENs has the potential to be a powerful tool for gene therapy. Alternatively, combining technologies such as CRISPR with viral delivery systems could result in a highly specific and efficient method for genome editing.
Gene therapy is a relatively new but very promising solution to one of the most important challenges in the field of biotechnology today: the treatment of cancer. Using the gene therapy toolbox outlined in this review, it is possible to specifically target cancer cells by delivering toxic genes to tumour cells, or by altering the expression of oncogenes. Many of the methods in the toolbox have been demonstrated in clinical or preclinical trials to be specific, effective, and long lasting inside the complex system of the body. These trials have shown that these methods already afford a great deal of control over which genes can be modified and how. However, technologies such as CRISPR and SMaRT must still be put to the test in clinical trials, and many barriers remain before gene therapy can be applied routinely to disease. With time, gene therapy may become a safer alternative or complement to the radiotherapy and chemotherapy that are currently the primary cancer-fighting tactics.
Adair, J. E., Beard, B. C., Trobridge, G. D., Neff, T., Rockhill, J. K., Silbergeld, D. L., Mrugala, M. M., & Kiem, H.-P. (2012). Extended survival of glioblastoma patients after chemoprotective HSC gene therapy. Science Translational Medicine, 4(133), 133ra57.
Allen, T. M., & Cullis, P. R. (2013). Liposomal drug delivery systems: from concept to clinical applications. Advanced Drug Delivery Reviews, 65(1), 36–48.
American Cancer Society. (2016). Cancer facts & figures 2016.
Braun, C. J., Boztug, K., Paruzynski, A., Witzel, M., Schwarzer, A., Rothe, M., Modlich, U., Beier, R., Göhring, G., Steinemann, D., Fronza, R., Ball, C. R., Haemmerle, R., Naundorf, S., Kühlcke, K., Rose, M., Fraser, C., Mathias, L., Ferrari, R., Abboud, M. R., Al-Herz, W., Kondratenko, I., Maródi, L., Glimm, H., Schlegelberger, B., Schambach, A., Albert, M. H., Schmidt, M., von Kalle, C., & Klein, C. (2014). Gene therapy for Wiskott-Aldrich syndrome–long-term efficacy and genotoxicity. Science Translational Medicine, 6(227), 227ra33.
Caló, E., & Khutoryanskiy, V. V. (2014). Biomedical applications of hydrogels: a review of patents and commercial products. European Polymer Journal, 65, 252–267.
Cartier, N., Hacein-Bey-Abina, S., Bartholomae, C. C., Bougnères, P., Schmidt, M., Kalle, C. V., Fischer, A., Cavazzana-Calvo, M., & Aubourg, P. (2012). Lentiviral hematopoietic cell gene therapy for X-linked adrenoleukodystrophy. Methods in Enzymology, 507, 187–98.
Chen, G.X., Zhang, S., He, X.H., Liu, S.Y., Ma, C., & Zou, X.P. (2014). Clinical utility of recombinant adenoviral human p53 gene therapy: current perspectives. OncoTargets and Therapy, 7, 1901–9.
Dow, L. E., Fisher, J., O’Rourke, K. P., Muley, A., Kastenhuber, E. R., Livshits, G., Tschaharganeh, D. F., Socci, N. D., Lowe, S. W. (2015). Inducible in vivo genome editing with CRISPR-Cas9. Nature Biotechnology, 33(4), 390–394.
Gaj, T., Gersbach, C. A., & Barbas, C. F. (2013). ZFN, TALEN, and CRISPR/Cas-based methods for genome engineering. Trends in Biotechnology, 31(7), 397–405.
Gori, J. L., Hsu, P. D., Maeder, M. L., Shen, S., Welstead, G. G., & Bumcrot, D. (2015). Delivery and Specificity of CRISPR-Cas9 Genome Editing Technologies for Human Gene Therapy. Human Gene Therapy, 26(7), 443–51.
Hacein-Bey-Abina, S., Garrigue, A., Wang, G. P., Soulier, J., Lim, A., Morillon, E., Clappier, E., Caccavelli, L., Delabesse, E., Beldjord, K., Asnafi, V., MacIntyre, E., Dal Cortivo, L., Radford, I., Brousse, N., Sigaux, F., Moshous, D., Hauer, J., Borkhardt, A., Belohradsky, B. H., Wintergerst, U., Velez, M. C., Leiva, L., Sorensen, R., Wulffraat, N., Blanche, S., Bushman, F. D., Fischer, A., Cavazzana-Calvo, M. (2008). Insertional oncogenesis in 4 patients after retrovirus-mediated gene therapy of SCID-X1. The Journal of Clinical Investigation, 118(9), 3132–42.
Hanahan, D., & Weinberg, R. A. (2011). Hallmarks of cancer: the next generation. Cell, 144(5), 646–74.
Hockemeyer, D., Wang, H., Kiani, S., Lai, C. S., Gao, Q., Cassady, J. P., Cost, G. J., Zhang, L., Santiago, Y., Miller, J. C., Zeitler, B., Cherone, J. M., Meng, X., Hinkley, S. J., Rebar, E. J., Gregory, P. D., Urnov, F. D., Jaenisch, R. (2011). Genetic engineering of human pluripotent cells using TALE nucleases. Nature Biotechnology, 29(8), 731–4.
Holkers, M., Maggio, I., Liu, J., Janssen, J. M., Miselli, F., Mussolino, C., Recchia, A., Cathomen, T., Gonçalves, M. A. F. V. (2013). Differential integrity of TALE nuclease genes following adenoviral and lentiviral vector gene transfer into human cells. Nucleic Acids Research, 41(5), e63.
Kaliberov, S. A., & Buchsbaum, D. J. (2012). Chapter seven–Cancer treatment with gene therapy and radiation therapy. Advances in Cancer Research, 115, 221–63.
Kalli, C., Teoh, W. C., & Leen, E. (2014). Introduction of Genes via Sonoporation and Electroporation. Advances in Experimental Medicine and Biology, 818, 231–54.
Kitagawa, T., Iwazawa, T., Robbins, P. D., Lotze, M. T., & Tahara, H. (2003). Advantages and limitations of particle-mediated transfection (gene gun) in cancer immuno-gene therapy using IL-10, IL-12 or B7-1 in murine tumor models. The Journal of Gene Medicine, 5(11), 958–65.
Kootstra, N. A., & Verma, I. M. (2003). Gene therapy with viral vectors. Annual Review of Pharmacology and Toxicology, 43, 413–39.
Lang, F., Bruner, J., Fuller, G., Aldape, K., Prados, M., Chang, S., Berger, M., McDermott, M., Kunwar, S., Junck, L., Chandler, W., Zwiebel, J., Kaplan, R., & Yung, A. (2003). Phase I trial of adenovirus-mediated p53 gene therapy for recurrent glioma: Biological and clinical results. Journal of Clinical Oncology, 21(13), 2508–2518.
Maggio, I., Holkers, M., Liu, J., Janssen, J. M., Chen, X., & Gonçalves, M. A. F. V. (2014). Adenoviral vector delivery of RNA-guided CRISPR/Cas9 nuclease complexes induces targeted mutagenesis in a diverse array of human cells. Scientific Reports, 4, 5105.
Malik, P., & Arumugam, P. I. (2005). Gene therapy for beta-thalassemia. Hematology: American Society of Hematology Education Program, 2005, 45–50.
Marraffini, L. A., & Sontheimer, E. J. (2010). CRISPR interference: RNA-directed adaptive immunity in bacteria and archaea. Nature Reviews. Genetics, 11(3), 181–90.
McCormack, M. P., Young, L. F., Vasudevan, S., de Graaf, C. A., Codrington, R., Rabbitts, T. H., Jane, S., & Curtis, D. J. (2010). The Lmo2 oncogene initiates leukemia in mice by inducing thymocyte self-renewal. Science, 327(5967), 879–83.
McCormick, F. (2001). Cancer gene therapy: fringe or cutting edge? Nature Reviews. Cancer, 1(2), 130–41.
Mufamadi, M. S., Pillay, V., Choonara, Y. E., Du Toit, L. C., Modi, G., Naidoo, D., & Ndesendo, V. M. K. (2011). A review on composite liposomal technologies for specialized drug delivery. Journal of Drug Delivery, 2011, 939851.
Nakayama, K., Pergolizzi, R. G., & Crystal, R. G. (2005). Gene transfer-mediated pre-mRNA segmental trans-splicing as a strategy to deliver intracellular toxins for cancer therapy. Cancer Research, 65(1), 254–63.
Osborn, M. J., Starker, C. G., McElroy, A. N., Webber, B. R., Riddle, M. J., Xia, L., DeFeo, A. P., Gabriel, R., Schmidt, M., von Kalle, C., Carlson, D. F., Maeder, M. L., Joung, J. K., Wagner, J. E., Voytas, D. F., Blazar, B. R., & Tolar, J. (2013). TALEN-based gene correction for epidermolysis bullosa. Molecular Therapy: The Journal of the American Society of Gene Therapy, 21(6), 1151–9.
Pearson, S., Jia, H., & Kandachi, K. (2004). China approves first gene therapy. Nature Biotechnology, 22(1), 3–4.
Qi, L. S., Larson, M. H., Gilbert, L. A., Doudna, J. A., Weissman, J. S., Arkin, A. P., & Lim, W. A. (2013). Repurposing CRISPR as an RNA-guided platform for sequence-specific control of gene expression. Cell, 152(5), 1173–1183.
Seidlits, S. K., Gower, R. M., Shepard, J. A., & Shea, L. D. (2014). Hydrogels for lentiviral gene delivery. Expert Opinion on Drug Delivery, 10(4), 499–509.
Siegel, R. L., Miller, K. D., & Jemal, A. (2015). Cancer statistics, 2015. CA: A Cancer Journal for Clinicians, 65(1), 5–29.
Sorrentino, B. P. (2002). Gene therapy to protect haematopoietic cells from cytotoxic cancer drugs. Nature Reviews. Cancer, 2(6), 431–41.
Suzuki, R., Namai, E., Oda, Y., Nishiie, N., Otake, S., Koshima, R., Hirata, K., Taira, Y., Utoguchi, N., Negishi, Y., Nakagawa, S., & Maruyama, K. (2010). Cancer gene therapy by IL-12 gene delivery using liposomal bubbles and tumoral ultrasound exposure. Journal of Controlled Release: Official Journal of the Controlled Release Society, 142(2), 245–50.
Ta, H. T., Dass, C. R., Larson, I., Choong, P. F. M., & Dunstan, D. E. (2009). A chitosan-dipotassium orthophosphate hydrogel for the delivery of Doxorubicin in the treatment of osteosarcoma. Biomaterials, 30(21), 3605–13.
Tait, D. L., Obermiller, P. S., Hatmaker, A. R., Redlin-Frazier, S., & Holt, J. T. (1999). Ovarian cancer BRCA1 gene therapy: Phase I and II trial differences in immune response and vector stability. Clinical Cancer Research, 5, 1708–1714.
Urruticoechea, A., Alemany, R., Balart, J., Villanueva, A., Viñals, F., & Capellá, G. (2010). Recent advances in cancer therapy: an overview. Current Pharmaceutical Design, 16(1), 3–10.
Waehler, R., Russell, S. J., & Curiel, D. T. (2007). Engineering targeted viral vectors for gene therapy. Nature Reviews. Genetics, 8(8), 573–87.
Wilkinson, R., & Wiedenheft, B. (2014). A CRISPR method for genome engineering. F1000prime Reports, 6, 3.
Wu, C., & Dunbar, C. E. (2011). Stem cell gene therapy: the risks of insertional mutagenesis and approaches to minimize genotoxicity. Frontiers of Medicine, 5(4), 356–71.
Yang, N. S., Burkholder, J., Roberts, B., Martinell, B., & McCabe, D. (1990). In vivo and in vitro gene transfer to mammalian somatic cells by particle bombardment. Proceedings of the National Academy of Sciences of the United States of America, 87(24), 9568–72.
Yang, Y., & Walsh, C. E. (2005). Spliceosome-mediated RNA trans-splicing. Molecular Therapy: The Journal of the American Society of Gene Therapy, 12(6), 1006–12.
Yarmush, M. L., Golberg, A., Serša, G., Kotnik, T., & Miklavčič, D. (2014). Electroporation-Based Technologies for Medicine: Principles, Applications, and Challenges. Annual Review of Biomedical Engineering, 16, 295–320.
Yin, H., Xue, W., Chen, S., Bogorad, R. L., Benedetti, E., Grompe, M., Koteliansky, V., Sharp, P. A., Jacks, T., Anderson, D. G. (2014). Genome editing with Cas9 in adult mice corrects a disease mutation and phenotype. Nature Biotechnology, 32(6), 551–3.