This article provides an overall review of the current status of the clinical applications of gene therapy and evaluates the clinical success of ex vivo and in vivo gene therapy strategies.
Gene therapy is the introduction of new genetic material into a cell to correct a known disease-causing mutation. Viral vectors are viruses whose genomes have been modified to remove their pathogenicity. By replacing pieces of a viral genome with a known genetic sequence, the viral lifecycle can be harnessed to carry that sequence across the cell membrane and bring the modified genome into contact with a defective cellular genome. To successfully transform the defective genome, viral vectors must overcome physical barriers and immune responses.
In this study, the various floral species of Iture beach were surveyed for seaweed composition, floral zonation and diversity of associated epifauna. No such study has been conducted for specific Ghanaian rocky shores such as Iture beach, although more generalized studies are available concerning the coastal ecology of Ghana as a whole.
Over the next twenty years, the financial burden on the U.S. government to provide quality healthcare to its aging population is expected to rise to $4.2 trillion (U.S. Centers, 2006). One way the U.S. government could reduce the financial impact of this aging demographic is through funding research into the causes of aging, such as telomere shortening. This work presents a method for calculating lifespan based on the rate of telomere shortening.
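A lifespan estimate of this kind can be illustrated with a simple linear model: if telomeres shorten at a roughly constant rate, the time until they reach a critical length gives a crude lifespan figure. The sketch below is hypothetical; the parameter values are commonly cited ballpark figures, not data from this study.

```python
# Illustrative linear lifespan model (all values are assumptions, not study data).

def estimated_lifespan_years(initial_bp, critical_bp, loss_bp_per_year):
    """Years until telomere length falls from initial_bp to critical_bp."""
    if loss_bp_per_year <= 0:
        raise ValueError("shortening rate must be positive")
    return (initial_bp - critical_bp) / loss_bp_per_year

# Ballpark assumptions: ~10 kb at birth, ~4 kb critical threshold,
# ~70 bp lost per year.
print(estimated_lifespan_years(10_000, 4_000, 70))  # ≈ 85.7 years
```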
Flavonoids are natural antioxidants derived from plants and commonly found in foods such as fruits and vegetables, with the ability to sequester free radicals. Quercetin, part of a subclass of flavonoids called flavonols, has received considerable attention because of its prevalence in foods. Quercetin is believed to protect against several degenerative diseases by preventing lipid peroxidation. However, the degree and mechanism of quercetin's absorption in vivo have yet to be definitively determined. It is thought that the predominant glucoside form is converted to the aglycone, which is then converted to one of several quercetin metabolites.
This study focused on the variability of temperature across the Oklahoma City metropolitan area (OKC) during a six-week period in early 2007. Twenty portable temperature and humidity sensors were strategically placed at various fire stations throughout Oklahoma City from 20 February through 1 April 2007. Data from these sensors and two local Oklahoma Mesonet sites were used to evaluate the spatial differences in temperature for the area.
In experimental studies of phase relations in chemical, ceramic, metallurgical, and mineralogical systems, it is fairly rare for pressure-temperature (P-T) diagrams to be fully mapped, i.e., with all the univariant lines directly determined. This typically requires an extraordinarily large number of experiments; in many cases this is impractical or, due to extreme temperature or pressure conditions, sluggish kinetics, or other considerations, effectively impossible. Even when available thermodynamic data allow the slopes of such univariant curves to be calculated, there remains a possibility that the data are not mutually consistent. Fortunately, in a series of classic monographs published in Dutch between 1915 and 1925, F.A.H. Schreinemakers derived and demonstrated the usefulness of a set of rules well suited to overcoming this problem. When some subset of the n+2 univariant lines that meet at an invariant point in an n-component system is known, this set of topologically governed principles, which later came to be known as Schreinemakers' rules, not only allows the remaining univariant lines to be located but also provides insight into the stability of divariant assemblages around the invariant point at various temperatures and pressures. In this paper, we review the 180° rule, the overlap rule, and the half-plane rule, which together make up Schreinemakers' rules, and show how they can be applied to a ternary non-degenerate system in which five phases coexist at an invariant point.
Trypanosoma brucei is the causative pathogen of the fatal human disease African sleeping sickness. For over 160 years, cellular propulsion in Trypanosoma brucei has been described as an auger-like motion, and the current understanding of its motility holds that the cell moves using left-handed helical waves that propagate along the flagellum. In contrast to the uniform flagellar beats of the traditional model, we find that the beat frequency is lower at the posterior end than at the anterior end, suggestive of an alternative and unique mechanism of motility in Trypanosoma brucei. These new studies pioneer a new direction and provide important insight into the actual mode of Trypanosoma brucei's movement. As trypanosome motility is central to disease pathogenesis, parasite development, and disease transmission, this investigation provides a requisite step in efforts to exploit cell motility as a target for disease control in African sleeping sickness.
Sensorimotor gating and locomotion are behaviors that reflect pre-attentive sensory filtering and higher-order, top-down sensory processing, respectively. These processes are thought to affect either the perception of novelty in an environment (filtering) or cognition (higher-order processing), salient features of models of altered states of consciousness (ASC). Drugs with highly selective receptor affinities that produce ASC can help to establish the neural correlates, pathways, and mechanisms underlying ASC. Furthermore, screening for substances that selectively reverse drug-induced departures in sensory processing is valuable for the development of experimental antipsychotics. This study investigated the anomalous opioid subtype, the kappa opioid (KA) system, within the two ASC models.
This work describes an explosives detection technique that uses differential reflectograms to identify the characteristic differential reflectivity spectra of TNT. It does so by measuring the characteristic differential reflectivity (essentially the absorption) of a specimen while the specimen is exposed to high-intensity UV light.
Effect of Monday-Thursday Fasting on the Working Memory of Adult Humans
by: Dyna Rochmyaningsih
Institution: Laboratory of Zoology, Bogor Agricultural University, Indonesia
Date: August 2009
Working memory is the brain's ability to temporarily hold and process acquired information. To perform a memory task, the human brain needs sufficient glucose to meet its energy demand. Thus fasting, a temporary condition of no glucose intake, would logically be expected to impair memory performance. This assumption is supported by the finding that untreated patients with hypoglycemia (low blood glucose) experience confusion and may have trouble concentrating. Previous studies also showed that overnight and morning fasts produced slower memory recall in schoolchildren in the United States and Peru, as well as in college students in the United Kingdom. Metabolic stress and the importance of glucose in the synthesis of acetylcholine were proposed as the reasons for these effects. To the contrary, a previous study in Indonesia showed that daily fasting did not influence the memory of college students but improved the memory performance of teenagers. Following these conflicting conclusions, this study aims to determine whether or not fasting influences the memory performance of adult humans. In this experiment, the blood glucose concentration of each subject was measured electrochemically with a glucometer while visual working memory was tested using a sequential Delayed Matching to Sample (DMS) task. In this task, subjects were asked to match a number of visual stimuli and the order of their appearance, corresponding to the level of the test. To hold the memory of the stimulus, a 1500 ms delay period was given in each trial. After 60 trials, reaction-time and accomplished-level data were extracted from the test and analyzed using regression analysis. The test was done in both fasting and non-fasting conditions for each subject. Results showed that blood glucose levels were high at around 2.5 hours (110-120 mg/dl), decreased within 6 hours, and then remained constant until 24 hours (80-100 mg/dl).
This homeostasis phenomenon was suggested as the reason why fasting duration did not influence memory performance in either reaction time or accomplished level. Fasting duration did not significantly influence reaction time, with P-values of 0.438 in the non-fasting condition and 0.966 in the fasting condition (P>0.05). It also did not significantly influence accomplished level, with P-values of 0.432 in the non-fasting condition and 0.642 in the fasting condition (P>0.05). This experiment did not record hunger, which was suspected to affect attentional memory processes. Moreover, learning effects and habituation are considered more likely to affect working memory performance.
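The regression step described above can be sketched as an ordinary least-squares fit of reaction time against fasting duration; a near-zero slope would indicate no duration effect. The data pairs below are invented for illustration and are not the study's measurements.

```python
# Minimal OLS sketch (illustrative data, not the study's measurements).

def ols_slope_intercept(xs, ys):
    """Ordinary least-squares fit of ys against xs; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical (fasting hours, reaction time in ms) pairs:
data = [(2.5, 610), (6, 595), (12, 602), (18, 598), (24, 605)]
slope, intercept = ols_slope_intercept(*zip(*data))
print(round(slope, 2))  # a small slope, consistent with no duration effect
```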
The glucocorticoid (GC) known as cortisol is secreted from the adrenal gland when someone is exposed to stress from their internal or external environment. Cortisol modulates various physiological functions during stress to help one adapt; however, it also affects cognitive functions. One cognitive function affected by cortisol, and one of increasing interest among scientists, is memory. The various experiments conducted to determine cortisol's effect on memory yield contradictory results. Some experiments show that cortisol weakens a person's ability to retrieve emotional memory and enhances their ability to retrieve neutral memory (Kuhlmann et al., 2005; Tollenaar et al., 2008). Others show cortisol enhances a person's ability to retrieve emotional memory and impairs their ability to retrieve neutral memory (Jelicic et al., 2004; Putman et al., 2004). This review paper focuses on analyzing these experiments and their results in order to expose methodological flaws and, as a consequence, give insight into how to improve future experiments to achieve more valid results. This collective analysis further corroborates the complex functioning of cortisol and how it affects one's ability to recall information of varying emotional valence while undergoing different levels of stress.
In 1997, Stanley Prusiner was awarded the Nobel Prize for his prion hypothesis, which is still used today to describe the propagation of some of the deadliest diseases that affect the central nervous system. However, the prion hypothesis remains controversial, as it does not adequately describe the mechanism by which infection occurs, the inheritance of diseases such as CJD, or the variation among the infected proteins (i.e. "prions"). I propose a counterargument to the prion hypothesis, the "mutagen hypothesis," to explain the uncertain aspects of the pathogenesis of these diseases. This new hypothesis is supported by, and explains, findings from previous experiments performed by other scientists that the prion hypothesis failed to explain.
In the present study, Southern hybridization analysis of Leishmania donovani gDNA was performed to generate a restriction map of the LdLIP3 gene locus and to determine its copy number in the genome. The gDNA was digested with restriction endonucleases and subjected to agarose gel electrophoresis. Restriction fragments were visualized with ethidium bromide staining and subjected to Southern hybridization with a digoxigenin-labeled probe of the full-length LdLIP3 gene. Results indicated that the gene is present as a single copy. Moreover, our data support the hypothesis that L. donovani possesses a gene that encodes a secreted lipase. To date, there are no published studies characterizing any lipase genes in Leishmania; thus this research provides some of the tools necessary to better understand the role of lipases in the biology and virulence of this important human pathogen.
An old rhyme describes the situation in 1918, during the first wave of the influenza pandemic. What started as a mild common cold soon spread worldwide, killing an estimated 40 million people and shattering the previous notion that the flu was not a mortal illness (Pandemic). The results were devastating. The pandemic killed more people than World War I, and more people than four years of the Bubonic plague (Billings, 1997). Since then, two other influenza pandemics have occurred: the 1957 Asian influenza and the 1968 Hong Kong influenza. While both were terrible, the 1918 influenza has been the only flu pandemic with mortality on the order of 1,000 deaths per 100,000 population (Belshe, 2005).
India accounts for nearly 20% of global maternal and child deaths. This could be related to several factors, such as non-utilization or under-utilization of maternal and child health (MCH) services. For proper programme implementation, understanding community knowledge and practices regarding maternal care during pregnancy, delivery, and the postnatal period is required. To assess the utilization of MCH services, a cross-sectional community-based study was conducted in a sub-centre area of the Udupi District. A pre-tested questionnaire was administered to 185 recently delivered mothers, and the data were analyzed using the SPSS (version 13.0) package. Ninety percent of the women had at least 3 antenatal visits, and 96% of them had institutional deliveries. Eighty-nine percent of the women were aware of the different contraceptive methods available. Around 37% of the under-fives were malnourished. Despite these findings, improving community awareness of MCH services is still required.
Parkinson's disease is a progressive neurodegenerative disease caused by the death of midbrain dopaminergic neurons. The misfolding and aggregation of the protein α-synuclein and the accumulation of oxidants are thought to underlie the cell death, but how α-synuclein becomes toxic is unclear. Using fission yeast as a model organism, in which α-synuclein expression induces aggregation but not toxicity, we tested the hypothesis that α-synuclein toxicity can result from the combination of protein aggregation and oxidative stress. Surprisingly, we found that both oxidants tested in our study (hydrogen peroxide and cumene hydroperoxide) resulted in α-synuclein-independent toxicity. Nevertheless, this work illustrates the usefulness of yeasts in evaluating genetic and environmental factors, such as oxidative stress, that may regulate α-synuclein toxicity linked to Parkinson's disease.
The researchers sought a medicinal plant that could serve as a pharmaceutical contraceptive, as women continue to look for safer and more effective options. According to Rainforest Remedies by Dr. Michael Balick and Rosita Arvigo (1998), Mayan women would brew Cedrela odorata bark into a tea and drink it three times a day for three days to cause sterility for one cycle. The researchers therefore investigated whether this preparation could become the contraceptive women were looking for. To assess the efficacy of Cedrela odorata as an effective contraceptive, female (CD-1) mice were administered tea made from the bark three times a day for three consecutive days, followed by copulation with male mice. The tea prevented 92% of pregnancies among the three experimental groups when administered orally for three days at three doses of 0.15 milliliters. All of the control-group mice became pregnant when administered 0.15 milliliters of distilled water on the same schedule. No mortality or behavioral changes were observed from pre-dose days to postcoitum days in any of the control or experimental groups. A two-tailed P value ≤ 0.0001 was obtained when comparing the number of pregnancies in the six experimental groups with that in the two control groups. The findings demonstrate significant contraceptive efficacy of Cedrela odorata bark tea at a dose of 0.15 milliliters. More research is needed to determine the duration of sterility and the bark's mechanism of action. This research may help meet the increasing need for population control, given no clear improvement in contraceptive effectiveness between 1995 and 2002 (Kost et al. 2008). Its plant origin would allow for a relatively cheap and effective contraceptive that could become widely available and accepted.
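A two-tailed comparison of pregnancy counts between treated and control groups can be sketched with Fisher's exact test on a 2x2 table. The abstract does not state which test was used, and the counts below are illustrative assumptions, not the paper's raw data.

```python
# Two-tailed Fisher's exact test sketch (counts are illustrative assumptions).
from math import comb

def fisher_exact_two_tailed(a, b, c, d):
    """Two-tailed Fisher's exact test on the 2x2 table [[a, b], [c, d]]."""
    row1, col1, n = a + b, a + c, a + b + c + d

    def table_prob(x):  # hypergeometric probability of x in cell (1,1)
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    observed = table_prob(a)
    lo = max(0, row1 - (n - col1))
    hi = min(row1, col1)
    # sum probabilities of all tables at least as extreme as the observed one
    return sum(p for x in range(lo, hi + 1)
               if (p := table_prob(x)) <= observed + 1e-12)

# Illustrative counts: 1 of 12 tea-treated mice pregnant vs 8 of 8 controls.
p = fisher_exact_two_tailed(1, 11, 8, 0)
print(p < 0.0001)  # a highly significant difference, as reported in the abstract
```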
Relatively little work has examined the extent to which sad and happy emotional detectors nonconsciously influence executive processes, and whether these mechanisms affect water appraisal and water-consumption behavior. This research examines whether, and to what degree, sad and happy emotional detectors affect water appraisal, water consumption, and attention. Participants were asked to locate and report the position of an asterisk superimposed on subliminal pictures of happy or sad people presented on a computer, pour water from a 1000 mL container into a 473 mL cup, consume as much of the 1000 mL of water as desired, and rate the water on a questionnaire. One-tailed independent-samples t-tests revealed that participants in the "happy" condition detected more asterisks (p < .05), while no significant causal relationships or interactions (p > .05) emerged between the experimental conditions and the appraisal and behavioral processes regarding water. These findings suggest that humans may be unable to devote their full attention to a task when their "sad" emotional detectors are activated. Also, our "happy" and "sad" emotional detectors may not be able to influence water consumption or appraisals.
Recent clinical findings suggest changes in vascular compliance may be responsible for abnormal brain dynamics in diseases like hydrocephalus. Understanding and treating pathological brain dynamics requires a quantitative understanding of the complex interaction between the pulsating vasculature, cerebrospinal fluid, and brain tissue. Models addressing anatomically correct geometry, physiological haemodynamics, and the complete interactions of vasculature and brain tissue are required for this purpose. In this article, a geometrical model of the cerebral vasculature is presented as a first step in the development of a fully distributed mathematical model for quantitative analysis of intracranial dynamics. We present two- and three-dimensional models of the human cerebral vasculature network. The model was generated in two phases. First, the major extracerebral arteries were reconstructed using patient-specific MRI images. Second, a modified version of the algorithm of Beard & Bassingthwaighte generated the microvasculature, starting from the major arteries of the first phase. This fractal-based growth algorithm incorporates vessel and complex domain-boundary avoidance to create the vasculature. Significant findings are: (1) MRI was successful in generating patient-specific geometry of the major cerebral arteries, including the carotid artery, basilar artery, Circle of Willis, and vertebral arteries; (2) microvasculature below the medical imaging resolution was successfully created by the computer algorithm; (3) vessels consistently remained inside the domain boundary and avoided overlap; (4) model capillary density agrees qualitatively with actual human capillary density. However, there were some limitations to the model. The model is not completely consistent with cerebral vascular anatomy, due to limited MRI resolution and the absence of physiological driving forces for vessel growth in the algorithm.
Future work will focus on achieving quantitatively accurate capillary density in the model by incorporating growth factors into the algorithm. Work is also underway to incorporate blood vessel branching factors and constrained optimization techniques into the algorithm. Moreover, as a next step, flow simulations need to be performed to predict blood flow and vessel dilation.
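The constrained growth idea described above can be illustrated with a toy 2-D version: repeatedly sample a candidate point, reject it if it falls outside the domain (here a unit disc), and connect accepted points to the nearest existing node. This is only loosely inspired by the Beard & Bassingthwaighte approach, not their algorithm.

```python
# Toy constrained-growth sketch (not the Beard & Bassingthwaighte algorithm).
import math
import random

def grow_tree(n_nodes, radius=1.0, seed=0):
    rng = random.Random(seed)
    nodes = [(0.0, 0.0)]          # root: a "major artery" entry point
    edges = []                    # (parent_index, child_index) pairs
    while len(nodes) < n_nodes:
        x = rng.uniform(-radius, radius)
        y = rng.uniform(-radius, radius)
        if math.hypot(x, y) > radius:      # domain-boundary avoidance
            continue
        parent = min(range(len(nodes)),
                     key=lambda i: math.dist(nodes[i], (x, y)))
        nodes.append((x, y))
        edges.append((parent, len(nodes) - 1))
    return nodes, edges

nodes, edges = grow_tree(200)
print(len(edges))  # 199: a tree on n nodes has n-1 edges
```

Every accepted node lies inside the disc, mirroring finding (3) that vessels remain inside the domain boundary.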
To the Editor: Because physicians deal daily with human suffering, they should know how to make the best clinical decisions to mitigate it. This premise generates a crucial question: on what should clinicians' decisions be based to achieve this goal? Lately, as discussed by Rosenberg et al. (1995), there is a strong academic consensus that scientific evidence should be the platform supporting medical decisions; it is precisely in this scenario that Evidence-Based Medicine (EBM) is considered a new paradigm for medical practice.
The dengue virus (DENV), with its four unique serotypes, is transmitted by the Aedes mosquito vector in tropical countries worldwide. All serotypes can cause illness ranging from asymptomatic infection and self-limiting flu-like illness to dengue fever (DF), dengue hemorrhagic fever (DHF), and dengue shock syndrome (DSS). DHF and DSS are characterized by a sudden increase in vascular permeability due to a unique set of cytokines released by DENV-infected monocytes. The mechanism by which DHF/DSS arises is largely accepted to be Antibody-Dependent Enhancement (ADE), whereby upon secondary infection with a different serotype, the immune system generates non-neutralizing antibody-antigen complexes that permit a greater number of viruses to enter monocytes via Fc receptors than DENV alone. These DENV-infected monocytes release cytokines, resulting in DHF/DSS, the severe and life-threatening forms of the disease. Although the number of people who develop potentially fatal DHF/DSS is relatively small, the effect of all symptomatic DENV infections on dengue-endemic areas is impetus for the large-scale prevention of viral transmission. Given the serotype-specific immunity to DENV infections and the possibility of ADE-induced DHF/DSS, prevention of DF is critical. There is currently no dengue vaccine for the estimated 2.5 billion people at risk of infection; however, vaccines under development are being designed to provide protective immunity against all four DENV serotypes to minimize the devastating consequences of ADE-induced DHF/DSS.
For decades, India's demography has featured an unnaturally low child sex ratio. Far fewer girls than boys have been born, giving rise to the so-called missing women phenomenon. Most plausibly, this bias can be attributed to the rational discriminatory behavior of parents who prefer sons over daughters. The present demographic development is not only morally reprehensible; it may also have undesirable socio-economic consequences such as criminal violence and social disorder. This paper provides both theoretical and empirical evidence regarding the causes and effects of the missing women phenomenon. It offers a socio-economic theory of son preference based on gender-specific costs and revenues as well as parents' opportunity of choice. The enduring importance of dowry payments and the proliferation and affordability of modern preconceptual and prenatal sex selection techniques are suggested to be the key drivers of the skewed sex ratio. Three hypotheses derived from this theory are tested by means of a district-level multiple regression analysis based on 2001 census data. The regression results confirm that overall economic status is negatively correlated with the child sex ratio. Regarding the relative economic value of females as compared to males, the empirical evidence is mixed. As expected, conservative, anti-female socio-cultural attitudes, as proxied by religion, caste, and political party affiliation, are correlated with lower child sex ratios.
Superconducting radio frequency cavities have gained use in accelerator systems for particle physics research. Careful production of the cavities has the greatest influence on their efficiencies as uniform interior surfaces are required for high accelerating gradients. Small variations in the surfaces of these cavities, such as inclusions, voids, and cracks, cause large deficiencies in the accelerating gradients. Processes to remove such deficiencies usually include eddy current scanning, buffered chemical polishing, and electropolishing. These methods do not provide a consistent means of producing a uniform interior surface. The effectiveness of tumbling as a mass finishing technique was analyzed. This process completely removed the weld line. The effects of weld line removal on cavity efficiencies will be examined.
Carbon capture and sequestration (CCS) technologies are currently being researched as a potential component of a global portfolio of technologies to help reduce anthropogenic emissions of carbon dioxide (CO2) to the atmosphere. In China, currently a leading emitter of CO2 and a potentially critical player in future carbon emissions reduction strategies, it is important to evaluate the economic feasibility of CCS to understand its potential for large-scale deployment. This paper describes the development of a high-resolution geospatial model to assist in efforts to estimate the construction costs of pipelines for transport of CO2 from sources to storage sites. The model assigns relative weights to geographic features throughout mainland China to form a relative prioritization map that may be used to model pipeline routing along paths that are likely to represent the lowest cost paths. The final routing priority map (RPM) differentiates between areas according to their relative cost for routing from sources to sinks. The RPM represents the weighted combination of all overlapping geographic and cultural features included in the model. By using the RPM in conjunction with a routing protocol, grid cells with low priority values (i.e., those for which construction and/or societal costs would be higher) would be avoided in favor of cells with higher priority values, all else equal. This mode of estimating least-cost pipeline routing could represent a significant enhancement to existing methodologies used to estimate CO2 transport costs for CCS in China.
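The pairing of a cost raster with a routing protocol can be sketched as a least-cost path search over a grid, for example with Dijkstra's algorithm. The 3x3 grid and 4-neighbor rule below are invented for illustration; the real model's weights and routing protocol may differ.

```python
# Least-cost grid routing sketch (illustrative grid; not the paper's model).
import heapq

def least_cost_path(cost, start, goal):
    """Dijkstra over a grid where entering a cell incurs that cell's cost."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# High numbers = costly cells (e.g. urban areas, rivers); the route skirts them.
grid = [[1, 1, 1],
        [9, 9, 1],
        [1, 1, 1]]
path, total = least_cost_path(grid, (0, 0), (2, 0))
print(total)  # 7: the route detours around the high-cost cells (direct = 11)
```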
Lithium-ion (Li-ion) batteries are key to the development of Hybrid Electric Vehicles (HEVs) and Plug-in Hybrid Electric Vehicles (PHEVs). One challenge to overcome that will lead to better-working and longer-lasting batteries for more efficient hybrid vehicles is to change and refine the cathode material. The objective of the research was to create a new cathode material that could overcome the constraints associated with the traditional LiCoO2 cathode by reducing or eliminating the amount of costly cobalt and adding a combination of nickel and manganese to the transition metal layer. The compound Li[Ni0.5Mn0.5]O2 has previously been shown to increase the capacity and energy of the battery while still retaining the layered formation of the LiCoO2. The precursor material, which consisted of either nickel cobalt manganese hydroxide or nickel manganese carbonate, was reacted with sodium carbonate in an oven at varying temperatures. The Na[M]O2, with M being the combination of cobalt, nickel, and manganese, underwent an ion exchange in hexanol with an excess of lithium bromide to obtain Li[M]O2. Initial rate tests examining the performance and life of the batteries have demonstrated that this new cathode material operates slightly better than the current baseline material and has the potential to lead to better batteries for HEVs and PHEVs. Further studies must be done to continue to maximize the performance of these materials.
The Reactor Excursion and Leak Analysis Program (RELAP5-3D) is a world-class thermal hydraulics safety analysis code developed at the Idaho National Laboratory (INL) to address safety concerns in light water nuclear reactors. The purpose of this project is to implement viscous effects into the current RELAP5-3D code. Viscous effects have been thoroughly studied and implemented in many Computational Fluid Dynamics (CFD) codes. However, because of the original purpose of RELAP5-3D, viscous effects on fluid dynamics were not implemented during initial code development. As demand for coupling RELAP5-3D with CFD codes increases, implementing viscous effects in RELAP5-3D is becoming more important, both for calculation accuracy and for code-coupling stability. The momentum flux equations used in RELAP5 resemble the Navier-Stokes equations (NSE) but do not include the viscous contributions. Therefore, a double central finite difference method is used to discretize the second-order differentials of the viscous terms in both Cartesian and cylindrical geometries. A total of 20 new terms were introduced, and the original RELAP5 wall boundary condition was changed from free slip to no slip. The original code architecture was not changed during any of the coding, to avoid introducing new limitations.
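The double central finite difference mentioned above can be illustrated with the standard second-order stencil for a second derivative, (u[i-1] - 2u[i] + u[i+1]) / dx². This is a generic one-dimensional sketch, not the RELAP5-3D implementation.

```python
# Standard central-difference stencil for d2u/dx2 (generic sketch).

def second_derivative(u, dx):
    """Central-difference d2u/dx2 at the interior nodes of a 1-D grid."""
    return [(u[i - 1] - 2 * u[i] + u[i + 1]) / dx ** 2
            for i in range(1, len(u) - 1)]

# Verify on u = x**2, whose second derivative is exactly 2 everywhere:
dx = 0.1
u = [(i * dx) ** 2 for i in range(6)]
print(second_derivative(u, dx))  # each entry ≈ 2.0
```

The stencil is exact for quadratics, which makes u = x² a convenient check.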
The optical and electrical properties of doped solution-deposited and rf sputter-deposited thin metal oxide films were investigated following post-deposition pulsed laser irradiation. Solution-deposited films were annealed at 450 ºC. Following this heating regimen, the transparent metal oxide films were subjected to 355 nm pulsed Nd:YAG laser irradiation (4 ns pulse width) at fluences between 5 and 150 mJ/cm2. Irradiation times at a pulse frequency of 30 Hz ranged from seconds to tens of minutes. Film densification, index change, and a marked increase in conductivity were observed following irradiation in air and under vacuum of Al:ZnO (AZO), Ga:ZnO (GZO), and In:ZnO (IZO) films deposited on silica substrates. Despite the measured increase in conductivity, all films continued to show high transparency, on the order of 90%, at wavelengths from the band edge well into the near-infrared region of the spectrum. The laser energies required for turning on the conductivity of these films varied depending upon the dopant. Irradiations in air yielded resistivity measurements on the order of 16 Ωcm. Resistivities of films irradiated under vacuum were on the order of 0.1 Ωcm. The increase in conductivity can be attributed to the formation of oxygen vacancies and the subsequent promotion of free carriers into the conduction band. All irradiated films became insulating again after around 24 hours: oxygen atoms in air were reduced by electrons in the metal conduction band and diffused into vacancies in the lattice. The rate of this reduction process depends on the type of dopant. This work also sheds light on the damage threshold, correlating the optical properties with the presence of free carriers that have been introduced into the conduction band. All films were characterized by means of UV-VIS-NIR transmission spectroscopy, visible and UV Raman spectroscopy, and Hall measurements.
Analysis of interference fringes in measured transmission spectra allowed film density and refractive index to be evaluated while the Raman measurements showed an increase in LO mode intensity with respect to the TO mode intensity as the films became more conducting. Results of this study are not only important for the continued development of transparent conducting oxide films that find use in photovoltaic cells and solid state lighting modules, but also provide evidence for the role of free carriers in initiating the laser damage process in these wide bandgap metal oxide films.
The outermost layer of the Continuous Electron Beam Accelerator Facility (CEBAF) Large Acceptance Spectrometer (CLAS) at the Thomas Jefferson National Accelerator Facility consists of an electromagnetic calorimeter (EC). The EC is composed of alternating layers of scintillator strips and lead sheets and covers a large portion of the angular range. The EC is used to detect particles, such as electrons, photons, and neutrons, that meet a certain energy threshold. When a particle enters the EC, it loses some of its energy and creates a shower of light that is picked up by the scintillators. The location and amount of this detected light are then used to help identify where, and with what energy, a particle entered the EC. However, this system has some inefficiency built into it; for example, its layers of lead absorb some of the light instead of recording it. Since the EC contains some unavoidable inefficiency and particles do not always deposit all of their energy, the energy recorded in the EC needs to be analyzed periodically to determine what corrections need to be made to identify the true energy of incident photons. Creating the correction helps to better identify photons and to better understand the intrinsic inefficiency of the EC. The focus of this project was to create a correction for the energy of photons detected in the EC and then to use this correction to understand EC inefficiency. To do this, a reaction was chosen that involves only electrons, protons, and neutral pi mesons that decay into two photons. The invariant mass of the photon pairs, which were considered to have originated from the same decay, was then compared to the known theoretical mass of the neutral pi meson. The known mass was used to identify how the energy of photons needed to be corrected at different energy levels. It was found that a correction function could be created to increase the accuracy of photon reconstruction.
The correction function that was discovered varies considerably from a previous correction that was implemented in 2006. The new correction will be used to analyze data that contain photons at 0.3 GeV and greater. With this new correction implemented, the accuracy of assigning energy to incident photons in the EC has increased. Also, we used this information to estimate the overall efficiency of the EC in detecting photons, a process that has not been performed previously on CLAS.
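The invariant-mass comparison above is a short computation for massless photons, m² = 2E₁E₂(1 − cos θ). A minimal sketch (the photon energies are illustrative, and the single multiplicative scale factor shown is a simplification of the energy-dependent correction actually derived in the project):

```python
import math

PI0_MASS_GEV = 0.13498  # known neutral pion mass

def invariant_mass(e1, e2, opening_angle):
    """Invariant mass (GeV) of two massless photons with energies e1, e2
    (GeV) separated by opening_angle (radians):
    m^2 = 2*e1*e2*(1 - cos(theta))."""
    return math.sqrt(2.0 * e1 * e2 * (1.0 - math.cos(opening_angle)))

def energy_scale_factor(measured_peak, true_mass=PI0_MASS_GEV):
    """Multiplicative photon-energy scale that would shift a measured
    pi0 peak onto the known mass (m scales linearly with a common
    energy scale applied to both photons)."""
    return true_mass / measured_peak

# A pi0 at rest decays into two back-to-back photons of 0.0675 GeV each,
# reproducing the 0.135 GeV invariant mass.
print(invariant_mass(0.0675, 0.0675, math.pi))
```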
The crystallization of large octahedral crystals of spinel during the high-level waste (HLW) vitrification process poses a potential danger to electrically heated ceramic melters. Large spinel crystals rapidly settle under gravitational attraction and accumulate in a thick sludge layer that may partially or completely block the glass discharge riser of the melter. The settling of single particles of different sizes and the motion of the hindered-settling front in suspensions of different particle volume fractions were studied in stagnant, transparent silicone oils at room temperature to help predict the settling behavior of spinel crystals in the riser. The dimensions and terminal settling velocities of single particles were measured using an optical particle dynamics analyzer. The data yielded an experimental shape factor for glass beads that differed by only 0.73% from the theoretical shape factor for a perfect sphere. The experimental shape factor for the spinel crystals was smaller than that of the beads, given the larger drag force caused by the larger surface-area-to-volume ratio of the octahedral crystals, but matched the theoretically predicted value to within 10%. In the hindered settling experiments, both the glass bead and spinel suspensions were found to follow the predictions of the Richardson-Zaki equation, with higher particle volume fractions settling at a slower rate. Particle concentration profiles obtained from color threshold analysis (CTA) indicated that, for a given volume fraction, the rate of clarification increases with an increase in settling vessel angle with respect to the vertical, as predicted by the Ponder, Nakamura and Kuroda (PNK) model. The Stokes', Richardson-Zaki and PNK equations can adequately predict the accumulation rate of spinel crystals in the vertical or inclined glass discharge riser of HLW melters.
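The single-particle and hindered-settling predictions invoked above can be written in a few lines. A sketch with illustrative particle and fluid properties (not the specific oils or crystals used in the study):

```python
def stokes_velocity(d, rho_p, rho_f, mu, g=9.81):
    """Stokes terminal settling velocity (m/s) of a small sphere of
    diameter d (m) with density rho_p (kg/m^3) in a fluid of density
    rho_f and viscosity mu (Pa*s): v = (rho_p - rho_f)*g*d^2 / (18*mu)."""
    return (rho_p - rho_f) * g * d**2 / (18.0 * mu)

def hindered_velocity(v_terminal, phi, n=4.65):
    """Richardson-Zaki hindered settling velocity at particle volume
    fraction phi; the exponent n ~ 4.65 applies in the
    low-Reynolds-number limit, so higher phi settles more slowly."""
    return v_terminal * (1.0 - phi) ** n

# Illustrative: a 100-micron spinel-density particle in a viscous oil.
v = stokes_velocity(100e-6, 5200.0, 970.0, 0.1)
print(v, hindered_velocity(v, 0.10))
```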
Organisms across the evolutionary scale are equipped with complex and interconnected DNA repair pathways that are regulated by multifunctional proteins. These proteins mediate interactions by conformational changes and protein hand-offs in order to coordinate lesion detection and removal with vital cellular processes such as DNA replication, transcription and recombination. Mutations that disrupt repair protein functioning can lead to genomic instability, developmental and immunological abnormalities, and cancer and aging. Xeroderma pigmentosum group G (XPG) is one such multifunctional protein that plays a critical role in maintaining human genome stability. Point mutations in the XPG gene give rise to an inherited photosensitive disorder, xeroderma pigmentosum (XP), while truncation mutations cause the profound neurological and developmental disorder Cockayne syndrome (CS) combined with XP. The molecular basis of XPG in XP is well understood because XPG contains structure-specific 3′ endonuclease activity that is critical to the repair of ultraviolet-damaged DNA in the nucleotide excision repair (NER) pathway. However, the clinical features of CS in XPG-CS patients are difficult to explain on the basis of defects in NER, which suggests that XPG possesses several poorly understood roles that are regulated by its unstructured non-enzymatic recognition (R) and carboxyl (C) terminal domains. These domains have been shown to mediate interactions with over fifteen proteins from multiple repair pathways. Studies conducted on these regions have identified novel scaffolding roles for XPG in transcription-coupled repair (TCR) and base excision repair (BER), and recently a replication-associated function with proteins that process damaged replication forks. How XPG is involved in multiple pathways is of considerable interest.
Considering the role of the XPG C-terminus in protein-protein interactions, this study involved bacterially expressing and purifying three sequential C-terminal subdomain constructs and screening interactions with seven proteins representing roles in different DNA replication and repair pathways. All seven proteins were found to interact with the same region of the C-terminus, which provides information critical for identifying the amino acids uniquely required by each protein partner. This will allow one to genetically dissect the molecular basis of XPG in order to elucidate and remedy its involvement in the complex disease phenotype of XP-CS.
All materials have a property of thermal conductivity (k). As a measure of the ability to conduct heat energy, k is a valuable number in heat transfer design and analysis. Knowing a material's thermal conductivity allows for proper selection in its use. The Applied Engineering Technology group at Los Alamos National Laboratory wishes to measure the thermal conductivity of various solid samples with minimal error. An apparatus was built in an attempt to measure a large range and variety of samples using a method combining ASTM standards and the writer's ideas. The sample was heated on one end and cooled on the other. The temperature distribution across the sample was measured and a value of k calculated using Fourier's law. For the one material tested, graphite, the measured values were 129.16 ± 4.69 W/m·K in the first run, 126.63 ± 2.90 W/m·K in the second, and 127.31 ± 2.27 W/m·K in the final run. The expected result based on the manufacturer's data sheet was 130 W/m·K. The apparatus can now be used to measure various samples. Additionally, methods have been developed to estimate the errors associated with each new measurement.
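The k calculation from Fourier's law for steady one-dimensional conduction reduces to a single expression. A minimal sketch (the heater power, geometry, and temperature drop below are illustrative numbers, not the apparatus values):

```python
def thermal_conductivity(q, length, area, delta_t):
    """Thermal conductivity k (W/m*K) from Fourier's law for steady
    one-dimensional conduction, q = k*A*dT/L, rearranged to
    k = q*L/(A*dT).
    q: heat flow through the sample (W)
    length: distance between the two temperature probes (m)
    area: sample cross-sectional area (m^2)
    delta_t: temperature drop over that length (K)."""
    return q * length / (area * delta_t)

# Illustrative: 65 W through a 1 cm^2 sample with a 100 K drop over 2 cm
# gives k = 130 W/m*K, comparable to the graphite data-sheet value.
print(thermal_conductivity(65.0, 0.02, 1.0e-4, 100.0))
```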
To provide a more user-friendly environment and a clearer benchmark for computational efficiency, and to promote America's energy security through reliable, clean and affordable energy, a version of the Network Protocol Independent Performance Evaluator (NetPIPE) was created entirely in the object-oriented Python language. NetPIPE performs simple ping-pong tests for increasing message sizes to determine network bandwidth and latency. The base code created last year by Science Undergraduate Laboratory Internship student Torrey Dupras, which implemented NetPIPE using Python and a Python module written in C, was modified to be purely Python, and its efficiency was compared with the previous version. The NetPIPE package was also modified to include code that documents power use during a NetPIPE experiment and outputs the results of a NetPIPE run using the Python matplotlib module to show graphs of various data. The power data were obtained using a Watts up? PRO meter, which registers the base power consumption of a device once per second. The results of the investigation revealed that, for both implementations, there appeared to be a correlation between network bandwidth and rate of energy consumed. Further, the Python module had about one-half the peak bandwidth of the C module; however, it was much more portable to operating systems other than Linux. As energy rather than computing speed becomes the dominant factor in computer performance, these experiments could provide a base for efficiency measurements in the future and also a greater ease of access for those wishing to perform those measurements.
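NetPIPE's core measurement is a length-stepped ping-pong: send a message, wait for the echo, and divide bytes moved by time elapsed. The self-contained sketch below illustrates that idea over a local socket in pure Python; it is an illustration of the measurement, not the NetPIPE code itself, and all names in it are hypothetical:

```python
import socket
import threading
import time

def recv_exact(conn, size):
    """Read exactly size bytes from a socket, or raise on disconnect."""
    data = b""
    while len(data) < size:
        chunk = conn.recv(size - len(data))
        if not chunk:
            raise ConnectionError("peer closed")
        data += chunk
    return data

def echo_server(listener, n_messages):
    """Accept one connection and echo n_messages length-prefixed messages."""
    conn, _ = listener.accept()
    with conn:
        for _ in range(n_messages):
            size = int.from_bytes(recv_exact(conn, 8), "big")
            conn.sendall(recv_exact(conn, size))

def ping_pong(max_exp=15, host="127.0.0.1"):
    """Return (message_size, bandwidth_bits_per_sec) pairs for message
    sizes 1 B up to 2**max_exp B, measured over a loopback round trip."""
    listener = socket.socket()
    listener.bind((host, 0))
    listener.listen(1)
    n = max_exp + 1
    threading.Thread(target=echo_server, args=(listener, n), daemon=True).start()

    results = []
    with socket.create_connection((host, listener.getsockname()[1])) as client:
        for exp in range(n):
            size = 1 << exp
            payload = b"x" * size
            start = time.perf_counter()
            client.sendall(size.to_bytes(8, "big") + payload)
            recv_exact(client, size)
            elapsed = time.perf_counter() - start
            # A round trip moves 2*size bytes; report bits per second.
            results.append((size, 8 * 2 * size / elapsed))
    listener.close()
    return results
```

On loopback the measured "bandwidth" mostly reflects interpreter and kernel overhead at small sizes, which is exactly why NetPIPE sweeps message lengths: latency dominates small messages and throughput dominates large ones.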
New solar cell architectures leverage nanostructured materials in an attempt to achieve high light-to-electricity conversion efficiencies using low-cost materials and processes. One such example is the dye-sensitized solar cell, wherein light is absorbed by an organic dye sensitizer (rather than by a semiconductor, as in a traditional solar cell), and the photogenerated charge transports out of the device through a nanostructured percolating titanium dioxide (TiO2) network. Because organic dyes have a limited spectral absorption range, they are not readily suited to capture all incident solar energy. Inorganic semiconductor quantum dots represent an alternative solar cell sensitizer with potential advantages because their spectral light absorption can be controlled by their size and composition. In principle, one can design a device having a spectral absorbance range well matched to the incident solar spectrum by using an array of differently sized quantum dots, thereby providing a pathway to higher performance efficiencies. We use solution-phase chemistry to synthesize cadmium sulfide (CdS) quantum dots with precise diameter control over the range of 2 to 10 nanometers, and corresponding control of peak optical absorption from 320 nm to 365 nm. We produce CdS particles by a reverse micelle method with a surfactant (AOT in heptane), allowing further integration into thin film devices using solution processing. We have characterized the optical properties of thin films of both CdS and TiO2 nanocrystals using ultraviolet-visible spectroscopy in order to determine their absorbance. We have measured the nanocrystal film morphologies (size, structure, and thickness) using scanning electron microscopy and profilometry in order to understand the effects of different methods of film deposition (spin coating versus doctor-blading). Spin coating of both CdS and TiO2 nanocrystals yields uniform, three-dimensional nanocrystalline thin films.
We have fabricated nanocrystal thin film devices by sandwiching nanocrystalline films of either CdS or TiO2 between a transparent indium-tin oxide electrical contact and an aluminum contact deposited by thermal evaporation. In both CdS and TiO2 nanocrystal devices, the device current increases with applied voltage. Under simulated solar illumination, the conductance of both CdS and TiO2 devices increases, consistent with excitation of photogenerated carriers in the semiconductor nanocrystal film network.
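The size-tunable absorption described above follows from quantum confinement; a common first estimate is the Brus model, in which the effective gap grows as the dot shrinks. The sketch below is illustrative only: the effective masses and dielectric constant are generic demonstration inputs, not parameters measured or fitted in this work.

```python
import math

# Physical constants (SI)
H = 6.626e-34         # Planck constant, J*s
M0 = 9.109e-31        # electron rest mass, kg
E_CHARGE = 1.602e-19  # elementary charge, C
EPS0 = 8.854e-12      # vacuum permittivity, F/m

def brus_gap_eV(radius_m, bulk_gap_eV, me_rel, mh_rel, eps_rel):
    """Effective optical gap (eV) of a spherical quantum dot from the
    Brus model: bulk gap + particle-in-a-sphere confinement term
    - electron-hole Coulomb correction. All material parameters here
    are illustrative inputs, not values from this study."""
    confinement = (H**2 / (8.0 * radius_m**2)) * (
        1.0 / (me_rel * M0) + 1.0 / (mh_rel * M0))
    coulomb = 1.8 * E_CHARGE**2 / (4.0 * math.pi * eps_rel * EPS0 * radius_m)
    return bulk_gap_eV + (confinement - coulomb) / E_CHARGE

# Smaller dots give a wider gap, i.e. a blue-shifted absorption peak.
print(brus_gap_eV(2e-9, 2.42, 0.19, 0.8, 5.7),
      brus_gap_eV(5e-9, 2.42, 0.19, 0.8, 5.7))
```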
Biochemically produced cellulosic ethanol holds promise for the future of renewable liquid transportation fuels. Cellulosic ethanol must demonstrate economic and large-scale viability in order to realize its full potential, and therefore intermediate processes such as enzymatic cellulose digestion must be more fully understood. In this study, the digestibility of dilute sulfuric acid pretreated corn stover was explored with respect to pretreatment severity, particle size, and regimen of pH adjustment before enzymatic hydrolysis. A series of experiments was performed at 20% insoluble solids by weight, which is considered a high solids concentration because the material has an appreciable yield stress. Conversion of cellulose to glucose and cellobiose by 20 mg protein/g cellulose GC220 enzyme was assessed over a period of 7 days. Each sample consisted of 60 g of slurry in 125-mL cylindrical bottles rotated on mechanized rollers at 4 rpm and incubated at 48.5°C. Enzymatic hydrolysis of pretreated insoluble solids suspended in DI water and citrate buffer was performed in duplicate with and without mechanical size reduction. GC220 enzyme is most effective at a pH near 5, thus necessitating the buffer in the insoluble solids samples. Pretreated whole biomass slurries include soluble and insoluble solids and have a pH around 1.5–2.0. Ammonium hydroxide was mixed into the whole slurry samples to increase the pH before enzyme addition. Results showed that increasing the extent of pretreatment increases the digestibility of the pretreated insoluble biomass samples in both cases. Despite particle size reductions of 10–40%, there was little difference in conversion between the two particle size distribution materials. The results indicate that pretreatment is effective and that the well-established correlation between smaller particle size and higher digestibility is not a simple causal relationship.
In all cases, whole slurry is more difficult to break down with enzymes than insoluble solids suspended in the equivalent amount of water, but ultimately this challenge will need to be overcome to efficiently convert biomass to fuels. This series of experiments begins to show the effect that process-relevant conditions have on cellulose digestibility. Further investigation into these operations can lead to modifications and improvements that may help cellulosic ethanol become the liquid fuel of the future.
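Conversion in digestions like these is commonly computed by correcting the released sugars back to anhydroglucose equivalents, using the mass ratios 162/180 for glucose and 324/342 for cellobiose. A minimal sketch of that arithmetic, with illustrative inputs (this is the standard convention for enzymatic-saccharification assays, not necessarily the exact bookkeeping used in this study):

```python
def cellulose_conversion(glucose_g, cellobiose_g, initial_cellulose_g):
    """Fraction of cellulose converted, correcting released sugars back
    to anhydroglucose equivalents: cellulose gains one water per glucose
    on hydrolysis (180/162) and one per cellobiose (342/324), so the
    sugars are scaled by 162/180 and 324/342 before dividing by the
    starting cellulose mass."""
    anhydro_equivalents = (glucose_g * (162.0 / 180.0)
                           + cellobiose_g * (324.0 / 342.0))
    return anhydro_equivalents / initial_cellulose_g

# Illustrative: 0.8 g glucose and 0.1 g cellobiose from 1 g cellulose.
print(cellulose_conversion(0.8, 0.1, 1.0))
```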
The 12C(α, γ)16O reaction rate strongly affects the relative abundances of chemical elements, as well as when core collapse supernovae occur. There have been several attempts to measure the reaction rate, but the Coulomb barrier between the carbon nucleus and the α-particle inhibits direct measurement at stellar energies. In a proposed experiment, a water-filled bubble chamber will be used to measure the reverse reaction rate. This technique will accurately measure the reaction rate closer to stellar energies than previous experiments have accomplished. A potential background source is photoneutrons from the γ-ray beam collimator entering the bubble chamber and generating a false signal. To minimize this effect, a Monte Carlo simulation has been performed to compare the number of photoneutrons created in lead, copper, and aluminum collimators. It was found that 30 cm of copper would be an effective beam collimator by stopping 99.8% of γ-rays and generating no photoneutrons. The simulation also compared the effectiveness of concrete, polyethylene, and water as neutron shields. These simulations show that polyethylene consistently stops the most neutrons at relevant energies. Further simulation will be required to evaluate shielding materials for cosmic ray neutrons, which can also generate false signals.
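The collimator comparison above rests on a simple Monte Carlo idea: sample each photon's interaction depth from an exponential distribution and count how many are stopped within the collimator length. A minimal sketch of that sampling step (the attenuation length below is illustrative, not a value from the study, and the real simulation also tracks secondary particles such as photoneutrons):

```python
import math
import random

def fraction_stopped(thickness_cm, atten_length_cm, n=100_000, seed=1):
    """Monte Carlo estimate of the fraction of photons absorbed in a
    slab of the given thickness: each photon's interaction depth is
    drawn from an exponential distribution with the given attenuation
    length, and photons interacting before the far face count as
    stopped."""
    rng = random.Random(seed)
    stopped = sum(1 for _ in range(n)
                  if rng.expovariate(1.0 / atten_length_cm) < thickness_cm)
    return stopped / n

# Illustrative: a 30 cm slab with a few-cm attenuation length stops
# essentially all photons; the analytic answer is 1 - exp(-L/lambda).
print(fraction_stopped(30.0, 4.8))
```

The Monte Carlo estimate should agree with the closed-form attenuation 1 − exp(−L/λ) to within statistical error, which is the basic sanity check before adding geometry and secondary-particle physics.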
Obtaining structural information on nanostructured materials often requires electron microscopy for sufficient spatial and crystallographic resolution. This study uses Raman spectral imaging to extract information regarding crystalline orientation and structure by non-invasive means. Seeking a correlation between crystallographic facet and favored Raman mode, gallium nitride (GaN) nanowires were imaged by confocal Raman microscopy with a 532 nm laser and by scanning electron microscopy. Raman spectral maps containing pixel-by-pixel spectra were acquired. Comparison to scanning electron microscope (SEM) images revealed that, for regularly shaped wires at least 230 nm in width, the E2 mode is observed more strongly in the [1 1 2] and [-1 -1 2] "smooth" facets of the wire, while the A1(TO) mode is only observed in the [0 0 1] "rough" facet, suggesting a strong surface-structure dependence of the Raman signal that can be exploited for imaging. Further experimentation on irregular and small wires that exhibit only the E2 peak, on other favored modes in GaN, and with other group III/V nitrides is recommended.
The heart of the High Intensity Neutrino Source (HINS) linear accelerator (linac) is a magnetron-type, circular-aperture H− ion source, which is currently being tested at Fermilab. Although this prototype already delivers the beam current and emittance required by the HINS project, an exploration of whether the performance of the source could be improved was undertaken. To this end, the extraction geometry of the source was simulated with SIMION 8.0 and Finite Element Method Magnetics. The effects of changing the angle of the extraction cone (cone angle), the size of the gap between the extraction cone and the source plate (extraction gap), and the aperture of the extraction cone (extraction aperture) were studied. These parameters were chosen because we expected them to have the greatest impact on space charge effects, which are a major source of emittance growth in this ion source. Based on the results of these simulations, four different configurations were ultimately tested in the ion source. The simulations indicated that the final emittance of the source should be significantly decreased by using a geometry with a 45 degree cone angle, a 4 mm extraction gap with an extraction voltage of 25,000 V, and a 3 mm extraction aperture. Subsequent emittance measurements on the ion source have confirmed this result. This new geometry also allows the source to output a higher-current beam with the same duty factor.