Tag Archives: agriculture

Farmers are typically time- and often resource-constrained

While only including tillage treatments with residue incorporation establishes systems with similar residue input levels, it arguably poorly reflects farmers’ predominant practices in mixed crop-livestock farming systems – especially in sub-Saharan Africa and South Asia – in which residues tend to be exported from fields for feed, fuel, housing materials or other purposes. As such, the applicability of meta-analytical results to smallholder farming conditions in either sub-Saharan Africa or South Asia may be questioned. Given the large variation in crop management practices that results from differences in the scale of farming operations, the nature of farm enterprises and cropping patterns in different farming systems, one may therefore ask: does the presentation of average results from ‘global meta-analyses’ in agronomy make sense? Our case studies show the ways in which the practical value of meta-analyses to provide comprehensive evidence on topics of development relevance is undermined by the social construction of treatment categories that may be decoupled from the conditions faced by farmers themselves.

Most meta-analyses reviewed in this study used primary data from small-plot agronomic trials. The problems associated with extrapolating results from small-plot experiments to whole fields, cropping systems and farming systems have, however, been widely acknowledged. These problems also affect meta-analysis. Many farmers manage multiple separate fields – each of which may be environmentally heterogeneous – across landscapes. Farmers may therefore not be able to implement recommended crop management practices across fields and farm units with the same rigour and precision as researchers managing small-plot trials. This casts some doubt on the usefulness of data from small-plot trials. Kravchenko et al., for example, demonstrated that yield results from small-plot OA experiments were not always consistent with field-scale measurements of the same treatments.

Caution is therefore needed when extrapolating results from small-plot research to the field, farming system, landscape and global levels. These problems are most apparent in the OA case study. Badgley et al., for example, extrapolated OA yield responses from plot studies to the global agricultural system, concluding that OA could feed the world’s population with nitrogen requirements supplied in situ by legumes, without expanding the footprint of agriculture. Connor conversely pointed out that soil moisture deficits would likely constrain the productivity of legumes in arid environments. He also noted that rotations with legumes may not be feasible where legumes are less profitable or important than other crops for income generation and food production. Assessing productivity on a yield per unit of time basis, rather than yield alone, may therefore be an appropriate alternative in such comparisons. Leifeld also referenced landscape-scale considerations when contesting data presented by Ponisio et al. He contended that OA is unable to cope with high-fecundity and rapidly dispersing pests, which could result in yield losses more severe than those observed in isolated, small-plot experiments. Leifeld also evoked the ‘Borlaug hypothesis’ argument that low-yielding farming systems may require the conversion of natural ecosystems to meet expanding food demand, thereby negatively affecting biodiversity. Ponisio and Kremen countered with evidence of the positive effects of organic and ecologically managed farmland on pest suppression at the landscape scale. They also highlighted the study of Meyfroidt et al., who showed that higher yields and profitability can also drive agricultural expansion and deforestation under conventional practices.
Considering the complexity of these problems, Brandt et al. proposed that bias could be reduced and science quality increased if researchers using meta-analysis make their research protocols and intended methods publicly available, for example through online posting or journal publication, prior to undertaking the meta-analysis. ‘Pre-registration’ of planned studies may be a logical suggestion, though it implies serious changes in research practice and a re-thinking of how journals accept papers and conduct peer review. This proposition has therefore not yet been widely applied in agronomy or other disciplines.

While there is no easy answer to how to rectify this conundrum, our review presents an important step in challenging underlying assumptions that meta-analysis can provide definitive and unifying conclusions, as proposed by Garg et al., Borenstein et al., Rosenthal and Schisterman, and Fisher.

Agricultural expansion is the main cause of tropical deforestation, highlighting the trade-offs among ecosystem services such as food production, carbon storage, and biodiversity preservation inherent in land cover change. Expansion of intensive agricultural production in southern Amazonia, led by the development of specific crop varieties for tropical climates and international market demand, contributed one third of the growth in Brazil’s soybean output during 1996–2005. The introduction of cropland agriculture in forested regions of Amazonia also changed the nature of deforestation activities; forest clearings for mechanized crop production are larger, on average, than clearings for pasture, and the forest conversion process is often completed in under one year. How this changing deforestation dynamic alters fire use and carbon emissions from deforestation in Amazonia is germane to studies of future land cover change, carbon accounting in tropical ecosystems, and efforts to reduce emissions from tropical deforestation. Fires for land clearing and management in Amazonia are a large anthropogenic source of carbon emissions to the atmosphere. Deforestation fires largely determine net carbon losses, because fuel loads for Amazon deforestation fires can exceed 200 Mg C ha⁻¹. Reductions in forest biomass from selective logging before deforestation are small, averaging under 10 Mg C ha⁻¹. In contrast, typical grass biomass for Cerrado or pasture rarely exceeds 10 Mg C ha⁻¹ and is rapidly recovered during the subsequent wet season.
Yet, the fraction of all fire activity associated with deforestation and the combustion completeness of the deforestation process remain poorly quantified. Satellite fire detections have provided a general indication of spatial and temporal variation in fire activity across Amazonia for several decades. However, specific information regarding fire type or fire size can be difficult to estimate directly from active fire detections because satellites capture a snapshot of fire energy rather than a time-integrated measure of fire activity.

Overlaying active fire detections on land cover maps provides a second approach to classifying fire type. Evaluating fire detections over large regions of homogeneous land cover can be instructive, but geolocation errors and spurious fire detections may complicate these comparisons, especially in regions of active land cover change and high fire activity such as Amazonia. Finally, postfire detection of burn-scarred vegetation is the most data-intensive method to quantify carbon emissions from fires. Two recent approaches to map burn scars with Moderate Resolution Imaging Spectroradiometer (MODIS) data show great promise for identifying large-scale fires, yet neither algorithm is capable of identifying the multiple burning events in the same ground location typical of deforestation activity in Amazonia. Deriving patterns of fire type, duration and intensity of fire use, and combustion completeness directly from satellite fire detections provides an efficient alternative to more data- and labor-intensive methods to estimate carbon emissions from land cover change. We assess the contribution of deforestation to fire activity in Amazonia based on the intensity of fire use during the forest conversion process, measured as the local frequency of MODIS active fire detections. High-confidence fire detections on 2 or more days in the same dry season are possible in areas of active deforestation, where trunks, branches, and other woody fuels can be piled and burned many times. Low-frequency fire detections are typical of fires in Cerrado woodland savannas and for agricultural maintenance, because grass and crop residues are fully consumed by a single fire. The frequency of fires at the same location, or fire persistence, has been used previously to assess Amazon forest fire severity, adjust burned area estimates in tropical forest ecosystems, and scale combustion completeness estimates in a coarse-resolution fire emission model.
We build on these approaches to characterize fire activity at multiple scales. First, we compare the frequency of satellite fire detections over recently deforested areas with that over other land cover types. We then assess regional trends in the contribution of high-frequency fires typical of deforestation activity to the total satellite-based fire detections for Amazonia during 2003–2007. Finally, we compare temporal patterns of fire usage among individual deforested areas with different post-clearing land uses, based on recent work separating pasture and cropland following forest conversion in the Brazilian state of Mato Grosso with vegetation phenology data. The goals of this research are to test whether fire frequency distinguishes between deforestation fires and other fire types, and to characterize fire frequency as a function of post-clearing land use, so as to enable direct interpretation of MODIS active fire data for relevant information on carbon emissions.

We analyzed active fire detections from the MODIS sensors aboard the Terra and Aqua satellite platforms to determine spatial and temporal patterns in satellite fire detections from deforestation in Amazonia during 2003–2007.

Combined, the MODIS sensors provide two daytime and two night-time observations of fire activity. Figure 1 shows the location of the study area and administrative boundaries of the nine countries that contain portions of the Amazon Basin. For data from 2002–2006, the date and center location of each MODIS active fire detection, satellite, time of overpass, 4-µm brightness temperature, and confidence score were extracted from the Collection 4 MODIS Thermal Anomalies/Fire 5-min swath product at 1-km spatial resolution. Beginning in 2007, MODIS products were transitioned to Collection 5 algorithms. Data for January 1–November 1, 2007 were provided by the Fire Information for Resource Management System at the University of Maryland, College Park, based on the Collection 5 processing code. Seasonal differences in fire activity north and south of the equator related to precipitation were captured using different annual calculations: north of the equator, the fire year was July–June; south of the equator, the fire year was January–December. Our analysis considered a high-confidence subset of all MODIS fire detections to reduce the influence of false fire detections over small forest clearings in Amazonia. For daytime fires, only those 1-km fire pixels having >330 K brightness temperature in the 4-µm channel were considered. This threshold is based on recent work identifying true and false MODIS fire detections with coincident high-resolution satellite imagery, comparisons with field data, and evidence of unrealistic MODIS fire detections over small historic forest clearings in Mato Grosso state with >20 days of fire detections per year in 3 or more consecutive years, none of which exceeded 330 K during the day. Daytime fire detections >330 K correspond to a MOD14/MYD14 product confidence score of approximately 80/100. The subset of high-confidence fires includes all night-time fire detections, regardless of brightness temperature.
Differential surface heating between forested and cleared areas during daylight hours that may contribute to false detections should dissipate by the 22:30 or 01:30 hours local time overpasses for Terra and Aqua, respectively. Subsequent references to MODIS fire detections refer only to the high-confidence subset of all 1-km fire pixels described earlier.

The simple method we propose for separating deforestation and agricultural maintenance fires is based on evidence for repeated burning at the same ground locations. The spatial resolution of our analysis is defined by the orbital and sensor specifications of the MODIS sensors and the 1-km resolution bands used for fire detection. The geolocation of MODIS products is highly accurate, and surface location errors are generally <70 m. However, due to the orbital characteristics of the Terra and Aqua satellite platforms, the ground locations of each 1-km pixel are not fixed. We analyzed three static fire sources from gas, mining, and steel production in South America to identify the spatial envelope for MODIS active fire detections referencing the same ground location. Over 98% of the high-confidence 2004 MODIS active fire detections from Terra and Aqua for these static sources were within 1 km of the ground location of these facilities. Therefore, we used this empirically derived search radius to identify repeated burning of forest vegetation during the conversion process. High-frequency fire activity was defined as fire detections on two or more days within a 1-km radius during the same fire year.
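The filtering and frequency rules above can be sketched in code. This is a minimal illustration, not the authors' implementation: the detection record fields (`daytime`, `bt4um`, `day`, `year`, `lat`, `lon`) and the brute-force haversine neighborhood search are assumptions made for the example.

```python
from math import radians, sin, cos, asin, sqrt

def high_confidence(det):
    # Daytime detections must exceed 330 K in the 4-um channel;
    # all night-time detections are retained regardless of temperature.
    return (not det["daytime"]) or det["bt4um"] > 330.0

def km_between(a, b):
    # Haversine great-circle distance in km between (lat, lon) pairs.
    lat1, lon1, lat2, lon2 = map(radians, (a[0], a[1], b[0], b[1]))
    h = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))

def classify_fire_frequency(detections):
    """Keep high-confidence detections; label one 'high' (deforestation-type)
    if fires were detected on two or more distinct days within a 1-km radius
    in the same fire year, else 'low'."""
    dets = [d for d in detections if high_confidence(d)]
    labels = []
    for d in dets:
        days = {e["day"] for e in dets
                if e["year"] == d["year"]
                and km_between((d["lat"], d["lon"]),
                               (e["lat"], e["lon"])) <= 1.0}
        labels.append("high" if len(days) >= 2 else "low")
    return dets, labels
```

For the volumes of data involved in a basin-wide analysis, a spatial index rather than the quadratic pairwise search here would be needed, but the classification logic is the same.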

TCS has been thought to act non-specifically by attacking and destroying bacterial membranes

In this study we have shown that feedbacks are significant in both directions and that money and exchange rate shocks affect prices. Thus, any reduction in government expenditure in agriculture affects the path by which price shocks feed back on money and the exchange rate. From a policy perspective, this is very important, since it implies that any change in government support of the farm sector should be evaluated from an integrated market point of view. This more integrated or global perspective is needed because expenditures and budget deficits, monetary, exchange rate, and farm policies are significantly related and their interactions far too strong to be neglected.

Triclosan (TCS) is a non-agricultural pesticide widely used as an antibacterial agent in common medical, household and personal care products in the range of 0.1%–0.3%. The use of TCS has increased worldwide over the last 30 years. The broad household use of products containing TCS results in the discharge of TCS to municipal wastewater treatment plants, and it has been detected in effluents and sewage sludge in Europe and the United States. The mode of action of TCS on bacteria is inhibition of fatty acid synthesis by targeting enzymes specific to bacteria. Since fatty acid biosynthesis is a fundamental process for cell growth and function, the ability to inhibit it makes TCS a particularly effective antimicrobial compound. Bio-solids are the nutrient-rich byproduct of wastewater treatment operations, and large quantities are generated: for example, approximately 750,000 dry tons are produced annually in California, of which 54% are applied on agricultural lands, 16% are composted and the remaining 30% go to landfills. Concerns about potential health and environmental effects of land application of bio-solids include possible off-site transport of pathogens, heavy metals, and trace organic constituents such as TCS.
A less explored set of potential impacts is how TCS and other bio-solid-borne contaminants affect ecosystem processes and associated soil microbial communities.

Potential impacts on soil microorganisms are important to assess since these organisms mediate much of the nitrogen, carbon and phosphorus dynamics in soil, biodegrade contaminants, create soil structure, decompose organic compounds, and play a major role in soil organic matter formation. We hypothesized that bio-solids containing TCS would have detrimental effects on soil microbial communities by decreasing biomass and altering community composition in agricultural soil. Our objectives were to evaluate the effects of increasing amounts of TCS on soil microbial community composition in the presence and absence of bio-solids. We used phospholipid fatty acid (PLFA) analysis to characterize the response of microbial communities; the method provides information about microbial community composition, biomass, and diversity. Experiments in which TCS was added to soil without bio-solids allowed the relative effects of bio-solid and TCS addition on microbial community composition and function to be compared and also provided a “secondary control,” because TCS-free municipal bio-solids are essentially unavailable in the United States.

Triclosan was purchased from Fluka. Yolo silt loam was collected from the Student Experimental Farm at the University of California, Davis at a depth of 0 to 15 cm. The soil was passed through a 2 mm sieve and stored at 4 °C until use. Bio-solids originated from a municipal wastewater treatment plant in Southern California that employed a conventional activated sludge treatment system followed by aerobic sludge digestion. Bio-solids from this system were selected for study because they had the lowest concentration of TCS among those collected from 10 different wastewater treatment plants in California. The soil and bio-solid physicochemical properties are reported in Table 1 and were determined using standard techniques.
The soils were moistened to 40% water-holding capacity, which is equivalent to 18% water content in our experiments, and pre-incubated for 7 days at 25 °C to allow time for normal microbial activity to recover to a constant level after disturbance. Fifty grams of pre-incubated soil was weighed into 200 ml glass bottles, with three replicates per treatment. For the bio-solid amended soil (SB) samples, 20 mg/g of bio-solids was added. Each treatment sample was then spiked with TCS to achieve final concentrations of 10 or 50 mg/kg using TCS stock solutions prepared in acetone, as recommended by Waller and Kookana.
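The spike arithmetic implied by this procedure is easy to check. A short sketch follows; the 1 mg/mL stock concentration is a hypothetical value chosen for illustration, since the text does not report the actual stock strength:

```python
def spike_volume_ul(soil_g, target_mg_per_kg, stock_mg_per_ml):
    """Volume of TCS stock solution (in uL) needed to spike a soil sample
    to a target concentration (mg TCS per kg soil)."""
    # Mass of TCS needed (mg) = soil mass (kg) x target concentration (mg/kg)
    tcs_mg = (soil_g / 1000.0) * target_mg_per_kg
    # Convert mass to stock volume: mg / (mg/mL) = mL, then mL -> uL
    return tcs_mg / stock_mg_per_ml * 1000.0

# 50 g soil at the two spiking levels, assuming a 1 mg/mL acetone stock
low = spike_volume_ul(50, 10, 1.0)    # 0.5 mg TCS -> 500 uL of stock
high = spike_volume_ul(50, 50, 1.0)   # 2.5 mg TCS -> 2500 uL of stock
```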

These spiking levels were chosen as a conservative upper bound on anticipated soil concentrations in the field: the lower spiking level is below the mean concentration observed in US bio-solids and the higher level is below the 95th percentile for US bio-solids; adding bio-solids to soils at typical application rates would produce soil concentrations ~50–200 times lower. Control samples were prepared with acetone only. The solvent was then allowed to evaporate inside the fume hood before the samples were thoroughly mixed. The microcosms were incubated in the dark at 25 °C for 0, 7 and 30 days. Every week, each vial was opened to help keep conditions aerobic, and the water content of each set of samples was measured and water added as needed to maintain target moisture levels. At each sampling time, the remaining TCS was measured by drying 3–5 g samples at 70 °C for 24 hours and homogenizing with a mortar and pestle. Replicate 1 g subsamples of each dried sample were placed in centrifuge tubes, spiked with deuterated triclocarban (TCC) in methanol, air dried under a fume hood to remove the methanol, and then mixed well. Extraction was performed by adding 15 mL of 1:1 acetone and methanol to the centrifuge tube. Samples were extracted on a shaker table for 24 hours at 295 rpm and 55 °C and then centrifuged for 30 min at 4,100 g. The supernatant was diluted as needed to ensure that the concentration remained within the linear portion of the calibration curve. The extracts were analyzed for TCS using LC-MS/MS. Additional details regarding the extraction and analysis procedures can be found in Ogunyoku & Young. Recoveries of deuterated TCC ranged from 63–115% during extraction and analysis.

As expected, the bio-solids contained far larger amounts of nitrogen and carbon than the Yolo soil.
Even though the bio-solids constituted less than 2% of the amended soil, they contributed nearly 50% of the total nitrogen and 40% of the total carbon in the amended soil system. The bio-solids contained an abundance of nutrients accumulated as by-products of sewage treatment, in forms likely to be more labile than equivalent nutrients present in the soil. As will be discussed further, the greater availability of C and N in the SB than in the soil treatments had a strong influence on some of the results, especially at the early time points. In the following section, therefore, it is useful to remember that all SB treatments contain more available C and N than all soil treatments. The initial concentration of TCS in unspiked SB samples was very low, fell below the quantitation limit for TCS after 7 days, and was not detectable after 30 days of incubation. Significant TCS biodegradation was observed in spiked soil and SB samples during incubation, and the data were well described by a first-order model, as indicated by linear plots of ln(C) against time. Degradation trends were consistent at the two spiking levels for each sample type, but bio-solid addition significantly reduced degradation rates at both spiking levels compared with un-amended samples. The percentage of TCS removed was approximately two times greater in soil than in SB samples: approximately 80% of the TCS was removed over 30 days in soil treated with either 10 mg/kg or 50 mg/kg of TCS, but no more than 30% was transformed in the corresponding SB microcosms.

The reduced biodegradation in the SB microcosms may have resulted from the ~40% higher carbon content in the SB microcosms, which would be expected to increase the soil-water distribution coefficient by a comparable amount. A reduced TCS concentration in soil pore water would be expected to slow biotransformation, potentially in a nonlinear fashion. Another possible contributor to the slower degradation of TCS in SB is the greater availability of alternative, likely more easily degradable, carbon sources in SB than in soil microcosms, reducing the use of TCS as a substrate. Selective biodegradation of one carbon source, and inhibition of the degradation of other chemicals also present, has been observed for mixtures of chemicals in aquifers. To assess which of these mechanisms was controlling, measured Freundlich isotherm parameters for TCS adsorption on bio-solid amended Yolo soil were used to calculate equilibrium pore water concentrations in the soil and SB microcosms over the course of the experiment. Using estimated pore water concentrations of moistened soil and SB samples, instead of total soil concentrations, to perform half-life calculations resulted in modest increases in the rate constants and decreases in the half-lives of soil samples, and did not narrow the significant gap between half-lives in soil and SB. This suggests that the primary reason for the slower degradation of TCS in bio-solid amended soils is the increase in more labile forms of carbon, since organic material is highly porous and has a lower particle density. Previous research shows that TCS biodegrades within weeks to months in aerobic soils; Chenxi et al. found no TCS degradation in bio-solids stored under aerobic or anaerobic conditions, whereas Kinney et al. observed a 40% decrease in TCS concentrations over a 4-month period following an agricultural bio-solids application.

Because the slopes of the lines in Fig. 1 are not significantly different as a function of spiking level, the slopes were averaged for each treatment type, yielding apparent first-order rate constants of 0.093 ± 4% d⁻¹ for soil samples and 0.024 ± 41% d⁻¹ for SB samples, where the percent error represents the relative percent difference between the 10 mg/kg and 50 mg/kg degradation curves. These apparent rate constants translate to half-life estimates of 7.5 d in soils and 29 d in bio-solid amended soil. The estimated half-life of TCS in soil is within the range of previously reported half-lives of 2.5 to 58 d in soil. The half-life determined here in bio-solid amended soils is lower than the one available literature value of 107.4 d. The microbial biomass decreased in the TCS-spiked samples after 7 or 30 days of incubation in comparison with the unspiked controls, for both soil and SB, and the decline was statistically significant at 50 mg/kg. Although exposure to TCS caused declines in biomass in both soil and SB microcosms, the total microbial biomass was two times higher in SB than soil, probably due to the increased availability of nutrients and/or the addition of bio-solid associated microorganisms in the latter. The total number of PLFAs ranged from 42–47 in soil and 48–59 in SB. No significant change in the number of PLFAs was evident with increasing dosage of TCS at any incubation time, suggesting that TCS addition did not adversely affect microbial diversity. Microbes respond to various stresses by modifying cell membranes, for example by transforming the cis double bond of 16:1ω7c to cy17:0, which is more stable and not easily metabolized by the bacteria, reducing the impact of environmental stressors. Consequently, the ratio of cy17:0 to its precursor has been employed as an indicator of microbial stress associated with slow growth of microorganisms.
Increases in this stress biomarker were observed in both soil and SB samples as TCS concentrations increased, suggesting that TCS has a negative effect on the growth of soil microorganisms. The overall ratio of cy17 to its precursor is lower in SB than in soil samples, suggesting that nutrients contributed by the bio-solids reduce stress on the microbial community. Our results agree with a previous study showing that carbon added to soil led to a reduction in the cy17 fatty acid. TCS additions, however, increased the stress marker compared with that detected in the corresponding samples with no added TCS. A broader implication of this result is that the presence of bio-solids may mitigate the toxic effects of chemicals in soil, or of chemicals added in combination with bio-solids, on soil microbial communities. Groupings of microbial communities, based on CCA analysis of their composition as estimated by PLFA, were distinguished primarily by whether they were in soil or SB treatments and secondarily by time since spiking.
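The half-life figures quoted for the degradation kinetics follow directly from the standard first-order relation t½ = ln(2)/k. A quick numerical check, using the apparent rate constants reported in the text:

```python
from math import log

def half_life_days(k_per_day):
    # First-order kinetics: C(t) = C0 * exp(-k * t), so t_1/2 = ln(2) / k
    return log(2) / k_per_day

# Apparent first-order rate constants from the averaged degradation slopes
t_soil = half_life_days(0.093)  # ~7.5 d in unamended soil
t_sb = half_life_days(0.024)    # ~29 d in bio-solid amended soil
```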

Successfully quantifying the ability of media to grow cells forms the backbone of the novelty of this dissertation

The other aspect of private sector involvement is perhaps more mixed in its consequences, compared to individual farmers’ efforts. Indian agriculture has long been heavily influenced by powerful intermediaries, who may combine participation in credit and input, and even output and land, markets to earn economic rents associated with market power, in a phenomenon well-studied as interlinkage. Market intermediaries and other private actors in the agricultural supply chain certainly provide essential products and services for the success of Punjab’s present agricultural system, but it is not clear that their incentives for enabling innovation are aligned with maximizing social welfare, just as, with imperfect competition, static resource allocation may not satisfy that optimality property. Given the foregoing discussion, as well as the issues highlighted in previous sections, it is reasonable to suggest that beneficial innovation in Punjab agriculture will not occur solely through the private sector. At an abstract level, the problems of asymmetric information, externalities, the public-good nature of innovations and imperfect competition in various markets along the agricultural value chain all point towards some public sector involvement in facilitating greater innovation, especially innovation that incorporates crop diversification. It is arguably the case that the state government can make targeted interventions that provide effective nudges towards innovation, as well as the adoption and diffusion of innovations, even in the face of the severe constraints imposed by the state’s own fiscal situation and the conduct of national food procurement policy. Some of the barriers to innovation have to be overcome by relatively large financial investments in physical infrastructure, but the state government can catalyze the private sector to undertake these investments by improving the ease of doing business in the state.
The public sector’s focus can and should be on improving the knowledge available to farmers, finding ways to overcome their switching costs, and providing them with better insurance as they move towards activities that involve greater risk and uncertainty.

Myoblasts, myocytes, and fibroblasts are the cells of greatest interest for the field of cellular agriculture. For texture and taste, adipocytes may be used and grown either separately or co-cultured with muscle cells. The choice of animal will also have an effect on the final product and production process, because cells from different animals have different growth characteristics, morphology, and product qualities.

The majority of these cell lines are adherent, meaning they require a suitable substrate to grow. Ideally, cells would be grown in suspension culture, bringing cellular agriculture in line with typical pharmaceutical practice such as CHO cell culture. Micro-carriers may also be used to increase the total available growth surface area. Proliferating many cells is not the only consideration in cellular agriculture. Stem cells differentiate into more complex tissue structures depending on time and environmental conditions, which is critical in forming final products that consumers are willing to purchase. For example, C2C12 immortalized murine skeletal muscle cells differentiate into myotubes at high density and when exposed to DMEM + 2% horse serum. However, because cell differentiation often precludes further proliferation, cells must be periodically passaged to provide more physical space for growth. This is typically done by detaching the cells from the substrate using the enzyme trypsin and physically placing the cells onto additional surface area. Fundamental techniques in cell culture, and a general overview of mammalian cell culture for bio-production, can be found in the literature. Figure 1.1b shows a high-level overview of the cellular agriculture process. Throughout this entire process, media is used to support cells by providing them with nutrients, signal molecules, and an environment for growth. We are focused on reducing the cost of the media while supporting cell proliferation, because the media has been identified as the largest contributor to cost. The main considerations for the design of cell culture media in cellular agriculture are that the media must be inexpensive, free of animal products, and able to support long-term proliferation of relevant cell lines and final differentiation into relevant products.
The most basic part of a cell culture medium is the basal component, which supplies the amino acids, carbon sources, vitamins, salts, and other fundamental building blocks for cell growth. The optimal pH of cell culture media is around 7.2–7.4, which is achieved through buffering with the sodium bicarbonate–CO2 system or organic buffers like HEPES. Temperature should be maintained at around 37 °C at high humidity to prevent evaporation of media. Osmolarity of around 260–320 mOsm/kg is maintained by the concentration of inorganic salts such as NaCl as well as hormones and other buffers. Inorganic salts also supply potassium, sodium, and calcium to regulate cell membrane potential, which is critical for nutrient transport and signalling.
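As a simple illustration of these target ranges, the sketch below checks a set of culture conditions against them. The ±0.5 °C tolerance band around 37 °C is an assumption for the example, not a value from the text:

```python
def check_culture_conditions(ph, temp_c, osmolarity_mosm_kg):
    """Return a list of warnings for conditions outside the typical
    ranges for mammalian cell culture described above."""
    warnings = []
    if not 7.2 <= ph <= 7.4:
        warnings.append(f"pH {ph} outside 7.2-7.4")
    if not 36.5 <= temp_c <= 37.5:  # assumed tolerance around 37 C
        warnings.append(f"temperature {temp_c} C away from 37 C")
    if not 260 <= osmolarity_mosm_kg <= 320:
        warnings.append(f"osmolarity {osmolarity_mosm_kg} mOsm/kg outside 260-320")
    return warnings
```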

Trace metals such as iron, zinc, copper, and selenium are also found in basal media to support a variety of tasks like enzyme function. Vitamins, particularly B and C, are found in many basal formulations to increase cell growth because they cannot be made by the cells themselves. Nitrogen sources, such as essential and non-essential amino acids, are the building blocks of proteins and so are critical to cell growth and survival. Glutamine in particular can be used to form other amino acids and is critical for cell growth; it is also unstable in water, so it is typically supplemented into media as the L-alanyl-L-glutamine dipeptide. Carbon sources, primarily glucose and pyruvate, are essential as they are linked to metabolism through glycolysis and the pentose-phosphate pathway. Fatty acids like lipoic and linoleic acid act as energy storage, precursor molecules, and structural elements of membranes and are sometimes supplied through a basal medium like Ham’s F12. Having a sufficient concentration of all of these components is required for proliferating mammalian cells across multiple passages, as described above. A robust basal medium is a necessary but not sufficient condition for long-term cell proliferation and differentiation. Serum is a critical aspect of cell culture because it provides a mix of proteins, amino acids, vitamins, minerals, buffers and shear protectors. Serum stimulates proliferation and differentiation, transport, attachment to and spreading across substrates, and detoxification. However, serum has large lot-to-lot variability, carries risks of zoonotic viruses and contamination, and raises ethical issues associated with collecting serum from animals. Therefore, while it often simplifies cell growth and differentiation, it is critical to remove serum from the process. Supplementation with growth factors like FGF2, TGFβ1, TNFα, IGF1, or HGF is a common way to induce growth of mammalian muscle cells without the use of serum.
Transferrin, another protein found in serum, fulfills a transport role, carrying iron across the cell membrane. PDGF and EGF are polypeptide growth factors that initiate cell proliferation. Such components enhance cell growth but are expensive and comprise the vast majority of the cost of theoretical cellular agriculture processes. Much work has been done on developing serum-free media. The E8 / B8 medium for human induced pluripotent stem cells is based on Dulbecco’s Modified Eagle Medium / F12 supplemented with insulin, transferrin, FGF2, TGFβ1, ascorbic acid, and sodium selenite. Beefy-9 is similar to E8 but with additional albumin, optimized for primary bovine satellite cells. The approach we will take in this dissertation is to use prior knowledge of biological processes to construct a list of potential media components, and then use design-of-experiments methods to optimize component concentrations based on cell proliferation. As we will see in the next section, these statistical tools can help develop media quickly and efficiently, which is particularly valuable for cellular agriculture.

One of the most difficult aspects of this work is measuring the quality of media. Viable cells must be counted after a period of time over which the scientist believes the medium will have an effect, which changes depending on cell type, media components, cell density, ECM, pH, temperature, osmolarity, and reactor configuration. If cells grow by adhering to a substrate, then sub-culturing / passaging may play a role in the health of a cell population, so discounting this effect may degrade the quality of media designs. Counting using traditional methods, such as a hemocytometer or more advanced automatic cell counters using trypan blue exclusion, is labor-intensive and prone to error. Cell growth / viability assays are chemical indicators that correlate with viable cell number, such as metabolic activity or DNA / nuclei count, and can also be used to quantify the effect of media on cells. In chapter 5 we conducted many experiments with different assays and show the inter-assay correlations in Figure 1.3. Note that no assay is perfectly correlated with any other assay because they are collected with different methodologies and fundamentally measure different physical phenomena. For example, Alamar Blue measures the metabolic activity of the population of cells, so optimizing a medium based on this metric might end up simply increasing the metabolic activity of the cells rather than their overall number. As some of these measurements can be destructive or toxic to the cells, continuous measurements to collect data on the change in growth can be tedious. Collecting high-quality growth curves over time may be accomplished using image segmentation and automatic counting techniques. Using fluorescently stained cells and images, segmentation can be done using algorithms like those discussed.
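As a minimal illustration of automated counting via segmentation (a sketch, not the specific pipeline used in chapter 5), bright stained nuclei in a fluorescence image can be thresholded and counted with connected-component labeling; the threshold and synthetic image here are illustrative:

```python
import numpy as np
from scipy import ndimage

def count_cells(image, threshold=0.5):
    """Count bright objects (e.g., stained nuclei) by thresholding
    the image and labeling connected components."""
    mask = image > threshold
    _, n_objects = ndimage.label(mask)
    return n_objects

# Synthetic "fluorescence" image with three well-separated nuclei.
img = np.zeros((64, 64))
for cy, cx in [(10, 10), (30, 40), (50, 20)]:
    img[cy - 2:cy + 3, cx - 2:cx + 3] = 1.0

print(count_cells(img))  # 3
```

Real images need denoising and watershed splitting of touching nuclei, but the threshold-label-count skeleton is the same.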
Cells may even be classified based on their morphology dynamically if enough training data is collected to create a generalizable machine learning model. The primary means by which this dissertation will improve cell culture media is through the application of various experimental optimization methods, often called design-of-experiments (DOE). The purpose of a DOE is to determine the best set of conditions x that optimizes some output y by sampling the process over sets of conditions in an optimal manner. If an experiment is time or resource inefficient, then optimizing the conditions of a system may prove tedious. For example, doing experiments at just the lower and upper bounds of a 30-dimensional medium like DMEM requires 2^30 ≈ 10^9 experiments. This calls for methods that can optimize experimental conditions and explore the design space in as few experiments as possible. DOEs where samples are located throughout the design space to maximize their spread and diversity according to some distribution are called space-filling designs. The most popular method is the Latin hypercube, which is particularly useful for initializing training data for models and for sensitivity analysis. Maximin designs, where some minimum distance metric is maximized for a set of experiments, can also provide diversity in samples, with the disadvantage that in high-dimensional systems the designs tend to be pushed to the upper and lower bounds. Thus, we may prefer a Latin hypercube design for culture media optimization because media design spaces may be >30 factors large. Uniform random samples, Sobol sequences, and maximum entropy space-filling designs, all with varying degrees of ease of implementation and space-filling properties, may also be used.
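A Latin hypercube design of the kind described can be generated with SciPy's quasi-Monte Carlo module: each factor's range is divided into n strata and each stratum is sampled exactly once. The run count and concentration bounds below are illustrative assumptions, not values from this work:

```python
import numpy as np
from scipy.stats import qmc

n_runs, n_factors = 16, 30                 # 16 formulations, 30 components
sampler = qmc.LatinHypercube(d=n_factors, seed=0)
unit_sample = sampler.random(n=n_runs)     # points in [0, 1)^30

# Scale to hypothetical concentration bounds (e.g., 0-100 mg/L each).
lower = np.zeros(n_factors)
upper = np.full(n_factors, 100.0)
design = qmc.scale(unit_sample, lower, upper)

# Latin hypercube property: each column hits all 16 strata exactly once.
strata = np.sort(np.floor(unit_sample * n_runs), axis=0)
assert (strata == np.arange(n_runs)[:, None]).all()
```

Each row of `design` is one candidate medium to prepare and assay; the stratification guarantees every component's range is covered even with few runs.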
It cannot be known a priori how many sampling points are needed to successfully model and optimize a design space, because this depends on the number of components in the media system, the degree of non-linearity, and the amount of noise expected in the response. Because of these limitations, DOE methods that sequentially sample the design space have gained traction, and are discussed in the next section. A more data-efficient DOE is to split individual designs into sequences and use old experiments to inform the new experiments in a campaign. One sequential approach is to use derivative-free optimizers (DFOs), where only function evaluations y are used to sample new designs x. DFOs are popular because they are easy to implement and understand, as they do not require gradients. They are also useful for global optimization problems because they usually have mechanisms to explore the design space and avoid getting stuck in local optima. The genetic algorithm (GA) is a common DFO in which selection and mutation operators are used to find more fit combinations of genes. In Figure 1.7, note that the GA was able to locate the optimal region of both problems regardless of the degree of multi-modality.
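A toy genetic algorithm of the kind described, with truncation selection, uniform crossover, Gaussian mutation, and elitism, minimizing a simple quadratic test function; this is a minimal sketch with assumed hyperparameters, not the GA configuration used later in the dissertation:

```python
import numpy as np

def genetic_algorithm(f, dim, bounds=(-5.0, 5.0), pop=40, gens=60, seed=0):
    """Minimize f over a box via selection, crossover, and mutation."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(pop, dim))
    for _ in range(gens):
        fit = np.apply_along_axis(f, 1, x)
        order = np.argsort(fit)
        parents = x[order[: pop // 2]]              # truncation selection
        moms = parents[rng.integers(0, len(parents), pop)]
        dads = parents[rng.integers(0, len(parents), pop)]
        mask = rng.random((pop, dim)) < 0.5         # uniform crossover
        kids = np.where(mask, moms, dads)
        kids += rng.normal(0.0, 0.3, kids.shape)    # Gaussian mutation
        kids = np.clip(kids, lo, hi)
        kids[0] = x[order[0]]                       # elitism: keep the best
        x = kids
    fit = np.apply_along_axis(f, 1, x)
    return x[np.argmin(fit)], float(fit.min())

sphere = lambda v: float(np.sum(v**2))              # optimum at the origin
best_x, best_f = genetic_algorithm(sphere, dim=5)
```

Elitism makes the best-so-far fitness monotonically non-increasing, which is why DFOs like this reliably improve even without gradient information.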

Rk is inversely related to volume fraction, which showed a non-significant decrease in photoaged samples.

Although we did not directly compare skin equivalents without adipose to AVHSEs here, or directly compare culture time points, we have not observed any obvious changes in epidermal coverage compared to our previous work in vascularized human skin equivalents that do not contain a subcutaneous adipose compartment. While the model is customizable to study the effects of intrinsic and extrinsic aging factors, as a test case we have demonstrated suitability for studies in UVA photoaging due to the strong literature base of both in vitro and in vivo studies available for comparison. Finally, we demonstrated the accessibility of the model for both molecular and morphological studies. A key aspect of any HSE model is a differentiated and stratified epidermis. Here, N/TERT-1 keratinocytes were used to generate skin epidermis as previously described. Importantly, N/TERTs are a suitable and robust substitute for primary keratinocytes, which have disadvantages including limited supply, limited in vitro passage capability, and donor variability. HSEs generated with N/TERT keratinocytes demonstrate comparable tissue morphology, appropriate epidermal protein expression, and similar stratum corneum permeability when compared to HSEs generated with primary keratinocytes. Similar to prior models, we demonstrate that AVHSEs appropriately model the skin epidermis, with correct localization of involucrin and cytokeratin, and nuclei localized in the lower stratified layers. Further, volumetric imaging and automated analysis allow epidermal thickness to be robustly calculated. AVHSEs present with median epidermal thicknesses within 90–100 µm, similar to values in both prior in vitro studies (100–200 µm) and in vivo optical coherence tomography imaging of adult skin (59±6.4 to 77.5±10 µm).
Consistent with prior in vitro and in vivo results showing that UVA wavelengths predominantly impact dermal rather than epidermal layers, UVA photoaging resulted in no observable changes in epidermal thickness or expression of differentiation markers in AVHSEs. In the dermis and hypodermis, skin is highly vascularized, with cutaneous microcirculation playing important roles in thermal regulation and immune function.

Many prior HSE models have not included a vascular component; however, there is increasing recognition of its importance. In the present work, we used collagen IV as a marker of the vascular basement membrane, enabling the automated segmentation and mapping of a vascular network within AVHSEs. The vascular VF of AVHSEs is lower than in vivo dermis, but prior work has shown this is tunable by using different cell seeding conditions. Optimizing the VF may be more involved in the AVHSE, since the ratio of adipose and vascular cells has been shown to be important in regulating tissue morphology; thus, the ratio of adipose and vascular cells would need to be optimized again for new cell and collagen densities. Adipose tissue is densely vascularized, and the ability of adipocytes to generate lipid droplets and adipokines in the presence of endothelial cells is important for replicating the in vivo environment. Previous work has shown that co-culture of endothelial cells and mature adipocytes can lead to dedifferentiation of mature adipocytes, but in homeostatic cultures EC–adipocyte crosstalk is important. Through soluble factor release, ECs regulate lipolysis and lipogenesis, and adipocytes regulate vasodilation and contraction. Secretion of adipokines by adipocytes aids vascular formation and adipose tissue stability. In prior work, Hammel & Bellas demonstrated that 1:1 is the optimal ratio for vessel network formation within 3D adipose, and we matched the 1:1 cell ratio in the present work. Quantification of vessel diameter in the Hammel & Bellas study shows that a 1:1 ratio of adipocytes to endothelial cells gives an average vessel diameter of ~10 µm; our work supports this finding with a median inner vessel diameter of ~6 µm. Importantly, these data are within the range of human cutaneous microvasculature of the papillary dermis. We did not observe morphological changes of VF and diameter within the vasculature due to photoaging.
This is not entirely unexpected, as UVA exposure and its effects on vasculature are still poorly understood. While it is established that chronic UVA exposure can contribute to vascular breakdown, the duration of our studies may be too short to see this effect in diameter and VF. However, photoaging did induce an increase in diffusion length. Rk is a measure of the 90th percentile of distance from the vascular network, so a higher value corresponds to less coverage; the values presented here match previous studies of vascularized collagen.

Rk of the vascular network for both control and photoaged samples was within the range of 51–128 µm, which is importantly below the 200 µm diffusion limit. Upon photoaging, AVHSEs did demonstrate a significant increase in Rk compared to controls. In vascularized tissue, a high VF and low Rk are preferable, and the Rk increase demonstrated here indicates a loss of vascular coverage in photoaged AVHSEs. These findings conflict with studies of acute UV exposure in skin, which show stimulation of angiogenesis. It has been proposed that UV light exposure may improve psoriasis by normalizing disrupted capillary loops through upregulation of VEGF by keratinocytes. The AVHSE model could be used to more thoroughly test the effects of UV light and the molecular mechanisms it induces in future studies. The vascular networks extend from the adipose to the epidermal–dermal junction, consistent with previous literature and with normal human skin histology/stereography. Further, we observed vasculature colocalized with the lipid droplet BODIPY staining, indicating recruitment of the vascular cells to the hypodermis. Importantly, the vascular networks in prior studies and the present AVHSE are self-assembled. While there are advantages to self-assembly, especially the simplicity of the method, it is important to note the limitations. Cutaneous microcirculation in vivo has a particular anatomical arrangement with two horizontal plexus planes, one deep in the tissue in the subcutaneous fat region and one just under the dermal–epidermal junction. Between these two planes are connecting vessels running along the apicobasal axis that both supply dermal tissues with nutrients and are an important part of thermoregulation. Although the AVHSEs presented here are fully vascularized up to the epidermal junction, they do not recapitulate this organization.
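A metric like Rk, defined above as the 90th percentile of distance from the vascular network, can be sketched from a binary vessel mask with a Euclidean distance transform; this is an illustrative simplification, not the analysis pipeline used in this work:

```python
import numpy as np
from scipy import ndimage

def rk_metric(vessel_mask, voxel_size_um=1.0, percentile=90):
    """Rk-style metric: the 90th-percentile distance of voxels from
    the nearest vessel voxel (higher Rk = poorer vascular coverage)."""
    # Distance of every non-vessel voxel to the nearest vessel voxel.
    dist = ndimage.distance_transform_edt(~vessel_mask) * voxel_size_um
    return float(np.percentile(dist, percentile))

# Toy 2D mask: a single vessel running along the left edge.
mask = np.zeros((10, 10), dtype=bool)
mask[:, 0] = True
print(rk_metric(mask))  # 8.1 (distances 0..9 across columns, 90th pct)
```

The same computation extends directly to 3D segmented image stacks by passing an anisotropic `sampling` argument to the distance transform.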
While not covered in this work, future studies could incorporate layers of patterned or semi-patterned vasculature to more closely match the dermal organization, depending on the needs of the researcher. In contrast to the epidermal and some vascular components, photoaging impacted the hypodermis. Volumetric imaging of BODIPY, which stains lipid droplets, was used to identify the adipose. While small reductions in the morphological parameters were observed, they were not significant, suggesting there was not large-scale necrosis or loss of fat mass. However, there was a significant decrease in the intensity of BODIPY staining, indicating decreased lipid levels. This is consistent with photoaging of excised human skin showing that UV exposure decreases lipid synthesis in subcutaneous fat tissue. We further collected culture supernatant and tested for the presence of adiponectin, IL-6, and MMP-1. The data collected through ELISA show that this AVHSE model secretes both adiponectin and IL-6, which are also present in native skin and are both considered important adipokines. Elevated serum adiponectin levels are linked to anti-inflammatory effects in humans, and centenarians have elevated levels of adiponectin. Decreased adiponectin has previously been associated with photoaging both in excised human skin that was sun-exposed compared to protected skin, and in protected skin that was exposed to acute UV irradiation. Conversely, IL-6 is a key factor in acute inflammation in skin, and has been shown to regulate subcutaneous fat function. In prior studies of photoaging, IL-6 has demonstrated an increase after UVA irradiation in monolayer fibroblast cultures and excised human skin. IL-6 is released after UV irradiation and has been linked to decreased expression of adipokine receptors and of mRNA associated with lipid synthesis, decreases in lipid droplet accumulation, and enhanced biosynthesis of MMP-1.
However, after one week of photoaging we did not observe an increase in IL-6 or MMP-1 via ELISA. The absence of changes in IL-6 and MMP-1 expression alongside decreases in lipid accumulation and adiponectin was not an expected result, but it could be due to methodological differences in UVA exposure. We determined our UVA dose and exposure based on literature values. The dose used here was 0.45 ± 0.15 mW/cm2, with exposure for 2 hours daily for 7 days, which converts to roughly 3.24 J/cm2 per day and a total of 22.68 J/cm2.
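The dose conversion above can be checked directly: irradiance in mW/cm² multiplied by exposure time in seconds gives mJ/cm², which divides by 1000 to give J/cm²:

```python
irradiance_mw_cm2 = 0.45   # nominal UVA irradiance (mW/cm^2)
hours_per_day = 2
days = 7

# mW/cm^2 x seconds = mJ/cm^2; divide by 1000 for J/cm^2.
daily_dose_j = irradiance_mw_cm2 * hours_per_day * 3600 / 1000
total_dose_j = daily_dose_j * days
print(daily_dose_j, total_dose_j)  # 3.24 J/cm2 per day, 22.68 J/cm2 total
```

The ±0.15 mW/cm² uncertainty propagates linearly, so the total dose spans roughly 15.1 to 30.2 J/cm² over the week.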

Many studies do not report exposure time and/or present ambiguous time points. This, compounded with the practice of using doses based on sample pigmentation threshold and the broad definition of UVA wavelengths, is likely contributing to the discrepancy in IL-6 and MMP-1 expression. Previous work has shown that neutralizing anti-IL-6 antibody prevents the UV-induced decrease of important fat-associated mRNA, and that IL-6 secreted from keratinocytes and fibroblasts following UV irradiation inhibits lipid synthesis. From previous work, it is clear that IL-6 secretion is upregulated by UVA and that its presence contributes to impaired adipose function, but more investigation is necessary to understand which UVA doses and exposures induce IL-6, and further, at what time points after photoaging these expressions are quantifiable. In this model, it is possible that there were increases in IL-6 that contributed to adiponectin decreases in photoaged samples; these trends may have been captured with different media collection time points. Alternatively, other analyses of inflammatory responses and adipokines may show the generalized inflammatory responses identified in the literature, and further, changes in dose/exposure or continued photoaging may mimic the previously shown effects. There are notable limitations of the AVHSE model presented. Although we have presented a skin model that is closer to both the anatomy and biology of human skin in comparison to past HSEs, we have not modeled skin fully through inclusion of other features of in vivo skin such as immune and nerve components. Including a functional immune system is important for understanding autoimmune diseases, cancer, wound healing, and the decline of immune function in aged skin. Additionally, neuronal cell inclusion will allow modeling of sensory processes necessary for grafting and modeling of skin disorders associated with nerve dysregulation.
Further, while the cell lines used in this study were chosen for their low cost and accessibility, primary cells or populations differentiated from induced pluripotent stem cells would more closely match the physiology in vivo. While changing cell populations would likely require some adjustment to the culture system, we have previously demonstrated that cell types can be replaced with minimal changes. We model epidermis, dermis, and hypodermis here, but we do not model the depth that is present in thick skin tissue; to mimic thicker skin the model would need to be taller. As nutrient and waste diffusion in tissues is limited to ~200 µm, thick tissues will likely require perfusion to be maintained throughout culture. Vasculature in thicker skin has larger diameters, especially in the lower dermis and hypodermis, where vessels can be up to 50 µm. Finally, for ease of use, the initial collagen density in the AVHSE model is 3 mg/mL, much lower than in vivo densities. Decline of collagen density is an important aspect of skin aging, correlating with skin elasticity and wound healing. Varying collagen density influences vascular self-assembly, but higher collagen densities are possible through a variety of techniques, including dense collagen extraction and compression of the collagen culture. By incorporating these tools, AVHSE could be modified to more closely represent the in vivo dermal matrix. Further, the AVHSE method was demonstrated with low serum requirements; serum was used only for initial growth, and the cultures are maintained for weeks without serum. Serum replacements during the growth phase could potentially provide a chemically defined, xeno-free culture condition in the beginning culture stages for greater reproducibility and biocompatibility. The presented AVHSE model provides unique capabilities compared to cell culture, ex vivo, and animal models.
Excised human skin appropriately models penetration of dermatological products, but supply is limited and donor variability is high. Replacing excised human skin with animal models or commercially available skin equivalents is not ideal because of differences in penetration rates, lipid composition, lipid content, morphological appearance, healing rates, and costs, as well as limitations on customization. AVHSEs can be cultured using routinely available cell populations, are cost effective, and are customizable for specific research questions. Further, the model is accessible for live imaging, volumetric imaging, and molecular studies, enabling a wide range of quantitative studies.

Pre-processing included development of PET estimates from the downscaled air temperature

We report three analyses: trends in the time slice characterizing the baseline period; the calibration and validation of basin discharge, performed by combining post-processed runoff and recharge measures to derive discharge and comparing that value to streamgage measurements; and a comparison of historical and future conditions for the BCM variables—precipitation, potential evapotranspiration, runoff, recharge, and climatic water deficit. We present the map-based assessments using the difference in magnitude for each variable; the number of standard deviations by which projected future conditions will differ from the standard deviation of baseline conditions; and the geographic variations across California of both historical and future projections. Temperature values are available, but for brevity, and because temperature has previously been more widely reported, this paper focuses on hydrological components. The process used to estimate hydrologic impacts of climate change at fine scales involved downscaling climate data for model input. The BCM then generated outputs as a series of hydrologic and associated variables. This section discusses precipitation, air temperature, PET, snow pack, runoff, recharge, and climatic water deficit. During the 30-year baseline period of 1971–2000, precipitation generally increased, with the exception of the deserts and eastern Sierra Nevada. The largest percentage increases are in the Great Valley, Central Western California, and Sierra Nevada. Both minimum and maximum air temperatures increased for all ecoregions, ranging from 0.5°C to 1.6°C for minimum air temperature, with much less of an increase for maximum air temperature. Potential evapotranspiration increased throughout the state by about 3 percent. Recharge decreased by up to 24 percent in southwestern California, and by 11 percent in northwestern California, while all other ecoregions increased in recharge.
Recharge in the Mojave Desert increased by 51 percent, and in the Modoc Plateau by 42 percent. The change in climate over the 30-year period is exemplified by the changes in snow pack in California, which integrates the effects of precipitation and air temperature on the dominant water resource in California for water supply.

The snow pack in this region is the warmest in the western United States and is the most sensitive to small changes in air temperature. This is illustrated by the change in April 1 snow pack, where snow pack has diminished the most in extent in the northern portions of the state, whereas the highest-elevation snow pack in the southern Sierra Nevada mountains and on Mount Shasta has actually increased in some locations. However, the dominant loss of April 1 snow pack results in less runoff to extend surface water resources throughout the summer season. This situation has implications for recharge and climatic water deficit as well. Corresponding to increases in precipitation, runoff increased over the baseline period in most locations in the state, notably the northern Sierra Nevada Mountains and parts of the Trinity Mountains in the northwestern ecoregion. Some declines are noted in the northwest, where the smallest change in precipitation occurred. Decreases in recharge are notable in the northwest portions of the state, with moderate decreases in the Sierra Nevada foothills and southern California mountains. Generally, locations with little to no recharge, such as areas with deep soils or arid climates, also had little to no change in recharge. Detailed views of basins in the Russian River watershed and Santa Cruz mountains are shown in Flint and Flint, illustrating the dominance of runoff in the Russian River watershed, where water supply relies heavily on reservoirs, in contrast to the reliance on groundwater resources and recharge in the Santa Cruz mountains. Increases in runoff in snow-dominated regions, due to warming air temperatures, diminish recharge, which is more likely to occur during the slow snowmelt season.
This is confirmed for the northwestern ecoregion, where the Trinity Alps decreased in snow pack, with small increases shown for the Sierra Nevada, in contrast to other regions. Figure 9a shows the average annual climatic water deficit for 1971–2000. There is high climatic water deficit in the southern Central Valley and the Mojave and Sonoran Deserts, and low climatic water deficit in the north coast and Sierra Nevada. Climatic water deficit declined over the baseline period in the central and northwestern California ecoregions and the Great Valley, while in all other regions, despite the increases in precipitation, climatic water deficit increased. This variable integrates energy loading and moisture availability from precipitation with soil water holding capacity. The distribution of moisture conditions that generally define the amount of water in the soil that can be maintained for plant use throughout the growing season and summer dry season corresponds very well to the established distribution of vegetation types. However, in many locations, shallow soils limit the contribution of precipitation. The lowest climatic water deficits in California are in regions with snow pack that, as it melts in the springtime, provides a longer duration of available water, thus maintaining a lower annual climatic water deficit, even despite shallow soils.

Locations in the south with higher PET have higher climatic water deficits. Precipitation has increased in most locations, but has declined in the desert and eastern Sierra Nevada. Air temperature and PET have increased in all ecoregions. This translates into increases in climatic water deficit in nearly all locations, particularly those dominated by snow pack, such as the Sierra Nevada ecoregion and the Trinity Mountains in the northwestern California ecoregion. The recorded increases in air temperature, particularly minimum air temperature, result in earlier snowmelt and reduce the ability of the snow pack to sustain the water available throughout the summer season. The deserts all increased in deficit with declining precipitation and increasing air temperature. However, there are some small areas in the Great Valley ecoregion that experienced small decreases in deficit because of the ability of the deep soils to store the additional precipitation rather than shedding it as recharge or runoff. Some moderating effects of coastal climatic conditions are seen in small valleys along the coast with decreases in deficit. In the analysis of the impacts of historic-to-future climate on hydrology, we characterized the changes in precipitation, PET, runoff, recharge, and climatic water deficit from the BCM for watersheds and for ecoregions, and compared changes in variables from historical to baseline periods and from the baseline period to the end of the twenty-first century. Three types of map analyses were applied to this comparison: assessment of the difference in magnitude for each variable; the number of standard deviations of baseline conditions by which historic and projected future conditions differ; and a geographic review of the variations in hydrologic conditions across California for both historical and future time periods. A summary of variables by modified Jepson ecoregion and for the HUC 12 watersheds averaged over the extent of California was calculated.
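The standard-deviation comparison described above amounts to a per-cell z-score of the future 30-year mean against baseline interannual variability. A sketch, with synthetic arrays standing in for the BCM rasters (shapes and values are illustrative):

```python
import numpy as np

def sd_departure(baseline, future):
    """Per-cell number of baseline standard deviations by which the
    future mean departs from the baseline mean.

    baseline, future: arrays of shape (n_years, ny, nx)."""
    base_mean = baseline.mean(axis=0)
    base_sd = baseline.std(axis=0, ddof=1)   # interannual variability
    return (future.mean(axis=0) - base_mean) / base_sd

# Synthetic annual precipitation stacks: 30 years on a 4 x 5 grid.
rng = np.random.default_rng(1)
base = rng.normal(500.0, 100.0, size=(30, 4, 5))
fut = base + 50.0                            # uniform +50 mm shift
z = sd_departure(base, fut)                  # one z-score per grid cell
```

A cell with |z| < 0.5 lies well within baseline year-to-year variability, which is the interpretation applied to the precipitation trend maps below.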
Overall, mean precipitation increased by 80 millimeters between 1911–1940 and 1971–2000. Under the PCM scenarios, precipitation continued to increase to 2070–2099, but it decreased under the GFDL scenario. Potential evapotranspiration increased 10 mm from the historic to the baseline time frame, and increased under all future time frames by between 51 and 104 mm. Runoff increased historically by 36 mm. It increased under future PCM projections by 51 to 77 mm, but decreased under GFDL projections by 38 to 42 mm.

Finally, climatic water deficit decreased by 16 mm from the historic to the baseline time; however, it increased under all projections by between 40 and 174 mm, indicating increases in PET and decreases in available soil moisture resulting in lower actual evapotranspiration. While most of northern California got wetter from the historic to the baseline time, only the northeast, an eastern area representing the high Sierra Nevada and Inyo/White mountains, and a few scattered watersheds saw an increase that was even one-half a standard deviation from the baseline SD for the 30-year mean, a pattern that is mostly repeated when looking at the statistically significant trends. This suggests that the trend in increased moisture is well within the baseline year-to-year variability of precipitation. The same is true for the southern half of the region, which mostly shows a drying trend. As expected, given the GCMs selected, the PCM future scenarios forecast increased precipitation, and GFDL forecasts a drier future. However, compared to baseline precipitation variability and statistically significant change, only the desert ecoregions receive more than 0.5 SD more precipitation under PCM, while under GFDL A2, the northern half of California loses precipitation mostly between 0.5 and 0.9 SD. The calculation of PET using the Priestley-Taylor equation assumes that PET is a function of, and is non-linearly related to, air temperature. The application of PET in the BCM assumes that plants are in equilibrium with their environment and will transpire at maximum rates until the soil reaches the wilting point. Potential evapotranspiration increased from the historical to the baseline time period in most of California, with the exception of a few places in the Sierra Nevada, where it decreased by between 0.5 and > 2 SD of baseline PET values, with similar patterns in the significance values.
The extreme change in these locations is due to cooling air temperature; because PET is already low in these locations, and because of the non-linear relation between PET and air temperature, the change is greater than if the PET were initially high. Potential evapotranspiration is projected to increase under all scenarios and for all ecoregions, and shows one of the strongest spatial patterns of all the variables, with nearly the entire region increasing by at least 1 SD (statistically significant) under the PCM projections, and by > 2 SD under the GFDL projections. Annual runoff values increased slightly in California between 1911–1940 and 1971–2000, a change driven by increases throughout the northwest ecoregion and in the northern Sierra Nevada. Looking at this difference relative to the standard deviation during the baseline time period, none of the watersheds had runoff increase by more than one standard deviation, but a few in the desert ecoregions decreased by more than one. This is because the annual runoff in these watersheds was less than 3 mm in 1911–1940 and less than 1 mm in 1971–2000. Comparing the baseline conditions to future scenarios, the PCM model shows an increase in runoff for all ecoregions except the Modoc Plateau, especially in the Sierra Nevada and the coast ranges, while the GFDL model shows an almost inverse pattern of drying. Because of the very low runoff values in the baseline time period, the incremental increases in the desert regions of the study show future runoff to be above 1 SD under the PCM model. For the GFDL model, parts of the Sierra Nevada and the northeast region of the state show decreases in runoff above 0.5 SD of baseline.
Note that statistically significant change differs from the SD view under the future scenarios, particularly in the desert systems, where much of the change, while high in terms of standard deviations, is not significant at the 0.05 level. Annual recharge values increased throughout the mountains and coast of northern California between 1911–1940 and 1971–2000, similar to runoff in distribution but at a lower magnitude. Declines in recharge in the southern parts of the state and the Central Valley are of a similar magnitude. The difference between the time periods relative to the standard deviation during the baseline time period indicated very small changes outside the normal variability. The differences between recharge and runoff are more pronounced in the changes between baseline and the future scenarios. This difference is exemplified by a very important characteristic that results from warming, regardless of the direction of change in precipitation in future projections: the alteration of seasonality, with a shorter wet season and longer dry season. For the wet scenarios, there are slight increases in recharge in the Central Western and Great Valley ecoregions, and the Cascade and Sierra Nevada, but in contrast to runoff there are declines in recharge in the Sierra foothills and the northwestern part of the state. Because of the compression of the wet season with warming, in addition to the earlier onset of springtime snowmelt, there is less time with conditions conducive to recharge.
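The non-linear dependence of Priestley-Taylor PET on air temperature noted above can be illustrated with a standard textbook formulation; the coefficients here (alpha = 1.26, psychrometric constant 0.066 kPa/°C, latent heat 2.45 MJ/kg) are the usual default values, not necessarily those calibrated in the BCM:

```python
import math

def priestley_taylor_pet(t_air_c, net_rad_mj, alpha=1.26,
                         gamma=0.066, lam=2.45):
    """Daily PET (mm) from air temperature (C) and net radiation
    (MJ m-2 day-1) via the Priestley-Taylor equation."""
    # Saturation vapour pressure (kPa) and the slope of its curve
    # with temperature (kPa/C), the source of the non-linearity.
    es = 0.6108 * math.exp(17.27 * t_air_c / (t_air_c + 237.3))
    delta = 4098.0 * es / (t_air_c + 237.3) ** 2
    return alpha * delta / (delta + gamma) * net_rad_mj / lam

# PET rises monotonically and non-linearly with temperature
# at fixed net radiation.
for t in (5.0, 15.0, 25.0):
    print(t, round(priestley_taylor_pet(t, 15.0), 2))
```

Because the delta/(delta + gamma) term changes fastest at low temperatures, equal temperature shifts produce unequal PET responses across a landscape, consistent with the spatial contrasts described above.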

The calculation of excess water provides the water that is available for watershed hydrology

Modeled PET for the southwest United States has been calibrated to measured PET from California Irrigation Management Information System and Arizona Meteorological Network stations. Using PET and gridded precipitation, maximum and minimum air temperature, and the approach of the National Weather Service Snow‐17 model, snow is accumulated, sublimated, and melted to produce available water. These driving forces for the water balance have been calibrated regionally to solar radiation and PET data, and snow cover estimates have been compared to Moderate Resolution Imaging Spectroradiometer snow cover maps. The final calibrations of snowmelt and runoff illustrate the goodness of fit, as will be shown in the results.

Available water occupies the soil profile, where it will become actual evapotranspiration, runoff, or recharge, depending on the permeability of the underlying bedrock. Total soil‐water storage is calculated as porosity multiplied by soil depth. Field capacity is the soil water volume below which drainage is negligible, and wilting point is the soil water volume below which actual evapotranspiration does not occur. Once available water is calculated, it may exceed total soil storage and become runoff, or it may be less than total soil storage but greater than field capacity and become recharge. Anything less than field capacity will be calculated as actual evapotranspiration at the rate of PET for that month until the soil water reaches wilting point. When soil water is less than total soil storage and greater than field capacity, the soil water in excess of field capacity becomes recharge. If recharge is greater than bedrock permeability (K), then recharge = K and the excess becomes runoff; otherwise the profile recharges at K until it reaches field capacity.
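The monthly soil-water accounting described above can be sketched as a simple bucket model. This is an illustrative simplification, not the BCM source code: the function name, the single-month treatment of drainage, and the ordering of the steps are assumptions, and all quantities are water depths (e.g. mm).

```python
def monthly_water_balance(available_water, soil_water, total_storage,
                          field_capacity, wilting_point, pet, k_bedrock):
    """One monthly step of a bucket-style soil-water balance (a sketch).

    Returns updated soil water plus runoff, recharge, and actual
    evapotranspiration (AET) for the month.
    """
    runoff = 0.0
    recharge = 0.0

    # Add this month's available water (rain plus snowmelt) to the profile.
    soil_water += available_water

    # Water beyond total soil storage cannot be held and becomes runoff.
    if soil_water > total_storage:
        runoff += soil_water - total_storage
        soil_water = total_storage

    # Water between field capacity and total storage drains as recharge,
    # limited by bedrock permeability K; drainage beyond K joins runoff.
    if soil_water > field_capacity:
        drainage = soil_water - field_capacity
        recharge = min(drainage, k_bedrock)
        runoff += drainage - recharge
        soil_water = field_capacity

    # Remaining soil water evaporates and transpires at the PET rate
    # until the wilting point is reached.
    aet = min(pet, max(soil_water - wilting_point, 0.0))
    soil_water -= aet

    return soil_water, runoff, recharge, aet
```

For example, with 300 mm total storage, 200 mm field capacity, a 50 mm wilting point, K = 30 mm/month, and 150 mm of available water arriving on 180 mm of stored soil water, the step yields 100 mm of runoff, 30 mm of recharge, and AET capped at the 40 mm PET.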

Runoff and recharge combine to calculate basin discharge, and actual evapotranspiration is subtracted from PET to calculate climatic water deficit. The BCM can be used to identify locations and climatic conditions that generate excess water by quantifying the amount of water available either as runoff generated throughout a basin or as in‐place recharge. Because of the grid‐based, simplified nature of the model, with no routing of runoff to downstream cells, long time series for very large areas can be simulated easily. However, if local unimpaired stream flow is available, estimated recharge and runoff for each grid cell can be used to calculate basin discharge that can be extrapolated through time for varying climates. In addition, the application of the model across landscapes allows for grid‐based comparisons between different areas. Because of the modular and mechanistic approach used by the BCM, it is flexible with respect to incorporating new input data or updating algorithms should better calculations be derived. A flow chart indicating all input files necessary to operate the BCM, and the output files resulting from the simulations, is shown in Appendix A. After running the BCM, the 14 climate and hydrologic variables were produced in raster format for every month of every year modeled. To evaluate hydrologic response to climate for all basins in hydrologic California, we used the BCM to calculate hydrologic conditions across the landscape for 1971–2000 and to project them for the two GCMs and two emission scenarios for 2001–2100. Trends in climate, hydrologic derivatives of runoff and recharge, and climatic water deficit are separately analyzed for both historical‐to‐baseline and baseline‐to‐future time periods. Although recharge and runoff were calculated for every grid cell and summarized as totals for basins, the estimate of basin discharge as a time series requires a further calculation of stream flow.
Calculation of stream flow uses a series of equations that can be calibrated with coefficients from existing streamgage data and that then permit estimation of basin discharge for time periods when there are no stream flow measurements. We calculated basin discharge for each of 138 basins for which we also obtained streamgage data, and used the 138 streamgage datasets for calibration and validation.

The regional BCM developed for the southwest United States was applied to California following regional calibrations for solar radiation, PET, snow cover, and groundwater. The California calibration is based on study areas with ongoing studies that were designed to provide runoff and recharge for historic, baseline, and future climatic conditions. Generally, the watersheds used as calibration basins were identified on the basis of a lack of impairments, such as urbanization, agriculture, reservoirs, or diversions, although this was not always possible.

We used 68 basins for which bedrock permeability was iteratively changed to optimize the match between calculated basin discharge and measured stream flow. Calibration basins represent 9 of the 14 dominant geologic types in California and have been calibrated to bedrock permeability on the basis of mapped geology for California. The BCM performs no routing of stream flow; routing is done as post‐processing to produce total basin discharge for any basin outlet or pour point of interest, such as streamgages or reservoirs. The 68 calibration basins were calibrated to optimize the match between BCM‐derived discharge and stream flow by iteratively adjusting the bedrock permeability corresponding to the geologic types located within each basin to alter the proportion of excess water that becomes recharge or runoff. This part of the calibration process is followed by accounting for stream channel gains and losses to calculate basin discharge, optimizing the fit between total measured volume and simulated volume for the period of record for each gage, and maintaining a mass balance among stream flow and BCM recharge and runoff.

For comparison to the calibration basins, and to evaluate model performance in representing the state, additional validation basins were identified for the calculation of discharge on the basis of a general lack of impairments, as well as statewide coverage of landscapes and geology.
Hydrologic results for these basins were developed on the basis of the calibration to bedrock permeability performed using the calibration basins. The calibration and validation basins are distributed across the range of elevation, aridity, and bedrock permeability in comparison to all basins in California, and we also show the relationship between them for the same three environmental conditions. Study basins generally cover the range of elevations for the state.
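The iterative adjustment of bedrock permeability described above can be illustrated with a toy calibration loop. This is a hypothetical sketch rather than the authors' actual procedure: `simulate_discharge` stands in for a full BCM run, and the proportional volume-matching update is an assumption.

```python
def calibrate_permeability(measured, simulate_discharge,
                           k_init=10.0, tol=0.01, max_iter=50):
    """Iteratively adjust bedrock permeability K until the simulated
    basin-discharge volume matches the measured volume at the gage.

    `measured` is a discharge series; `simulate_discharge(k)` returns
    the simulated series for permeability k (a stand-in for the BCM).
    """
    k = k_init
    for _ in range(max_iter):
        simulated = simulate_discharge(k)
        # Ratio of measured to simulated total volume over the record.
        ratio = sum(measured) / sum(simulated)
        if abs(ratio - 1.0) < tol:
            break
        k *= ratio  # proportional update toward the measured volume
    return k
```

With a toy linear response such as `simulate_discharge = lambda k: [2.0 * k]` and a measured volume of 40, the loop converges to K = 20; the real calibration additionally accounts for stream channel gains and losses, as described above.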

Bedrock permeability as a representation of geology is dominated by lower‐permeability basins because very high permeability basins, such as those with alluvial valley fill, do not generate stream flow. The range of climates in the state, represented by the UNESCO Arid Zone Research program aridity categories, is covered less well by the study basins and neglects the hyper‐arid and arid locations due to a lack of stream flow data. The representation of study basins within the ecoregions of the state also reflects the lack of streamgage data in the desert areas, as well as on the eastern side of the Sierra Nevada and in the deep soils of the Central Valley, where any gaged streams are very impaired.

Calibration statistics are shown in Appendix C and spatially in Figure 6, with the linear regression r2 for monthly and yearly comparisons of measured and simulated basin discharge, and the Nash‐Sutcliffe efficiency statistic (NSS), calculated as 1 minus the ratio of the mean square error to the variance. The NSS is widely used to evaluate the performance of hydrologic models, generally being sensitive to differences in the observed and simulated means and variances, but it is overly sensitive to extreme values, similarly to r2. The NSS ranges from negative infinity to 1, with higher values indicating better agreement. Average calibration statistics for all basins are NSS = 0.65, monthly r2 = 0.70, and yearly r2 = 0.86. In our study, calibration basins have a mean NSS of 0.71, with the higher values for the Russian River basin, just north of the San Francisco Bay Area, and lower values for the Santa Cruz basins, just south of the Bay Area, where there are many urban impacts. There are several cases where urbanization and agriculture were identified as factors resulting in the inability to calculate a mass balance. The measured stream flow at Aptos Creek at Aptos had very high peaks that were not reproduced by the BCM.
This basin is dominated by urbanization, suggesting that the high peak flows were a result of urban landscapes enhancing runoff, both during precipitation events, where there is reduced infiltration, and during the summer, when urban runoff is enhanced; neither effect is taken into account in the BCM. In order to match measured volumes and stream flow patterns, the runoff was reduced by 80 percent and the recharge by 50 percent. An example of diversions and groundwater pumping for public use can be seen in the difference between the Merced River at Happy Isles, upstream of Yosemite Village, and the Merced River at Pohono, downstream of Yosemite Village, where the percentage of runoff is reduced to 45 percent to match measured flows.
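As a concrete reference, the Nash‐Sutcliffe efficiency used in the calibration statistics above (1 minus the ratio of mean square error to observed variance) can be computed in a few lines. This is a generic implementation, not code from the study.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: NSS = 1 - MSE / variance of observations.

    Ranges from negative infinity to 1; 1 indicates perfect agreement,
    and 0 means the model does no better than the observed mean.
    """
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    mse = np.mean((obs - sim) ** 2)
    variance = np.mean((obs - obs.mean()) ** 2)
    return 1.0 - mse / variance
```

A simulation that merely predicts the observed mean scores 0, which is why values such as the study's mean NSS of 0.71 indicate substantial explanatory skill.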

The basin discharge for the validation basins, which were not used for calibration, was developed using the adjusted bedrock permeability values developed during calibration. The mean NSS for these basins is 0.61, with the upper Klamath and small basins in the Modoc Plateau volcanics performing the poorest. This is likely due to the large groundwater reservoir in the volcanics, which has very long travel times from precipitation input to outflow in streams. An example of a calibration in the volcanics for the Sprague River basin illustrates the large base flow component with a high base flow exponent. The Sprague River basin also has a large agricultural component and return flows, so the attempt to maintain a match in volumes results in an overestimate of the peak flows. The presence of a groundwater reservoir also shows in the differences between the r2 values for the monthly and yearly values, which identify lags in the monthly calibration between measured and simulated discharge that are negated when calculated yearly. There is a large difference for the Kings River above the North Fork near Trimmer, for example, indicating the potential for a lag in groundwater flows becoming base flows that appear at the base of the basin and are not accounted for in a monthly model, whereas the yearly r2 is very high. The basins in the volcanics consistently show a larger range in the two r2 values, which is also illustrated in the Sprague River near Beatty, Oregon, calibration by the mismatch in the timing of the peaks.

For California, we produced 270 m grids to represent historic and future climates from 1900 to 2100, resulting in 6,594,862 grid cells statewide, and a map for each of the 14 variables for each month. For the historic data and four future scenarios, this produced over 11 terabytes of data. We then created water year summaries of the 14 variables. The water year starts in October and ends in September.
For the two temperature variables we averaged the temperature over the water year, and for the other 12 variables we summed the data over the 12 months. Since retaining yearly values for this region results in unwieldy, large files, we reduced the data size for distribution and analysis to 30‐year summaries, providing monthly average values for variables historically for 1911–1940, 1941–1970, and 1971–2000. Future climate values are based on 100‐year simulations, with 2010–2039, 2040–2069, and 2070–2099 time slices produced. We also developed summaries for 10‐year periods based on time slices starting with 1911–1920 and running through 2090–2099. Appendix D has a list of all available variables, file size, format, and acronym. We wrote a program to summarize the 30‐year datasets by various statistical measures to create a manageable dataset for analysis of long‐term trends. We calculated these statistics for both annual and monthly average values. Statistics were developed for each 30‐year time period by applying a linear regression model to the input rasters, which produced the seven statistics for each variable for each 30‐year time period. The linear regression model used equations from Zar. Change over the historical baseline period 1971–2000 was described as the slope of the regression model multiplied by 30 years. We characterized the variables calculated by the BCM for watersheds and for ecoregions, and compared historical summaries and patterns to future projections.
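The per-period change statistic described above (regression slope multiplied by 30 years) might be computed as follows. The function name is illustrative, and an ordinary least-squares fit stands in for the regression equations from Zar.

```python
import numpy as np

def change_over_period(years, values, period_years=30):
    """Fit a linear trend to annual values and report the change over
    the period as slope * period_years."""
    slope, _intercept = np.polyfit(years, values, 1)
    return slope * period_years

# Example: a variable rising 0.1 units/year over the 1971-2000 baseline
# should show a 30-year change of about 3.0 units.
years = np.arange(1971, 2001)
change = change_over_period(years, 0.1 * years - 190.0)
```

Applied per grid cell or per basin, this yields one change value per variable per 30‐year slice, matching the summaries described above.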

New Chinatown still exists as a tourist attraction and remains a center of local Chinese American life

Representations of Chinatown defined the cultural possibilities of citizenship for Chinese Americans in the same way the law defined the possibilities of legal citizenship. During the Chinese Exclusion Act era, there remained real political and material stakes to the way Chinatown was popularly portrayed. For at least half a century, media elites and leaders in Los Angeles had portrayed Old Chinatown as a site of tong violence, illicit drug use, and prostitution. These stereotypes of Chinatown were rooted not just in ideas of race, but also in perceived differences of gender and sexuality. Images of vice and corruption were a direct result of popular representations that depicted Chinatown as a community of bachelors living together in an all-male social world. The few women in the community were usually portrayed as prostitutes. Thus, Chinatown was popularly linked with a deviant form of sexuality that challenged the normative ideas of the white middle-class family united in Christian marriage.3 Furthermore, many white residents of Los Angeles believed that the built environment of Chinatown contributed to this vice. Stories of an underground network of lairs and secret tunnels facilitated the idea that Chinatown lay outside the vision and control of white authorities. New Chinatown in Los Angeles built on prior efforts by the Chinese American merchant class throughout North America to redefine the place of Chinatown in the popular imagination. Beginning with the Chinese Village at the 1893 World’s Columbian Exposition, and continuing through the reconstruction of San Francisco’s Chinatown following the 1906 earthquake and fire, Chinese American merchants challenged notions of Chinatowns as disease-ridden slums and refashioned them into spaces of commerce that catered to white tourists.
4 During this time period, Chinese American merchants served as cultural brokers, whose position between white tourists and the vast majority of working-class Chinese Americans allowed them to consciously transform these segregated ethnic communities into sites that presented their own vision of Asia to the outside world. This was done in a way that challenged notions of Chinatowns as manifestations of Yellow Peril while monetizing these sites in a way that allowed Chinese American entrepreneurs to make a living.

In New Chinatown, local Chinese American merchants took concepts pioneered in San Francisco’s Chinatown and in world’s fair expositions and saw them through to their logical end. In fact, New Chinatown was not a neighborhood at all but a corporation, the stock of which was privately held by a select group in the city’s emerging Chinese American middle class.5 These merchants and restaurant owners maintained complete control over their new Chinatown. From the land on which the business district was built, to the architectural style that accompanied the area’s businesses, to the advertisements that publicized the district in the city’s papers, New Chinatown reflected the desires of its owners both to attract tourists and to challenge the conceptions that had come to dominate Old Chinatown. The opening day festivities of New Chinatown featured appearances by local Chinese American actors who had made a name for themselves in the China-themed films of the 1930s.6 Following the Japanese invasion of Manchuria in 1931, Hollywood began producing a series of Chinese-themed films, many of which featured Chinese American performers from the Los Angeles area. The most high-profile of these films was MGM’s The Good Earth, a film based on Pearl S. Buck’s award-winning 1931 novel. Present at the opening of New Chinatown were Keye Luke and Soo Yung, Chinese American actors with supporting roles in The Good Earth. Also present was Anna May Wong, the most recognizable Chinese American star of the period. Despite being passed over for a role in The Good Earth, Wong had already appeared in a number of high-profile films including The Thief of Bagdad, Piccadilly, and Shanghai Express. New Chinatown would soon feature a willow tree dedicated to Ms. Wong. To complete the Hollywood connection, the New Chinatown opening featured an art exhibit by Tyrus Wong, a Hollywood animator who would later work on the classic animated film Bambi.
Despite these connections to Hollywood, in many ways New Chinatown attempted to cast itself as the modern Chinese American alternative to the representation of China seen in films like The Good Earth. The opening gala included flags of both the Republic of China and the United States spread around the district.

The parade featured four hundred members of the Federation of Chinese Clubs, local Chinese American youth, most of whom were American-born, who had banded together to raise financial support for China following the outbreak of the Sino-Japanese War in 1937.7 At the same time, a number of prominent state and local officials participated in the festivities, including Governor Merriam, who was then locked in a difficult reelection campaign and who hoped that his participation would solidify the small but not insignificant Chinese American vote. In these complex and hybrid ways, the founders positioned New Chinatown as a distinctly Chinese American business district, one that reflected the increasingly U.S.-born demographics of the nation’s Chinese American community. New Chinatown was not the only Chinatown to open in Los Angeles in the summer of 1938. Two weekends earlier, less than a mile away, a group of white business leaders headed by philanthropist Christine Sterling had opened their own competing Chinatown, which they dubbed China City.8 If New Chinatown was defined by the ethos of the American-born generation, China City was defined by Hollywood. This was to be a Chinatown that embodied the images that film audiences saw when they entered the theaters to watch the Chinese- and Chinatown-themed films so popular in the 1930s. New Chinatown may have drawn on Hollywood actors to publicize its existence, but China City in many senses was a Hollywood production. Like New Chinatown, this was a business district, not a neighborhood; but unlike New Chinatown, China City adhered much more closely to the Orientalist images of China produced by Hollywood cinema. In China City, visitors could attend the Bamboo Theater, featuring continuously running films about China. They could walk through a recreation of the set for the House of Wang from The Good Earth. Many of the Chinese Americans employed in China City had also worked as extras on the MGM film.

And so tourists might encounter some of the very people they had seen in the background shots of the film. In China City, tourists could pay to be drawn around by rickshaw. According to the Los Angeles Times, visitors to China City could purchase “coolie hats, fans, idols, miniature temples, and images.”9 One of the shops was owned by Tom Gubbins, a local resident of Chinatown who supplied Hollywood with costumes and props for Chinese-themed films and connected local residents with jobs as extras. In both New Chinatown and China City, Chinese Americans utilized Chinatown to mediate dominant ideas about race, gender, and nation.10 These two Chinatowns were more than physical sites for members of an ethnic enclave to make a living. They also represented the apparatus through which the local Chinese American community performed its own cultural representations of China and Chinese people to crowds of largely white visitors. In more ways than one, Chinese American performances in these two districts were the culmination of a fifty-year process through which the Chinese American merchant class challenged Yellow Peril stereotypes by transforming China and Chinese culture into a nonthreatening commodity that could be sold to white tourists. Examining a period of national debate over immigration and U.S. citizenship, this dissertation, “Performing Chinatown: Hollywood Cinema, Tourism, and the Making of a Los Angeles Community, 1882-1943,” foregrounds the social, economic, and political contexts through which representations of Chinatown in Los Angeles were produced and consumed. Across five chapters, the dissertation asks: To what extent did popular representations and economic opportunities in Hollywood inform life in Los Angeles Chinatown? How did Chinese Americans in Los Angeles create, negotiate, and critically engage representations of Chinatown?
And in what ways were the rights of citizenship and national belonging related to popular representations of Chinatown? To answer these questions, the project examines four different “Chinatowns” in Los Angeles—Old Chinatown, New Chinatown, the MGM set for The Good Earth, and China City—between the passage of the Chinese Exclusion Act in 1882 and its repeal in 1943 during the Second World War. The relationship between film and Chinatown stretches back to the 1890s, to a moment when both featured as “urban amusements” for a newly developing white urban public audience in places like New York, and yet the connection between Chinatown and film reached its zenith in Los Angeles in the 1930s during the height of the Hollywood studio system.

San Francisco and New York Chinatowns may have been larger in size and attracted more tourists, but Los Angeles Chinatown and the Chinese American residents of the city played a more influential role in defining Hollywood representations of China and Chinese people than any other community in the United States. Long before the outbreak of World War II, the residents of Los Angeles Chinatown developed a distinct relationship with the American film industry, one that was not replicated anywhere else during this period. Despite this distinct relationship, there have been no dissertations or academic books published about Los Angeles Chinatown and its relationship to Hollywood cinema. Asian American historians who work on Los Angeles have for the most part focused on the city’s Japanese American population.11 Sociologists of the region have focused on Asian Americans in the ethnoburbs of the San Gabriel Valley.12 Film studies scholars who examine Asian American representations have focused primarily on the films themselves or else on writing biographies of a few well-known Hollywood performers such as Anna May Wong, Philip Ahn, and Sessue Hayakawa.13 With professional academics focused on different but related topics, nearly all of the research that has been done on the history of Chinese Americans in Los Angeles and their relationship to Hollywood film has been completed by community historians at organizations like the Chinese Historical Society of Southern California and the Chinese American Museum of Los Angeles.14 Most of these community historians are volunteers who research and write because of their passion for the subject matter. Many also have family ties to this history. This familial link is the case with the most popular retelling of this history, Lisa See’s novel Shanghai Girls. Lisa See is a descendant of the Chinese Americans who lived in Los Angeles before World War II.
15 In contrast, professional academics have all but ignored this history. What accounts for the relative absence of scholarship on the relationship between the Chinese American community of Los Angeles and the Hollywood film industry? Certainly, the topic of Chinatown remains one of the most thoroughly studied aspects of the Asian American experience. Alongside scholarship examining the political and legal apparatuses used to exclude Asian people from the US, Chinatown is one of the few topics in Asian American studies that elicited significant scholarly consideration before the birth of the field in the late 1960s.16 More than a dozen monographs have been produced examining various aspects of Chinatowns from the fields of sociology and history. In the popular realm, interest in Chinatown as a site of tourism and as a cultural representation also remains strong. In addition to the long-standing interest in Chinatown as an academic topic, the material traces of this history remain highly visible. Films like Shanghai Express, Lost Horizon, and The Good Earth, which all employed Chinese American background performers, are available for home viewing. Photographs from Chinatown performances of this period, including those of the Mei Wah Drum Corps, have been digitized and are available online through archives such as those of the Los Angeles Public Library and their Shades of L.A. project. And yet, the distinct theoretical, methodological, and disciplinary tenets of sociology, social history, and film studies have limited the types of questions scholars have asked about Chinatown and film, and by extension the types of conclusions these scholars have drawn.

Diabetic treatment under the new initiative had objectives similar to those of the asthma component

Utilizing the collaborative technique enabled the primary care practice teams to make many changes in the way they cared for patients with chronic illness. It was concluded that the evidence suggested that improvements in patient outcomes resulted from this intervention. Subsequent to the late 1990s, more evidence in support of the model appeared. Due to the general popularity of the model, in 2001 ICIC’s three-year Targeted Research Grants Program provided funding for peer-reviewed, applied research that focused on addressing critical questions about the organization and delivery of chronic illness care within health systems. Nineteen projects were selected, providing grants totaling approximately $6 million backed by the Robert Wood Johnson Foundation. The research included evaluations of interventions such as group visits or care managers, observational studies of effective practices, and the development of new measures of chronic care. The settings for these studies were primarily community or private health care. Identifying the types of organizations that fare better at improving outcomes for particular disease states continues to be a question for the literature. The not-for-profit and private sectors continue to embrace the CCM, and organizations like the ICIC continue to devote resources to its development and its ability to improve patient health outcomes. In 2001, the Institute of Medicine published what is now considered a seminal report in the field: Crossing the Quality Chasm: A New Health System for the 21st Century. In the report, the Institute of Medicine outlines six goals for the transformation of health care in the United States. The report specifically references the work of ICIC and calls upon lawmakers at the federal level to make chronic disease care quality improvement a priority issue.
Following suit, the National Committee for Quality Assurance and the Joint Commission, two nationally recognized not-for-profit entities that set standards for care in the United States, developed accreditation and certification programs for chronic disease management based on the CCM.

At the same time, both the Joint Commission and the National Committee for Quality Assurance have released additional accreditations for the patient-centered medical home. These new certifications build on those proposed by the CCM and advance the work of these pioneers. The Joint Commission’s Primary Care Medical Home certification looks at organizations that provide primary care medical services and bases its certification on elements that enable coordination of care and increase patient self-management. This is a model of care based directly on the foundational work provided by the CCM. Additionally, the CCM currently serves as a foundation for new models of primary care asserted by the American College of Physicians and the American Academy of Family Physicians. In 2003, the ICIC program administrators convened a small panel of chronic care expert advisors and updated the CCM to reflect advances in the field of chronic care from both the research literature and the experiences of health care systems that had implemented the model in their quality improvement activities. These programs were phased in during early June 2009. The asthma component sought to improve asthma care and had the additional objective of improving asthma outcomes. The objectives of the diabetes component of the program differed from the asthma module in that the program did not focus on the reduction of diabetes-related deaths. Practice reviews did not identify diabetics as having an abnormally high mortality rate; however, improvements were sought in the numbers of hospitalizations and specialist treatment visits. While both chronic care conditions were intriguing areas of study for the program’s implementation, this paper focuses on the diabetic portion of the implementation because the earlier phase of asthmatic treatment did not result in sufficient data to enable proper analysis.
During the preparatory stage of the Chronic Care Initiative, a not-for-profit consulting organization with correctional health care and learning collaborative experience was selected to assist the California Prison Health Care Services project team.

A statewide system assessment was conducted between January and April of 2008. Given the small window of opportunity under the federal receivership to accomplish the turnaround plan of action’s objectives, a very aggressive work plan and timeline were developed. To develop the work plan and identify potential problem areas, the team first established a list of limiting factors relevant to the operational environment. It was believed that in developing this list, the institutionalized nature of the organization and its key players could be catalogued. The factors could be utilized to address areas in which proactive focus and intervention efforts would be required in order to enable successful change on the part of long-tenured civil servants. The long-tenured employees were not capable of seeing all the flaws of their own routinized behavior because they had known no other ways. The theory under which the team operated was adopted from the above and related research on organizational change. Fernandez and Rainey discuss managing change once the change plan has been implemented and tasks are underway. To be innovative, the CCI team sought ways to stay ahead of the change curve and thus looked to capture variables of interest related to places where proposed change could get stuck by administrators unable to see how their usual behaviors and actions prevented successful change management. As a result, the plan that was developed included tasks specific to the implementation of the chronic care model in the health care setting. The team, in its proactive approach to implementation, identified aspects of organizational behavior that were important to track on the management side and designed methods to track and trend this behavior. Once tracked and trended, these data were used to develop interventions for managers to motivate their behavior in ways the team felt would enable the long-term success and sustainability of the changes at hand.
Further, the catalog of behaviors and environmental factors known to be likely to have deleterious effects on the proposed changes was used to redevelop the private-sector chronic care model itself.

Revisions to the private-sector version of the chronic care model were necessary to fit the model to a custodial setting. With health care needs subordinate to those of security, the program architects found it necessary to modify and enhance elements of the model. The first and perhaps least profound change was to the name of the program—to “Chronic Disease Management Program”—to avoid the perception that the inmate population would receive levels of care higher than those enjoyed by the community at large, since the program actually aimed to reduce the cost of care while maintaining the clinical efficacy of delivery and treatment. Although a purely political move, it set the stage for the alterations required in the rest of the model. Following discussions concerning the program’s name, each of the model’s standard elements was analyzed and repackaged to fit the correctional environment. Given the lack of learning collaborative and quality improvement information in the correctional health care literature, an innovative two-phase approach to implementation was developed. Phase 1 focused on piloting the learning collaborative strategy, developing a diabetes change package modified for a correctional environment, and establishing the pilot sites to test the model. Phase 2 had the objectives of statewide implementation of the approach tested and approved in the pilot, while additionally moving on to the next chronic condition for the initial six pilot sites. After identifying the pilot sites, the initiative began with intensive, multidisciplinary work sessions. Subsequent work sessions used an enhanced learning collaborative strategy. Collaborative sessions were planned quarterly for the first year, with teams from different sites attending four two-day learning sessions separated by action periods. An intensive skills-based course on quality improvement was embedded in the learning sessions.
Additionally, virtual learning workshops were inserted between the learning sessions to enable each collaborative to build workforce competencies in quality improvement technical skills. At the end of the learning sessions, pilot site teams folded into three regional learning collaboratives involving all 33 prisons to commence Phase 2 activities. The pilot-site champions served as presenters or mentors to the new sites during Phase 2, in a “train-the-trainer” approach: an initial round of training was conducted, and those trained during the first round were then deployed to train the rest of the staff. Figure 3 shows the culturally embedded barriers to implementing the CCM, as determined by the team. These obstacles are described in greater detail in the following section. They represent the aspects of the model that, due to their private-sector origins, would not fit into the custodial setting without modification. The re-adaptation of the model to fit the public sector, and more specifically the custodial environment within a public agency, was designed over several months, and its output was the subject of lively debate. The price of implementation failure would have been greater than the sum of the program’s investment of time and resources. Because many of the receiver-level clinical managers were brought into the receivership organization as employees of a new entity, results were expected. Because those expectations were high, the preparation for program implementation was carefully planned. It was understood that the receiver’s efforts were focused on remolding institutionalized patterns of action.

Initial efforts began with breaking down the six CCM elements into digestible tasks and deliverables within a project plan. A discussion then ensued concerning the parts of the CCM that would not fit into the existing organization due to cultural barriers. Part of the debate mentioned earlier included discussion among administrative staff with extensive CDCR experience, which provided insight about the barriers to a successful implementation in the custodial setting. A successful adoption of the CCM depends on visible support at all levels of the health care organization, starting with the senior managers. The federal receivership was established to provide the highest level of executive leadership support. The fiscal constraints of the state of California during the period when the program was implemented precluded the full adoption of the CCM in the prison health care system. Clinical management that would otherwise have been dedicated to the coordinated care team was reduced. To increase managers’ visibility in relation to this program, attention was placed on coordination-of-care activities. This occurred at all levels, with headquarters-based administrative staff taking the lead in establishing the importance of the program by providing in-service trainings as well as on-site follow-up support. In support of the learning collaboratives, clinical administrators and supervisory staff were brought to headquarters facilities to participate in interactive sessions. It was felt that the overall change in organizational behavior would occur once staff worked in collaboration to define new processes. To create visible leaders, managers were given a role in shaping CCM implementation in a manner that was personally meaningful to them and would thus empower them.
As the prison health care system is a single-payer, closed health care system, the potential to adopt evidence-based quality improvement strategies and practice guidelines is somewhat greater than would be the case in other health delivery settings. Because staff in a closed system are internal to the organization, establishing guidelines for these staff is an enabling factor for the full adoption of CCM policies, with accountability for adherence to the model and results. The extent to which continuous, internally based labor learns and buys into the new policies and procedures determines the extent to which the new methods can be sustained. In open health-delivery systems, clinical staff members are treated more as vendors than as internal staff. Because vendor relationships are managed differently than internal staff are, adherence to internal policies and procedures is more difficult to achieve. Some prisons institutionalized the use of temporary staff due to the relative ease with which these labor resources can be procured. Though temporary personnel cost was typically one and a half to two times the expense of a full-time employee, given the remote location of some facilities, temporary staffing was preferred. This practice became institutionalized; as a supervisor declared during interviews, “it was just the thing to do, because who has time to recruit and interview when using [temporary staff] was what everyone did.” She went on to note that “we certainly planned our staffing needs and secured the positions but look where we are . . . doctors can go [to the institution literally next door] and earn almost 25 percent more.”

The connections between equilibrium outcomes in these various scenarios are established

In contrast, when the intensity of the contest among sites is beyond a certain threshold, competition tends to rapidly increase the fragmentation of published news: as the number of competing publishers increases, more and more a priori unlikely topics are reported, resulting in a large diversity of published topics. This result is reminiscent of the emergence of “funny lists” and “heartwarming videos” in the news mentioned by the Financial Times. Our analysis extends to pure and mixed strategies and distinguishes between cases with small or large numbers of competing publishers. Next, in a model with firm asymmetries, we find that when some firms have better technology to forecast the popularity of topics, then, surprisingly, the overall diversity of news published by the remaining firms declines, as these firms tend to take refuge in publishing ‘safer’ topics. When a subset of firms earn extra revenue from loyal users for a published ‘hit’, these ‘branded’ publishers tend to be conservative in their choice of topics, as their loyal customer base represents ‘insurance’ against the contest. In contrast, the diversity of news published by unbranded outlets increases, as unbranded publishers tend to avoid branded ones by putting more weight on a priori unlikely stories. These results are consistent with anecdotal evidence in the news industry, where traditional news outlets are more conservative in their reporting whereas new entrants do not shy away from controversial stories. The findings also conform to the increase in the diversity of the public agenda broadly observed by communication theorists. In a final analysis, we consider endogenous success probabilities. It is widely accepted that the media often ‘makes the news’ in the sense that a topic may become relevant simply because it got published. Interestingly, such a dynamic has an ambiguous effect on the diversity of published topics.
If the contest is very strong, it results in a concentrated set of a priori likely topics. When the contest is moderate, the diversity of topics may be higher, depending on the number of competing outlets.
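The fragmentation result described above can be illustrated with a small numerical sketch. Everything in it is our own toy construction, not the article's model: the topic priors, the iterated best-response procedure, and the assumption that a successful topic's reward is split evenly among all firms that published it are illustrative choices only.

```python
def equilibrium_topics(priors, n_firms):
    """Iterated best response in a toy publishing contest: each firm
    picks one topic; a topic's expected payoff is its prior success
    probability divided by the number of firms publishing it (full
    prize sharing). Returns the sorted set of topics used in the end."""
    priors = sorted(priors, reverse=True)
    choices = [0] * n_firms  # everyone starts on the most likely topic
    for _ in range(100):  # iterate until no firm wants to deviate
        stable = True
        for i in range(n_firms):
            counts = {}
            for j, c in enumerate(choices):
                if j != i:
                    counts[c] = counts.get(c, 0) + 1
            # expected payoff of topic t: prior / (others on t + me)
            best = max(range(len(priors)),
                       key=lambda t: priors[t] / (counts.get(t, 0) + 1))
            if best != choices[i]:
                choices[i] = best
                stable = False
        if stable:
            break
    return sorted(set(choices))

priors = [0.4, 0.3, 0.15, 0.1, 0.05]
print(equilibrium_topics(priors, 2))   # few firms: only the likeliest topics
print(equilibrium_topics(priors, 10))  # many firms: unlikely topics get covered
```

With two firms, only the two likeliest topics are published; with ten, best responses push publishers onto a priori less likely topics, mirroring the fragmentation effect described in the text.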

The article is organized as follows. In the next section, we summarize the relevant literature. This is followed by the description of the basic model and its analysis, where we first present a variety of results concerning symmetric competitors. Next, we extend the model to explore the impact of asymmetries across firms. Our last extension considers the case of endogenous success probabilities. The article ends with a discussion of the results and their applicability to other contexts. To facilitate reading, all proofs are relegated to the Appendix. The topic of this article is generally related to the literature on agenda setting, which studies the role of media in focusing the public on certain topics instead of others. It is broadly believed that agenda setting has a greater influence on the public than published opinion, whose explicit purpose is to influence the readers’ perspective. As the famous saying by Bernard Cohen goes: “The media may not be successful in telling people what to think but they are stunningly successful in telling their audiences what to think about”. The literature examines the mechanisms that lead to the emergence of topics and the diversity of topics across media outlets. In particular, McCombs and Zhu show that the general diversity of topics as well as their volatility has been steadily increasing over time. The general focus of our article is similar: we show that the nature of competition is an important mechanism affecting the diversity of the public agenda. Agenda setting is also addressed in the literature studying the political economy of mass media. The standard theory states that media coverage is higher for topics that are of interest to larger groups, with larger advertising potential, and when the topic is journalistically more “newsworthy” and cheaper to distribute.
Although there is little empirical evidence to support some of these hypotheses, the others are generally supported, by Snyder and Stromberg among others. One hypothesis is particularly interesting from our standpoint. Eisensee and Stromberg show that the demand for topics can vary substantially over time. For example, sensational topics of general interest may crowd out other ‘important’ topics that would otherwise be covered. This supports the general notion that media need to constantly forecast the likely success of topics and select among them accordingly.

Our main interest is different from this literature’s, as we primarily focus on media competition as opposed to what causes variations in demand. Taking the demand as given, our goal is to understand how the competitive forces between media firms influence the selection and diversity of topics, which then has a major impact on the public agenda. As such, the article also relates to the literature on media competition where strategic behavior influences product variety. Early theoretical work by Steiner and Beebe on the “genre” selection of broadcasters explains cases of insufficient variety provision in an oligopoly. Interestingly, they show that although certain situations lead to the duplication of popular genres, other scenarios may lead to a “lowest common denominator” outcome where no consumer’s first choice of genre is ever served. A good discussion of these models and their extensions can be found in Anderson and Waldfogel. Our work is different from this literature in two important ways: we do not have consumer heterogeneity, and we do not rely on barriers to entry to explain limited variety. In fact, we study variety precisely when these factors’ importance is greatly diminished. On the empirical side, research on competition primarily focuses on how media concentration affects the diversity of news, both in terms of the issues discussed in the media and the diversity of opinion on a particular issue. For example, George and Oberholzer-Gee show that in local broadcast news, “issue diversity” grows with increased competition even though political diversity tends to decrease. Franceschelli studies the impact of the Internet on news coverage, in particular the recent decrease in the lead time for catching up with missed breaking news. He argues that missing the breaking news has less impact, as the news outlet can catch up with rivals in less time.
This might lead to a free-riding effect among media outlets, with less incentive to identify the breaking news. Both of these articles report empirical findings consistent with our results and assumptions. In terms of the analytical model, we rely on the literature studying competitive contests among forecasters. For example, Ottaviani and Sorensen use a similar framework to model competition among financial analysts. Our model is different in that we explore in more detail the structure of the state space, generalize the contest model by considering all possible prize-sharing structures, and extend it in a variety of ways, most notably by analyzing asymmetries across players. This article studies competition among news providers who compete in a contest to publish on a relatively small number of topics from a large set when these topics’ prior success probabilities differ and when their success may be correlated.

We show that the competitive dynamic generated by a strong enough contest causes firms to publish ‘isolated’ topics with relatively small prior success probabilities. The stronger the competition, the more diverse the published news is likely to be. Applied to the context of today’s news markets, characterized by increased competition between firms, new entrants, and reduced customer loyalty, we expect a more diverse set of topics covered by the news industry. Although direct evidence is scarce, there seems to be strong empirical support for the general notion that the public agenda has become more diverse over time while also exhibiting more volatility, as McCombs shows. This general finding is consistent with our results. Although diversity of news may generally be considered a good thing, agenda setting, i.e. focusing the public on a few worthy topics, may be impaired by increased competition. In a next step, we explore differences across news providers and find that branded outlets with a loyal customer base are likely to be conservative in their choice of reporting, in the sense that they report news that is a priori agreed to be important. Facing new competitors with better forecasting ability also makes traditional media more conservative. In sum, if the public considers traditional media, and not the new entrants, as the key players in agenda setting, then increased competition may actually make for a more concentrated set of a priori important topics on the agenda. It is not clear, however, that traditional news outlets can maintain their privileged status in this regard forever. Some new entrants have managed to build a relatively strong ‘voice’ over the last few years. We also explore what happens when the success of news is endogenous, i.e. if the act of publishing a topic ends up increasing its likely success.
Interestingly, we find that an excessively strong contest tends to concentrate reporting on topics with the highest a priori success probabilities. We also find that the number of competitors has a somewhat ambiguous effect on the outcome. If there are too few or too many competing firms then, again, agenda setting tends to remain conservative in the sense of focusing on the a priori likely topics. These results also resonate with anecdotal evidence concerning today’s industry dynamics. Our analysis did not consider social welfare. This is hard to do, as it is not clear how one measures consumer surplus in the context of news. Indeed, the model is silent as to consumers’ utility when it comes to the diversity of news. Although policy makers generally consider the diversity of news a desirable outcome, a view that often guides policy and regulatory choices, it is not entirely clear that, beyond a certain threshold, more diversity is always good for consumers. As mentioned in the introduction, the media does have an agenda-setting role, and it is hard to argue that having every topic equally represented in the news makes for a useful agenda to coordinate collective social decisions. Nevertheless, our goal was to identify the competitive forces that may play a role in determining the diversity of news in today’s environment, increasingly dominated by social media. Our analysis indicates that these forces do not necessarily have a straightforward impact on diversity. The generalized contest model presented has implications for other economic situations that may be well described by contests. In this sense, our most relevant results are those that describe the outcome as a function of the reward-sharing patterns across winners. Indeed, we characterize all such patterns with a simple parameter, r, and show that depending on r there are only three qualitatively different outcomes, leading to vastly different firm behaviors. Different r’s may characterize different contexts.
For our case, a finite albeit varying r seemed appropriate, and r = is less likely. In the case of a contest describing R&D competition, r = is quite plausible. Conversely, the case of r = 0 may well apply to contests among forecasters, whose reward might be linked more closely to actually forecasting the event and less to how many other forecasters managed to do so. Our analysis of the case with a small number of firms may also be useful in particular situations; we show that this case is tractable and shares many characteristics with the case involving many players. An important insight from our analysis is that contest models need to be carefully adjusted to the particular situations studied. Our framework can be extended in a number of directions. So far, we assumed a static model, one where repeated contests are entirely independent. One could also study the industry with repeated contests between media firms, making an assumption on how success in one period influences the reward or the predictive power of a medium in the next period. A similar setup is studied with a Markovian model by Ofek and Sarvary to describe industry evolution in hi-tech product categories. Finally, our article generated a number of hypotheses that would be interesting to verify in future empirical research.
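One simple way to make the reward-sharing parameter r concrete is to let each of k successful firms receive V / k^r. This functional form is our illustrative assumption, not necessarily the article's exact specification, but it reproduces the three regimes discussed: r = 0 (forecaster-style, crowding irrelevant), r = 1 (even prize split), and large r (approximately winner-take-all).

```python
def winner_payoff(V, k, r):
    """Hypothetical reward-sharing rule: each of the k successful
    firms receives V / k**r. With r = 0 crowding does not matter,
    r = 1 splits the prize evenly, and large r approximates
    winner-take-all (shared wins are worth almost nothing)."""
    return V / k ** r

# a sole winner keeps the full prize regardless of r
print(winner_payoff(10.0, 1, 0), winner_payoff(10.0, 1, 5))  # 10.0 10.0
# with 4 winners the three regimes diverge sharply
print(winner_payoff(10.0, 4, 0))  # 10.0  (forecaster-style)
print(winner_payoff(10.0, 4, 1))  # 2.5   (even split)
print(winner_payoff(10.0, 4, 5))  # ~0.01 (near winner-take-all)
```

The sharper the decline of the payoff in k, the stronger the incentive to avoid topics that rivals are likely to pick, which is the force driving the diversity results above.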

Three RCTs and two non-RCTs reported on culturally sensitive or culturally adapted interventions

All studies employed the same widely used and validated screening instrument, the Patient Health Questionnaire-9 (PHQ-9), to determine baseline depression diagnosis. However, there was wide variability in the measures used to define study outcomes. To determine depressive symptom improvement, six of nine studies used the PHQ-9, two studies used the Hopkins Symptom Checklist Depression Scale, and one study used the Hamilton Rating Scale for Depression, the Clinical Global Impression Severity Scale, and the Clinical Global Impression Improvement Scale. One study reported that researchers used their own translation of the PHQ-9 into Chinese, which had been validated in a prior study. Other studies did not specify whether they used validated translations or translated their own instruments. All studies adequately described the interventions and the control conditions. Two studies reported post-intervention follow-up and included outcomes a year after the intervention had ended. Not all studies reported how frequently care managers contacted patients in the intervention group during follow-up. The mean age ranged from 34.8 to 57 years across studies, and 1166 of 4859 participants were male. Among the nine studies, 2679 participants had LEP. Most studies focused on Latino immigrants living in the United States, with Spanish as the preferred language; only two studies included Chinese and Vietnamese immigrants. The majority of LEP participants spoke Spanish. One hundred and ninety-five patients with LEP spoke Mandarin, Cantonese, or Vietnamese. Two studies had poor characterization of participant languages, noting that many spoke “Asian languages,” and citing only clinic language demographics. In two studies reporting that patients preferred a non-English language, the degree of English language proficiency was not described. Three-quarters of participants were recruited from general primary care and had a variety of medical conditions.
Other participants were recruited into the studies for specific comorbidities. While intervention details were not always fully described, eight of nine studies employed bilingual care managers for the delivery of care in the collaborative care model.
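Since most of the studies above anchor both diagnosis and outcomes to the PHQ-9, a short sketch of its standard scoring conventions may help: nine items each rated 0–3, a 0–27 total, conventional severity bands at 5/10/15/20, and the ≥ 50% score reduction that the trials summarized below use as their response criterion.

```python
def phq9_total(items):
    """Total PHQ-9 score: nine items, each rated 0-3 (0-27 overall)."""
    assert len(items) == 9 and all(0 <= i <= 3 for i in items)
    return sum(items)

def severity(score):
    """Conventional PHQ-9 severity bands."""
    for cutoff, label in [(20, "severe"), (15, "moderately severe"),
                          (10, "moderate"), (5, "mild")]:
        if score >= cutoff:
            return label
    return "minimal"

def responded(baseline, follow_up):
    """Treatment response: a >= 50% reduction from the baseline score."""
    return follow_up <= baseline / 2

print(severity(phq9_total([2, 2, 1, 1, 2, 1, 1, 1, 1])))  # total 12 -> moderate
print(responded(12, 5))  # True  (>= 50% reduction)
print(responded(12, 7))  # False
```

The "proportion of patients with a ≥ 50% reduction" outcome reported by the RCTs is simply the share of participants for whom `responded` is true at follow-up.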

The ninth study did not explicitly mention how the intervention was delivered to patients with LEP. No studies reported on the use of interpreters. Five studies explicitly tailored their interventions to different cultural groups. The two RCTs and one non-RCT serving Spanish-speaking patients, all conducted by the same research group, culturally tailored the collaborative care model by adapting the intervention materials for literacy and for idiomatic and cultural content. They further included cultural competency training for staff and employed bilingual staff to conduct the intervention. The remaining studies mentioned adding a cultural component to the collaborative care model with the goal of serving Asian immigrants with traditional beliefs about mental illness. One study further adapted the psychiatric assessment for cultural sensitivity. Four of five RCTs reported on change in depressive symptoms; none reported outcomes by preferred language group. Three RCTs reported that the proportion of patients who experienced a ≥ 50% reduction in depressive symptom score was 13% to 25% greater in the intervention arm than in usual care. The last RCT, Yeung et al., reported no statistically significant difference between treatment groups at 6 months; however, the investigators noted availability and high uptake of psychiatric services in both study arms. Three of these four RCTs included cultural tailoring of their interventions. Two RCTs reported on receipt of depression treatment and treatment preferences. In one RCT, 84% of patients treated in the collaborative care intervention received depression treatment, compared to only 33% of patients in the enhanced usual care arm, over 12 months of follow-up.
Another RCT focused on depression treatment preferences. Using conjoint analysis preference surveys, this study found that patients preferred counseling or counseling plus medication over antidepressants alone, and that patients preferred treatment in primary care rather than in specialty mental health care. Patients in the collaborative care intervention group were much more likely to receive their preferred treatment at 16 weeks than were patients in usual care. However, this study also found that English speakers in both groups were more likely to receive their preferred treatment modality than their Spanish-speaking counterparts.

One non-RCT study found that 49% and 48% of patients reported improved depressive symptoms at 6 and 12 months, respectively, among study participants treated with collaborative care. The two studies that reported outcomes by preferred language found significant differences between English- and Spanish-speaking patients. Bauer et al. found that Spanish language preference was associated with more rapid and greater overall improvement when compared to English preference, despite not being associated with receipt of appropriate pharmacotherapy. Similarly, Sanchez et al. found that Spanish-speaking Hispanic patients had significantly greater odds of achieving clinically meaningful improvement in depressive symptoms at 3-month follow-up than did non-Hispanic whites. In contrast, Ratzliff et al. found similar treatment process and depression outcomes at 16 weeks among three groups treated with collaborative care: Asians treated at a culturally sensitive clinic, Asians treated at a general clinic, and whites treated at a general clinic. Furthermore, the study did not have a usual care control group to enable evaluation of the intervention.

Despite the existence of effective treatment, depression care for patients with LEP is challenging for both patients and clinicians, and better models of care are needed. In a systematic review of the current literature on outpatient, primary care based collaborative care treatment of depression, we found that collaborative care delivered by bilingual providers was more effective than usual care in treating depressive symptoms among patients with LEP. The systematic review revealed important limitations in the current evidence base. The review was limited by the low number of studies, heterogeneity of study outcomes and definitions, and a lack of data on use of language access services.
However, the randomized controlled studies were consistent in treatment effect size, as three of four high-quality RCTs found that 13%–25% more patients reported improved depressive symptoms when treated with collaborative care compared to usual care; the fourth had unusually high rates of treatment in the comparison arm and found no difference between groups. This is consistent with prior systematic reviews of collaborative care treatment.

Review of two cohort studies that reported outcomes by preferred language found similar-sized improvements: 10% and 27% more Spanish-speaking patients had improved depressive symptoms during 3 months of follow-up when treated with collaborative care, indicating that patients with LEP may benefit as much as, if not more than, English-speaking patients treated with collaborative care. In short, the collaborative care model—with its emphasis on regular screening, standardized metrics, validated instruments, proactive management, and individualized care, and when adapted for care of LEP patients with depression via the use of bilingual providers—appears to improve care for this patient population. Yet while the collaborative care model has performed well in research studies, many questions remain for wider implementation and dissemination in systems caring for patients with LEP. To help guide the dissemination of an effective model of collaborative care for patients with LEP, researchers will need to be more specific in detailing the language skills of participants and any cultural tailoring and adaptations made to the model to serve specific populations, as we found that race and ethnicity are often conflated with language in these studies, and that preferred language and the degree of English language proficiency are not always made explicit. Language barriers may increase the possibility of diagnostic assessment bias, diagnostic errors, and decreased engagement and retention in depression care. It is important to note that most studies employed bilingual staff; language concordance may be particularly important when dealing with mental health concerns, as it is associated with increased patient trust in providers, improved adherence to medications, and increased shared decision-making. Furthermore, the collaborative care model may have been addressing cultural barriers to care beyond linguistic barriers.
While a few of the studies culturally adapted and modified their collaborative care model and their psychiatric assessments, these adaptations were not addressed in detail and may be difficult to replicate in other settings. Best practices for culturally adapting collaborative care for patients with LEP have yet to be defined. Further research is also needed to more rigorously ascertain the effect of cultural versus linguistic tailoring on the effectiveness of collaborative care in LEP groups. Additionally, given the evidence that depression in racial and ethnic minorities and patients with LEP often goes unrecognized, efforts will be needed to make sure these groups are systematically screened for depressive symptoms and referred for care in culturally sensitive ways. One large implementation study in the state of Minnesota found a marked difference in enrollment into collaborative care by LEP status. Of those eligible for a non-research-oriented collaborative care model, only 18.2% of eligible LEP patients were enrolled over a 3-year period, compared to 47.2% of eligible English-speaking patients. Similarly, Asian patients were underrepresented in studies and likely in collaborative care programs. Yeung et al. reported that the majority of Chinese immigrants with depression were under-recognized and undertreated in primary care, as evidenced by the fact that only 7% of patients who screened positive for depression were engaged in treatment in primary care clinics in Massachusetts. Referral processes for collaborative care may also need to be improved for patients with LEP.

The reasons for differences in enrollment by LEP status in collaborative care programs remain poorly elucidated and likely include patient-, provider-, and systems-based factors. However, these results suggest that without targeted efforts to screen, enroll, and engage patients with LEP, collaborative care models may only widen mental health disparities for such patients. Studies that examine implementation and sustainability of the collaborative care model are needed. This review has a number of limitations. We may have missed studies where language and participant origin were not adequately described. Additionally, as has been noted in prior systematic reviews of RCTs of collaborative care, participant and provider blinding would not have been feasible, due to the nature of the interventions. Other limitations include the variability in study duration and outcome assessment, making direct outcome comparison difficult. Finally, of the nine studies included in this review, five were conducted in Los Angeles, CA. This may limit the generalizability of our results.

Circadian rhythms arise from genetically encoded molecular clocks that originate at the cellular level and operate with an intrinsic period of about a day. The timekeeping encoded by these self-sustained biological clocks persists in constant darkness but responds acutely to changes in daily environmental cues, like light, to keep internal clocks aligned with the external environment. Circadian rhythms therefore help organisms predict changes in their environment and temporally program regular changes in their behavior and physiology. The circadian clock in mammals is driven by several interlocked transcription-translation feedback loops.
The integration of these interlocked loops is a complicated process orchestrated by a core feedback loop in which the heterodimeric transcription factor complex CLOCK:BMAL1 promotes the transcription of its own repressors, Cryptochrome (CRY) and Period (PER), as well as other clock-controlled genes. Notably, there is some redundancy in this system, as paralogs of both PER and CRY proteins participate in the core TTFL. In general, these proteins accumulate in the cytoplasm, interact with one another, and recruit a kinase that is essential for the clock, Casein Kinase 1 δ/ε (CK1δ/ε), eventually making their way into the nucleus as a large complex to repress CLOCK:BMAL1 transcriptional activity. Despite this relatively simple model of the core circadian feedback loop, there is growing evidence that the different repressor complexes that exist throughout the evening may regulate CLOCK:BMAL1 in distinct ways. PER proteins are essential for the nucleation of the large protein complexes that form early in the repressive phase, acting as stoichiometrically limiting factors that are temporally regulated through oscillations in expression. As a consequence, circadian rhythms can be disrupted by constitutively overexpressing PER proteins, or established de novo with tunable periods through inducible regulation of PER oscillations. CK1δ/ε regulate PER abundance by controlling its degradation post-translationally; accordingly, mutations in the kinases or in their phosphorylation sites on PER2 can induce large changes in circadian period, firmly establishing this regulatory mechanism as a central regulator of the mammalian circadian clock. CRY proteins bind directly to CLOCK:BMAL1 and mediate the interaction of PER-CK1δ/ε complexes with CLOCK:BMAL1, leading to phosphorylation of the transcription factor and its release from DNA, and also act as direct repressors of CLOCK:BMAL1 activity by sequestering the transcriptional activation domain of BMAL1 from coactivators like CBP/p300.
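The delayed negative feedback at the heart of the TTFL (a transcription factor driving its own repressors) is often caricatured with a Goodwin-type three-variable model. The sketch below is a generic textbook construction with illustrative rate constants, not a fitted model of the mammalian clock; with a steep enough Hill coefficient, loops of this shape can produce sustained oscillations.

```python
def simulate_ttfl(steps=5000, dt=0.05):
    """Euler integration of a Goodwin-type negative feedback loop,
    a classic toy model of a transcription-translation feedback loop:
    mRNA (m) is repressed by nuclear protein (r_n), which is produced
    from cytoplasmic protein (p). All rate constants are illustrative."""
    m, p, r_n = 0.1, 0.1, 0.1
    traj = []
    for _ in range(steps):
        dm = 1.0 / (1.0 + r_n ** 10) - 0.2 * m    # repressed transcription
        dp = 0.2 * m - 0.2 * p                    # translation / turnover
        dr = 0.2 * p - 0.2 * r_n                  # nuclear import / decay
        m += dt * dm
        p += dt * dp
        r_n += dt * dr
        traj.append(m)
    return traj

traj = simulate_ttfl()
# concentrations stay positive and bounded under negative feedback
print(min(traj) > 0, max(traj) < 10)
```

The delay between repressor synthesis and its arrival in the nucleus is what the chain of intermediate variables stands in for; in the real clock this delay is shaped by CK1δ/ε-controlled PER degradation, which is why perturbing that step changes the period.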