
Notions of context and demonstration have their own purchase in the field of technology adaptation and transfer

Like other emerging donors, Brazil follows the global standard of providing cooperation in the project format. As I will argue in Chapter 3, however, it does so through a mode of engagement that is more hands-off and based on demonstration, in contrast with traditional aid, based on more bureaucratized and large-scale kinds of intervention. Here I will approach this question through an idiom closer to my relational analytics, that of robustness.

At least since Ferguson’s insightful chapter on the politics of knowledge in World Bank reports in The Anti-Politics Machine, discourse lingered for a long while as a prevailing analytical angle in the anthropology of development, remaining important even after potent critiques during the late nineties that continue to resonate today. To refuse discursive determinism is not, however, to deny the importance of discourse, but to pay close, empirically grounded attention to its relations with history and practice. The first three chapters will begin by approaching South-South cooperation discourse in three domains: Chapter 1, South-South / North-South politics; Chapter 2, culture and history in Brazil-Africa relations; and Chapter 3, nature and agricultural development in the tropics. Each will seek to show how official discourse participated in context-making efforts, and then move on to look at its relations with front-line practice. In this dissertation, however, I will refer to discourse in two senses, which I try to differentiate. Most of the time, discourse will refer to a working tool consciously deployed by certain groups of actors in the field – most notably that of the diplomats, but also those in politics, academia and other intellectual circles. I tried to mark this specificity by qualifying it as official discourse rather than discourse in general. Official discourse in this sense is mostly concerned with a self-account of Brazilian cooperation.
But one of my most forceful observations during fieldwork was how distant it could be from the practice of front liners.

The various chapters will suggest how, rather than describing the latter accurately or even shaping it directly, official discourse is more often than not disconnected from it: it follows a logic and productivity of its own that is largely circumscribed, by organizational and sociality lines, to diplomatic and more political and intellectual kinds of circles. Not that there are no relations between diplomats and front liners; they not only exist but may play a significant part in cooperation activities. But as will be seen, they unfold in ways that do not follow a linear, coherent referential bridge between discourse and practice.

The other way in which I talk about discourse here draws on the Saidian-Gramscian-Foucauldian analytics found in much of the U.S. literature in the anthropology of development. In it, the Foucauldian view on knowledge production as part of the apparatus of power is refracted by Said’s postcolonial inflection and/or by Gramsci’s deeply historical approach to hegemony and special attention to political economy. Here, I will largely follow these refractions. Some of the discursive elements I will approach are long-lasting and do seem to provide a common grammar that is shared by virtually everyone on the Brazilian side. I traced discourse in this sense to certain historical processes, especially those involved in shaping Brazil’s postcolonial condition. This discussion, which I have also started to entertain elsewhere, will be made explicit in Chapter 2. There I draw, besides on Said himself, on works on the question of postcoloniality and modernity in Latin America in general, and Brazil in particular. In particular, some notions put forth by Portuguese sociologist Boaventura de Sousa Santos, such as double colonialism, internal coloniality, and situated postcolonialisms, were highly productive for making sense of Brazil’s postcolonial condition as well as of its past and contemporary relations with Africa.
Here, I have coalesced these and other insights into an attention to how coloniality operates in two interrelated directions: both externally and internally to postcolonial nation-states.

What is framed as the postcolonial condition in general usually focuses on the international dimension. Some Latin American authors, on the other hand, have sought to specify its domestic dimension through the term internal colonialism. Few, however, have made a sustained empirical and theoretical investment in looking at the relations between these two. While this dissertation will focus on how this double directionality has played out on the Brazilian side, this perspective could also be useful for looking at equivalent processes on the African side. I will introduce it in Chapter 2, through a discussion of a kind of hegemonic discourse on Africa that I term Brazil’s nation-building Orientalism. But like coloniality itself, this double directionality can be found in dimensions beyond discourse, from political economy to culture, from agricultural development to geopolitics. Some of these will be brought up in the other chapters, albeit not as explicitly as in Chapter 2. Finally, the postcolonial inflection will reappear in Chapters 4 and 5, which will provide an account of an ongoing technical cooperation project between Brazil and four countries in West Africa. In these final chapters, I will try to bring these insights to bear on questions raised by science and technology studies and vice-versa – not unlike those who have been working at the scholarly interface some have been calling postcolonial science and technology studies. This discussion will bring us back full circle to the question of North-South difference raised in the first chapter, but now hopefully enriched by the analytics of relationality deployed more broadly here.

This Introduction has already drawn on various analytical idioms of relationality found in anthropology and science and technology studies: interfaces, emergence, scaling, assemblages, context-making, socio-technical networks, situatedness, or robustness.
These and others evoke works from science studies, such as those by Marilyn Strathern, Bruno Latour and Donna Haraway. To these I add insights from works that tread the path opened up by these authors but introduce important new twists, such as De Laet and Mol, Hayden, Da Costa Marques, or the Deleuzian approach put forth by Jensen and Rödje. Less frequently, a similar perspective has been brought to bear on discussions of development, although rarely incorporating the techno-scientific dimension of projects. We do not need to delve too deep into micro-practice to realize the centrality of relations to the phenomenon approached in this dissertation: it is in the very hyphen in South-South. As Chapter 1 will suggest, the duplication of the term brought into relation, “South”, is meant to evoke horizontality: a leveling opposition to the asymmetry explicit in the North-South configuration. Like the hyphen in African- or Native-American, however, the one in South-South denotes less hybridism than an interface – which, I have been arguing here, is characterized by being in emergence. The character of this relation is therefore largely underdetermined; it is a work in progress being actively, and in some cases reflexively, performed by those involved in practicing and thinking it. As was already indicated, the ways in which this interface is being worked will be approached here most frequently through an analytics of context-making, scaling and domaining, after some of Strathern’s discussions on gender, kinship and audit cultures.

This emphasis on the production of context came out of the empirical observation that interactions between actors from both sides of the Southern Atlantic have unfolded through relational channels that are much less consolidated than the ones underlying relations between, say, Mali and France. Correspondingly, given the largely unprecedented character of these relations, much of my field interlocutors’ efforts have been directed towards making a context for them, in a more intensive, less bureaucratized, and reflexive way than their Northern counterparts. In Strathern’s prolific oeuvre, context-making has appeared alongside related operations such as analogy-making, scaling, and domaining, all of which were also salient in the discourses and practices observed during fieldwork. As Brazilians and Africans are brought together into South-South cooperation’s emerging interfaces, their relational effort proceeds largely through analogies based on their respective experiences. In this process, some contextual elements are differentially assigned to preexisting domains and scales; some are brought to the fore, while others are left to evanesce in the background or are altogether eclipsed. Although these operations strive for coherence, quite often they lead to contradiction and ambivalence, especially as they straddle different interfaces and the lag between official discourse and cooperation practice. Indeed, when there is an overinvestment in certain analogies at a discursive level – most notably, between Brazil’s and Africa’s peripheral conditions, cultural outlooks, natural environments, developmental paths – they do not always correspond to practical relations. But as I will argue in Chapter 2, this does not mean, as those who have remarked on some of these mismatches before me suggested, that official discourse is false, deluding, or naïve.
There is, rather, a certain diffuse functionality to it, including as an effort to open up a path for turning – to use a classic organizing duality in anthropology – metaphor into metonym: that is, to incite the establishment of mutually transformative, exchange-intensive interactions between Brazilians and their African counterparts. However, those who come up with the most explicit discursive analogies are not necessarily the ones who will work the hardest in practice to entice and nourish metonymic relations. Chapters 3 and 4 will focus on the work of the latter – the cooperation front liners – as they strove to make a productive context for their relations with their African counterparts during capacity-building trainings and technology transfer efforts. In these activities, as Chapter 3 will suggest, demonstration has been the prevalent mode of engagement. Here, demonstration is evinced from a contrast with the notion of intervention, which denotes conventional views on the global North’s prevalent mode of engagement with Africa. These chapters show how capacity-building has been performed less as the imposition of abstract, authoritative techno-scientific knowledge than as the demonstration of a particular kind of experience in agricultural development and research, making explicit its socio-technical entanglements and enticing the audience to participate in context-making. Demonstration is the basis of key modalities of technology transfer in agriculture in Brazil, African countries, and elsewhere. Context describes the site to which technologies will be transferred, which, in common policy views, denotes an inert background for a bounded object. Chapter 4 will draw on part of the STS literature, especially Latour’s actor-network theory and works on technology transfer inspired by it, to recast the process of technology transfer as a co-production between contexts and technologies.
Vital to this end will be to bring more emphatically to the fore the question of power, which is not readily evident in Latour himself. For this, I will recruit in Chapter 5 the notion of socio-technical controls. “Socio-technical” draws on an epistemological-methodological assumption that has by now become part of STS’s common sense: that there is nothing essential about nature or society, the task being to trace how this ontological boundary is empirically made by scientists and those with whom they interact. Controls, on the other hand, are part of an idiom that came to the fore during fieldwork, especially during my time in Mali. It was prompted by a perception that what Brazilian and African researchers and technicians were doing in their experimental activities was less about constructing scientific facts than about deploying, or trying to deploy, practical controls – experimental controls, most obviously, but also sociopolitical controls. In fact, I came to see one type of control as inextricably linked to the other, and both as linked to the question of power at large: it matters where and when techno-science is being carried out, after all. Latour’s Salk Institute is not the same as Mali’s Institut d’Economie Rurale or even Embrapa’s research units – and yet all are part of a common global techno-scientific assemblage, albeit unequally so. The last chapter will foreground this paradoxical aspect of the cotton project, intimately tied to Sub-Saharan Africa’s postcolonial predicament in particular, by proposing to view techno-science as being about controlling the flow of vitalities in both nature and society, in a multi-scalar network ranging from subterranean mineral molecules to the global rules of the World Trade Organization.

We measured the child’s height and weight at the time that spirometry was performed

A total of 294 participants were included in either the prenatal or postnatal analyses. Participants included in this analysis did not differ significantly from the original full cohort on most attributes, including maternal asthma, maternal education, marital status, poverty category, and child’s birth weight. However, mothers of children included in the present study were slightly older and more likely to be Latino than those from the initial cohort. Women were interviewed twice during pregnancy, following delivery, and when their children were 0.5, 1, 2, 3.5, 5, and 7 years old. Information from prenatal and delivery medical records was abstracted by a registered nurse. Home visits were conducted by trained personnel during pregnancy and when the children were 0.5, 1, 2, 3.5 and 5 years old. At the 7-year-old visit, mothers were interviewed about their children’s respiratory symptoms, using questions adapted from the International Study of Asthma and Allergies in Childhood questionnaire. Additionally, mothers were asked whether the child had been prescribed any medication for asthma, wheezing/whistling, or tightness in the chest. We defined respiratory symptoms as a binary outcome based on a positive response at the 7-year-old visit to any of the following during the previous 12 months: wheezing or whistling in the chest; wheezing, whistling, or shortness of breath so severe that the child could not finish saying a sentence; trouble going to sleep or being awakened from sleep because of wheezing, whistling, shortness of breath, or coughing when the child did not have a cold; or having to stop running or playing active games because of wheezing, whistling, shortness of breath, or coughing when the child did not have a cold. In addition, a child was counted as having respiratory symptoms if the mother reported use of asthma medications, even in the absence of the above symptoms. Three identical EasyOne spirometers were used.
Routine calibration was performed every morning and 92% of tests were conducted by the same technician.
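The binary outcome defined above is a simple "any of" rule over the maternal interview responses. The sketch below is an illustration only, not the study's actual code; the dictionary keys are hypothetical variable names standing in for the questionnaire items.

```python
def respiratory_symptom_outcome(report):
    """Binary respiratory-symptom outcome from the 7-year maternal interview.

    report: dict of True/False answers covering the previous 12 months.
    Positive if any listed symptom was reported, or if asthma medication
    was used even in the absence of symptoms.
    (Key names are illustrative, not from the study's codebook.)
    """
    symptom_keys = [
        "wheeze_or_whistle",        # wheezing or whistling in the chest
        "wheeze_limits_speech",     # so severe the child could not finish a sentence
        "sleep_disturbed_no_cold",  # sleep trouble from wheeze/cough without a cold
        "stops_play_no_cold",       # stopped running/playing without a cold
    ]
    has_symptom = any(report.get(k, False) for k in symptom_keys)
    return has_symptom or report.get("asthma_medication", False)

# Medication use alone is sufficient for a positive outcome:
positive = respiratory_symptom_outcome({"asthma_medication": True})
negative = respiratory_symptom_outcome({})
```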

The expiratory flow-volume curves were reviewed by two physicians experienced in pediatric spirometry, and only adequate quality data were included in the statistical analyses. Some participants with adequate quality data for FEV1 did not provide adequate quality data to calculate FVC or FEF25–75. Young children have difficulty sustaining the forceful exhalation after a deep breath that is required to produce a plateau in airflow and calculate FVC and subsequently FEF25–75. Each child performed a maximum of eight expiratory maneuvers, and up to three best acceptable tests were saved by the spirometer software. Latitude and longitude coordinates of participants’ homes were collected during home visits during pregnancy and when the children were 0.5, 1, 2, 3.5 and 5 years old using a handheld Global Positioning System unit. At the 7-year visit, mothers were asked if the family had moved since the 5-year visit, and if so, the new address was recorded. We used Geographic Information System software to geocode the new addresses and obtain coordinates. Residential mobility was common in the study population. We estimated the use of agricultural fumigants near each child’s residence using a GIS based on the location of each child’s residence and Pesticide Use Report data. Mandatory reporting of all agricultural pesticide applications is required in California, including the active ingredient, quantity applied, acres treated, crop treated, and date and location within 1-square-mile sections defined by the Public Land Survey System. Before analysis, the PUR data were edited to correct for likely outliers with unusually high application rates using previously described methods. We computed nearby fumigant use for each combination of buffer distance from the residence and time period. The range of distances best captured the spatial scale that most strongly correlated with concentrations of methyl bromide and 1,3-DCP in air.
We weighted fumigant use near homes based on the proportion of each square-mile PLSS that was within each buffer surrounding a residence.
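The area-proportion weighting described above amounts to scaling each square-mile section's reported use by the fraction of the section that lies inside the buffer, then summing. The following is a minimal sketch, not the study's GIS code; it assumes the overlap fractions have already been computed by a GIS intersection step.

```python
def weighted_fumigant_use(section_records):
    """Sum fumigant use across PLSS sections, weighting each section's
    reported use by the fraction of its area inside the buffer.

    section_records: list of dicts with keys
      'use_kg'       - kilograms of fumigant applied in the section
      'overlap_frac' - fraction (0-1) of the section inside the buffer
                       (assumed to come from a prior GIS intersection)
    """
    return sum(r["use_kg"] * r["overlap_frac"] for r in section_records)

# Example: three 1-square-mile sections around one residence
sections = [
    {"use_kg": 500.0, "overlap_frac": 1.00},  # fully inside the buffer
    {"use_kg": 800.0, "overlap_frac": 0.40},  # partially inside
    {"use_kg": 300.0, "overlap_frac": 0.00},  # outside; contributes nothing
]
total = weighted_fumigant_use(sections)  # 500 + 320 + 0 = 820 kg
```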

To account for the potential downwind transport of fumigants from the application site, we obtained data on wind direction from the closest meteorological station. We calculated wind frequency as the proportion of time that the wind blew from each of eight directions during the week after the fumigant application, to capture the peak time of fumigant emissions from treated fields. We determined the direction of each PLSS section centroid relative to residences and weighted fumigant use in a section according to the percentage of time that the wind blew from that direction for the week after application. We summed fumigant use over pregnancy, from birth to the 7-year visit, and for the year prior to the 7-year visit, yielding estimates of the wind-weighted amount of each fumigant applied within each buffer distance and time period around the corresponding residences for each child. We log10-transformed continuous fumigant use variables to reduce heteroscedasticity and the influence of outliers, and to improve the fit of the models. We used logistic regression models to estimate odds ratios of respiratory symptoms and/or asthma medication use with residential proximity to fumigant use. Our primary outcome was respiratory symptoms, defined as positive if during the previous 12 months the mother reported for her child any respiratory symptoms or the use of asthma medications, even in the absence of such symptoms. We also examined asthma medication use alone. The continuous lung function measurements were approximately normally distributed; therefore we used linear regression models to estimate the associations with residential proximity to fumigant use. We estimated the associations using the highest spirometric measures for children who had one, two or three maneuvers. We fit separate regression models for each combination of outcome, fumigant, time period, and buffer distance.
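The wind-weighting step above can be sketched as two small calculations: binning hourly wind directions into eight sectors to get the fraction of time the wind blew from each direction, then scaling a section's use by the weight for the sector containing that section's centroid. This is an illustrative reconstruction under the paper's description, not its actual code; function names and the hourly-data format are assumptions.

```python
SECTORS = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]

def sector_of(bearing_deg):
    """Map a bearing in degrees (0 = north, clockwise) to one of 8 sectors."""
    return SECTORS[int(((bearing_deg + 22.5) % 360) // 45)]

def wind_weights(hourly_from_deg):
    """Fraction of hours the wind blew FROM each sector during the week
    after application (hourly_from_deg: list of bearings in degrees)."""
    counts = {s: 0 for s in SECTORS}
    for b in hourly_from_deg:
        counts[sector_of(b)] += 1
    n = len(hourly_from_deg)
    return {s: c / n for s, c in counts.items()}

def wind_weighted_use(use_kg, section_bearing_deg, hourly_from_deg):
    """Weight a section's fumigant use by the share of time the wind blew
    from that section's direction toward the residence."""
    w = wind_weights(hourly_from_deg)
    return use_kg * w[sector_of(section_bearing_deg)]

# Wind from the north half the week, from the east the other half;
# a section due north of the home gets half its use counted.
hourly = [0] * 12 + [90] * 12
weighted = wind_weighted_use(100.0, 0, hourly)  # 50.0 kg
```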
We selected covariates a priori based on our previous studies of respiratory symptoms and respiratory function in this cohort. For logistic regression models of respiratory symptoms and asthma medication use, we included maternal smoking during pregnancy and signs of moderate or extensive mold noted at either home visit. We also included season of birth to control for other potential exposures that might play a causal role in respiratory disease: pollen, dryness, and mold. We defined the seasons of birth – pollen, dry, and mold – based on measured pollen and mold counts during the years the children were born. In addition, we controlled for allergy using a proxy variable: runny nose without a cold in the previous 12 months, reported at age 7. Because allergy could be on the causal pathway, we also re-ran all models without adjusting for allergy. Results were similar, and therefore we only present models controlling for allergy. Additionally, for spirometry analyses only, we adjusted for the technician performing the test and the child’s age, sex and height. We included household food insecurity score during the previous 12 months, breastfeeding duration, and whether furry pets were in the home at the 7-year visit to control for other factors related to lung function.

We also adjusted for mean daily concentrations of particulate matter with aerodynamic diameter ≤ 2.5 µm (PM2.5) during the first 3 months of life and whether the home was located ≤150 m from a highway in the first year of life, determined using GIS, to control for air pollution exposures related to lung function. We calculated average PM2.5 concentration in the first 3 months of life using data from the Monterey Unified Air Pollution Control District air monitoring station. In all lung function models of postnatal fumigant use, we included prenatal use of that fumigant as a confounder. To test for non-linearity, we used generalized additive models with three-degrees-of-freedom cubic spline functions including all the covariates included in the final lung function models. None of the tests for deviation from linearity were significant; therefore, we expressed fumigant use on the continuous log10 scale in multivariable linear regression models. Regression coefficients represent the mean change in lung function for each 10-fold increase in wind-weighted fumigant use. We conducted sensitivity analyses to verify the robustness and consistency of our findings. We included other estimates of pesticide exposure in our models that have been related to respiratory symptoms or lung function in previous analyses of the CHAMACOS cohort. Specifically, we included child urinary concentrations of dialkylphosphate metabolites, a non-specific biomarker of organophosphate pesticide exposure, using the area under the curve calculated from samples collected at 6 months, 1, 2, 3.5 and 5 years of age. We also included agricultural sulfur use within 1 km of residences during the year prior to lung function measurement. We used similar methods as described above for fumigants to calculate wind-weighted sulfur use, except with a 1-km buffer and the proportion of time that the wind blew from each of eight directions during the previous year.
The inclusion of these two pesticide exposures reduced our study population with complete data for respiratory symptoms and lung function. Previous studies have observed an increased risk of respiratory symptoms and asthma with higher levels of p,p′-dichlorodiphenyltrichloroethane (DDT) or p,p′-dichlorodiphenyldichloroethylene (DDE) measured in cord blood. As a sensitivity analysis, we included log10-transformed lipid-adjusted concentrations of DDT and DDE measured in prenatal maternal blood samples. We also used Poisson regression to calculate adjusted risk ratios for respiratory symptoms and asthma medication use for comparison with the ORs estimated using logistic regression, because ORs can overestimate risk in cohort studies. In additional analyses of spirometry outcomes, we also excluded children who reported using any prescribed medication for asthma, wheezing, or tightness in the chest during the last 12 months, to investigate whether medication use may have altered spirometry results. We ran models including only those children with at least two acceptable, reproducible maneuvers. We ran all models excluding outliers identified with studentized residuals greater than three. We assessed whether asthma medication or child allergies modified the relationship between lung function and fumigant use by creating interaction terms and running stratified models. To assess potential selection bias due to loss to follow-up, we ran regression models that included stabilized inverse probability weights. We determined the weights using multiple logistic regression with inclusion as the outcome and demographic variables as the predictors. Data were analyzed with Stata and R. We set statistical significance at p<0.05 for all analyses, but since we evaluated many combinations of outcomes, fumigants, distances and time periods, we assessed adjustment for multiple comparisons using the Benjamini-Hochberg false discovery rate at p<0.05.
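The Benjamini-Hochberg procedure mentioned above controls the false discovery rate by comparing the sorted p-values against a ramp of thresholds. A minimal pure-Python sketch (the study used Stata and R; this implementation is for illustration only):

```python
def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg FDR procedure at level q.

    Returns a boolean 'discovery' flag for each p-value, in input order:
    find the largest rank k with p_(k) <= (k/m)*q, then reject the
    hypotheses with the k smallest p-values.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices, ascending p
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * q:
            k_max = rank
    flags = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            flags[i] = True
    return flags

# Four tests at q = 0.05; BH thresholds are 0.0125, 0.025, 0.0375, 0.05.
pvals = [0.001, 0.02, 0.04, 0.3]
flags = benjamini_hochberg(pvals, q=0.05)  # [True, True, False, False]
```

Note that 0.04 is not rejected even though 0.04 < 0.05: it exceeds its rank-specific threshold of 0.0375, which is exactly how BH is stricter than a raw p < 0.05 cutoff across many comparisons.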
Most mothers were born in Mexico, below age 30 at the time of delivery, and married or living as married at the time of study enrollment. Nearly all mothers did not smoke during pregnancy. When cohort participants were 6 and 12 months old, most households showed signs of moderate or extensive mold at either visit. At age 7, based on maternal report, the majority of families were living below the Federal Poverty Level, 15.7% of cohort children had experienced a runny nose without a cold within the past year, 16.3% displayed asthma symptoms, and 6.1% were currently taking asthma medication. Table 2 shows the distributions of wind-weighted fumigant use within 8 km of CHAMACOS residences during the prenatal and postnatal exposure periods. Methyl bromide and chloropicrin were the most heavily used fumigants during the prenatal period, with mean ± SD wind-adjusted use of 13,380 ± 10,437 and 8,665 ± 6,816 kg, respectively. Reflecting declines in methyl bromide use, the use of chloropicrin was greater than the use of methyl bromide during the postnatal period, with median values of 127,977 and 109,616 kg over the 7 years, respectively. When we examined correlations within each fumigant, use within 3, 5, and 8 km from the home was highly correlated for each fumigant. Fumigant use during the prenatal and postnatal periods was also highly correlated for methyl bromide and chloropicrin, but was not correlated for metam sodium use and was inversely correlated for 1,3-DCP use. We also examined correlations among fumigants and observed high correlations between prenatal methyl bromide and chloropicrin use and between prenatal metam sodium and 1,3-DCP use. There were negative correlations between prenatal methyl bromide and chloropicrin use and prenatal metam sodium and 1,3-DCP use.

Farmers are typically time- and often resource-constrained

While including only tillage treatments with residue incorporation establishes systems with similar residue input levels, it arguably poorly reflects farmers’ predominant practices in mixed crop-livestock farming systems – especially in sub-Saharan Africa and South Asia – in which residues tend to be exported from fields for feed, fuel, housing materials or other purposes. As such, the applicability of meta-analytical results to smallholder farming conditions in either sub-Saharan Africa or South Asia may be questioned. Given the large variation in crop management practices that results from differences in the scale of farming operations, the nature of farm enterprises and cropping patterns in different farming systems, one may therefore ask: does the presentation of average results from ‘global meta-analyses’ in agronomy make sense? Our case studies show the ways in which the practical value of meta-analyses for providing comprehensive evidence on topics of development relevance is undermined by the social construction of treatment categories that may be decoupled from the conditions faced by farmers themselves.

Most meta-analyses reviewed in this study used primary data from small-plot agronomic trials. The problems associated with extrapolating results from small-plot experiments to whole fields, cropping systems and farming systems have, however, been widely acknowledged. These problems also affect meta-analysis. Many farmers manage multiple separate fields – each of which may be environmentally heterogeneous – across landscapes. Farmers may therefore not be able to implement recommended crop management practices across fields and farm units as rigorously and evenly as researchers managing small-plot trials. This casts some doubt on the usefulness of data from small-plot trials. Kravchenko et al.
, for example, demonstrated that yield results from small-plot organic agriculture (OA) experiments were not always consistent with field-scale measurements of the same treatments.

Caution is therefore needed when extrapolating results from small-plot research to the field, farming system, landscape and global levels. These problems are most apparent in the OA case study. Badgley et al., for example, extrapolated OA yield responses from plot studies to the global agricultural system, concluding that OA could feed the world’s population with nitrogen requirements supplied in situ by legumes, without expanding the footprint of agriculture. Connor conversely pointed out that soil moisture deficits would likely constrain the productivity of legumes in arid environments. He also noted that rotations with legumes may not be feasible where legumes are less profitable or important than other crops for income generation and food production. Assessing productivity on a yield-per-unit-of-time basis, rather than yield alone, may therefore be an appropriate alternative in such comparisons. Leifeld also referenced landscape-scale considerations when contesting data presented by Ponisio et al. He contended that OA is unable to cope with high-fecundity and rapidly dispersing pests, which could result in yield losses more severe than those observed in isolated, small-plot experiments. Leifeld also evoked ‘Borlaug hypothesis’ arguments that low-yielding farming systems may require the conversion of natural ecosystems to meet expanding food demand, thereby negatively affecting biodiversity. Ponisio and Kremen countered with evidence of the positive effects of organic and ecologically managed farmland on pest suppression at the landscape scale. They also highlighted the study of Meyfroidt et al., who showed that higher yields and profitability can also drive agricultural expansion and deforestation under conventional practices.
Considering the complexity of these problems, Brandt et al. proposed that bias could be reduced and science quality increased if researchers using meta-analysis made their research protocols and intended methods publicly available, for example through online posting or journal publication, prior to undertaking the meta-analysis. ‘Pre-registration’ of planned studies may be a logical suggestion, though it implies serious changes in research practice and a re-thinking of how journals accept papers and conduct peer review. This proposition has therefore not yet been widely applied in agronomy or other disciplines.

While there is no easy answer to how to rectify this conundrum, our review presents an important step in challenging underlying assumptions that meta-analysis can provide definitive and unifying conclusions, as proposed by Garg et al., Borenstein et al., Rosenthal, and Schisterman and Fisher.

Agricultural expansion is the main cause of tropical deforestation, highlighting the trade-offs among ecosystem services such as food production, carbon storage, and biodiversity preservation inherent in land cover change. Expansion of intensive agricultural production in southern Amazonia, led by the development of specific crop varieties for tropical climates and international market demand, contributed one third of the growth in Brazil’s soybean output during 1996–2005. The introduction of cropland agriculture in forested regions of Amazonia also changed the nature of deforestation activities; forest clearings for mechanized crop production are larger, on average, than clearings for pasture, and the forest conversion process is often completed in <1 year. How this changing deforestation dynamic alters fire use and carbon emissions from deforestation in Amazonia is germane to studies of future land cover change, carbon accounting in tropical ecosystems, and efforts to reduce emissions from tropical deforestation. Fires for land clearing and management in Amazonia are a large anthropogenic source of carbon emissions to the atmosphere. Deforestation fires largely determine net carbon losses, because fuel loads for Amazon deforestation fires can exceed 200 Mg C ha⁻¹. Reductions in forest biomass from selective logging before deforestation are small, averaging <10 Mg C ha⁻¹. In contrast, typical grass biomass for Cerrado or pasture rarely exceeds 10 Mg C ha⁻¹ and is rapidly recovered during the subsequent wet season.
Yet, the fraction of all fire activity associated with deforestation and the combustion completeness of the deforestation process remain poorly quantified. Satellite fire detections have provided a general indication of spatial and temporal variation in fire activity across Amazonia for several decades. However, specific information regarding fire type or fire size can be difficult to estimate directly from active fire detections, because satellites capture a snapshot of fire energy rather than a time-integrated measure of fire activity.

Overlaying active fire detections on land cover maps provides a second approach to classifying fire type. Evaluating fire detections over large regions of homogeneous land cover can be instructive, but geolocation errors and spurious fire detections may complicate these comparisons, especially in regions of active land cover change and high fire activity such as Amazonia. Finally, postfire detection of burn-scarred vegetation is the most data-intensive method to quantify carbon emissions from fires. Two recent approaches to map burn scars with Moderate Resolution Imaging Spectroradiometer (MODIS) data show great promise for identifying large-scale fires, yet neither algorithm is capable of identifying the multiple burning events in the same ground location that are typical of deforestation activity in Amazonia. Deriving patterns of fire type, duration and intensity of fire use, and combustion completeness directly from satellite fire detections provides an efficient alternative to more data- and labor-intensive methods to estimate carbon emissions from land cover change. We assess the contribution of deforestation to fire activity in Amazonia based on the intensity of fire use during the forest conversion process, measured as the local frequency of MODIS active fire detections. High-confidence fire detections on 2 or more days in the same dry season are possible in areas of active deforestation, where trunks, branches, and other woody fuels can be piled and burned many times. Low-frequency fire detections are typical of fires in Cerrado woodland savannas and of agricultural maintenance fires, because grass and crop residues are fully consumed by a single fire. The frequency of fires at the same location, or fire persistence, has been used previously to assess Amazon forest fire severity, adjust burned area estimates in tropical forest ecosystems, and scale combustion completeness estimates in a coarse-resolution fire emission model.
We build on these approaches to characterize fire activity at multiple scales. First, we compare the frequency of satellite fire detections over recently deforested areas with that over other land cover types. We then assess regional trends in the contribution of high-frequency fires typical of deforestation activity to the total satellite-based fire detections for Amazonia during 2003–2007. Finally, we compare temporal patterns of fire usage among individual deforested areas with different post-clearing land uses, based on recent work using vegetation phenology data to separate pasture and cropland following forest conversion in the Brazilian state of Mato Grosso. The goals of this research are to test whether fire frequency distinguishes between deforestation fires and other fire types and to characterize fire frequency as a function of post-clearing land use, enabling direct interpretation of MODIS active fire data for relevant information on carbon emissions.

We analyzed active fire detections from the MODIS sensors aboard the Terra and Aqua satellite platforms to determine spatial and temporal patterns in satellite fire detections from deforestation in Amazonia during this period.

Combined, the MODIS sensors provide two daytime and two night-time observations of fire activity. Figure 1 shows the location of the study area and the administrative boundaries of the nine countries that contain portions of the Amazon Basin. For data from 2002–2006, the date and center location of each MODIS active fire detection, satellite, time of overpass, 4-µm brightness temperature, and confidence score were extracted from the Collection 4 MODIS Thermal Anomalies/Fire 5-min swath product at 1-km spatial resolution. Beginning in 2007, MODIS products were transitioned to Collection 5 algorithms. Data for January 1–November 1, 2007 were provided by the Fire Information for Resource Management System at the University of Maryland, College Park based on the Collection 5 processing code. Seasonal differences in fire activity north and south of the equator related to precipitation were captured using different annual calculations: north of the equator, the fire year was July–June; south of the equator, the fire year was January–December. Our analysis considered a high-confidence subset of all MODIS fire detections to reduce the influence of false fire detections over small forest clearings in Amazonia. For daytime fires, only those 1-km fire pixels having >330 K brightness temperature in the 4-µm channel were considered. This threshold is based on recent work to identify true and false MODIS fire detections with coincident high-resolution satellite imagery, on comparisons with field data, and on evidence of unrealistic MODIS fire detections over small historic forest clearings in Mato Grosso state with >20 days of fire detections per year in 3 or more consecutive years, none of which exceeded 330 K during the day. Daytime fire detections >330 K correspond to a MOD14/MYD14 product confidence score of approximately 80/100. The subset of high-confidence fires includes all night-time fire detections, regardless of brightness temperature.
Differential surface heating between forested and cleared areas during daylight hours that may contribute to false detections should dissipate by the 22:30 or 01:30 hours local time overpasses for Terra and Aqua, respectively. Subsequent references to MODIS fire detections refer only to the high-confidence subset of all 1-km fire pixels described earlier.

The simple method we propose for separating deforestation and agricultural maintenance fires is based on evidence of repeated burning at the same ground locations. The spatial resolution of our analysis is defined by the orbital and sensor specifications of the MODIS sensors and the 1-km resolution bands used for fire detection. The geolocation of MODIS products is highly accurate, and surface location errors are generally <70 m. However, due to the orbital characteristics of the Terra and Aqua satellite platforms, the ground locations of each 1-km pixel are not fixed. We analyzed three static fire sources in South America, from gas, mining, and steel production, to identify the spatial envelope for MODIS active fire detections referencing the same ground location. Over 98% of the high-confidence 2004 MODIS active fire detections from Terra and Aqua for these static sources were within 1 km of the ground location of these facilities. Therefore, we used this empirically derived search radius to identify repeated burning of forest vegetation during the conversion process. High-frequency fire activity was defined as fire detections on two or more days within a 1-km radius during the same fire year.
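The frequency rule described above (two or more distinct fire days within a 1-km radius in the same fire year) can be sketched in a few lines; the coordinates, dates, and the `classify` helper below are hypothetical illustrations for exposition, not the actual MODIS processing code:

```python
import math

# Hypothetical detections: (x_km, y_km, day_of_year), all in one fire year.
detections = [
    (10.0, 10.0, 200), (10.3, 10.2, 214), (10.1, 9.9, 230),  # repeated burns
    (50.0, 50.0, 210),                                        # single burn
]

def classify(dets, radius_km=1.0, min_days=2):
    """Label each detection 'high' if fires occurred on >= min_days
    distinct days within radius_km during the same fire year."""
    labels = []
    for (x, y, d) in dets:
        days = {d2 for (x2, y2, d2) in dets
                if math.hypot(x - x2, y - y2) <= radius_km}
        labels.append('high' if len(days) >= min_days else 'low')
    return labels

print(classify(detections))  # → ['high', 'high', 'high', 'low']
```

The first three detections fall within 1 km of each other on three distinct days, so they are flagged as deforestation-like high-frequency activity; the isolated detection is labeled low-frequency, consistent with savanna or maintenance fires.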

TCS has been thought to act non-specifically by attacking and destroying bacterial membranes

In this study we have shown that feedbacks are significant in both directions and have also shown that money and exchange rate shocks affect prices. Thus, any reduction in government expenditure on agriculture affects the path by which price shocks feed back on money and the exchange rate. From a policy perspective, this is very important, since it implies that any change in government support of the farm sector should be evaluated from an integrated market point of view. This more integrated or global perspective is needed because expenditures and budget deficits, monetary, exchange rate, and farm policies are significantly related and their interactions far too strong to be neglected.

Triclosan (TCS) is a non-agricultural pesticide widely used as an antibacterial agent in common medical, household and personal care products in the range of 0.1%–0.3%. The use of TCS has increased worldwide over the last 30 years. The broad household use of products containing TCS results in the discharge of TCS to municipal wastewater treatment plants, and it has been detected in effluents and sewage sludge in Europe and the United States. The mode of action of TCS on bacteria is inhibition of fatty acid synthesis by targeting enzymes specific to bacteria. Since fatty acid biosynthesis is a fundamental process for cell growth and function, the ability to inhibit it makes TCS a particularly effective antimicrobial compound. Bio-solids are the nutrient-rich byproduct of wastewater treatment operations, and large quantities are generated: for example, approximately 750,000 dry tons are produced annually in California, and 54% of these bio-solids are applied on agricultural lands, 16% are composted and the remaining 30% go to landfills. Concerns about potential health and environmental effects of land application of bio-solids include possible off-site transport of pathogens, heavy metals, and trace organic constituents such as TCS.
A less explored set of potential impacts is how TCS and other bio solid-borne contaminants affect ecosystem processes and associated soil microbial communities.

Potential impacts on soil microorganisms are important to assess since these organisms mediate much of the nitrogen, carbon and phosphorus dynamics in soil, biodegrade contaminants, create soil structure, decompose organic compounds, and play a major role in soil organic matter formation. We hypothesized that bio-solids containing TCS would have detrimental effects on soil microbial communities by decreasing biomass and altering community composition in agricultural soil. Our objectives were to evaluate the effects of increasing amounts of TCS on soil microbial community composition in the presence and absence of bio-solids. We used phospholipid fatty acid (PLFA) analysis to characterize the response of microbial communities; the method provides information about microbial community composition, biomass, and diversity. Experiments in which TCS was added to soil without bio-solids allowed the relative effects of bio-solid and TCS addition on microbial community composition and function to be compared and also provided a “secondary control”, because TCS-free municipal bio-solids are essentially unavailable in the United States.

Triclosan was purchased from Fluka. Yolo silt loam was collected from the Student Experimental Farm at the University of California, Davis at a depth of 0 to 15 cm. The soil was passed through a 2 mm sieve and stored at 4 °C until use. Bio-solids originated from a municipal wastewater treatment plant in Southern California that employed a conventional activated sludge treatment system followed by aerobic sludge digestion. Bio-solids from this system were selected for study because they had the lowest concentration of TCS among those collected from 10 different wastewater treatment plants in California. The soil and bio-solid physicochemical properties are reported in Table 1 and were determined using standard techniques.
The soils were moistened to 40% water-holding capacity, equivalent to 18% water content in our experiments, and pre-incubated for 7 days at 25 °C to allow time for normal microbial activity to recover to a constant level after disturbance. Fifty grams of the pre-incubated soil was weighed into 200 ml glass bottles, with three replicates per treatment. For the bio-solid-amended (SB) treatments, 20 mg/g of bio-solids was added. Each treatment sample was then spiked with TCS to achieve final concentrations of 10 or 50 mg/kg using TCS stock solutions prepared in acetone, as recommended by Waller and Kookana.

These spiking levels were chosen as a conservative upper bound on anticipated soil concentrations in the field. The lower spiking level is below the mean concentration observed in US bio-solids and the higher level is below the 95th percentile for US bio-solids; adding bio-solids to soils at typical application rates would produce soil concentrations ~50–200 times lower. Control samples were prepared with acetone only. The solvent was then allowed to evaporate inside the fume hood before the samples were thoroughly mixed. The microcosms were incubated in the dark at 25 °C for 0, 7 and 30 days. Every week, each vial was opened to help keep conditions aerobic, the water content of each set of samples was measured, and water was added as needed to maintain target moisture levels. At each sampling time, the remaining TCS was measured by drying 3–5 g samples at 70 °C for 24 hours and homogenizing with a mortar and pestle. Replicate 1 g subsamples of each dried sample were placed in centrifuge tubes, spiked with deuterated triclocarban (TCC) in methanol, air dried under a fume hood to remove the methanol, and then mixed well. Extraction was performed by adding 15 mL of 1:1 acetone and methanol to the centrifuge tube. Samples were extracted on a shaker table for 24 hours at 295 rpm and 55 °C and then centrifuged for 30 min at 4,100 g. The supernatant was diluted as needed to ensure that the concentration remained within the linear portion of the calibration curve. The extracts were analyzed for TCS using LC-MS/MS. Additional details regarding the extraction and analysis procedures can be found in Ogunyoku & Young. Recoveries of deuterated TCC ranged from 63–115% during extraction and analysis.

As expected, the bio-solids contained far larger amounts of nitrogen and carbon than the Yolo soil.
Even though the bio-solids constituted less than 2% of the amended soil, they contributed nearly 50% of the total nitrogen and 40% of the total carbon in the amended soil system. The bio-solids contained an abundance of nutrients accumulated as by-products of sewage treatment in forms likely to be more labile than equivalent nutrients present in the soil. As will be discussed further, the greater availability of C and N in the SB than in the soil treatments had a strong influence on some of the results, especially at the early time points. In the following section, therefore, it is useful to remember that all SB treatments contain more available C and N than all soil treatments. The initial concentration of TCS in unspiked SB samples was very low, fell below the quantitation limit for TCS after 7 days, and was not detectable after 30 days of incubation. Significant TCS bio-degradation was observed in spiked soil and SB samples during incubation, and the data were well described by a first-order model, as indicated by linear plots of ln(concentration) against time. Degradation trends were consistent at the two spiking levels for each sample type, but bio-solid addition significantly reduced degradation rates at both spiking levels compared with un-amended samples. The percentage of TCS removed was approximately two times greater in soil than in SB samples. Approximately 80% of the TCS was removed over 30 days in soil treated with either 10 mg/kg or 50 mg/kg of TCS, but no more than 30% was transformed in the corresponding SB microcosms.

The reduced bio-degradation in the SB microcosms may have resulted from the ~40% higher carbon content in the SB microcosms, which would be expected to increase the soil-water distribution coefficient by a comparable amount. A reduced TCS concentration in soil pore water would be expected to slow bio-transformation, potentially in a nonlinear fashion. Another possible contributor to the slower degradation of TCS in SB is the greater availability of alternative, likely more easily degradable, carbon sources in SB than in soil microcosms, reducing the use of TCS as a substrate. Selective bio-degradation of one carbon source, with inhibition of the degradation of other chemicals also present, has been observed for mixtures of chemicals in aquifers. To assess which of these mechanisms was controlling, measured Freundlich isotherm parameters for TCS adsorption on bio-solid-amended Yolo soil were used to calculate equilibrium pore water concentrations in the soil and SB microcosms over the course of the experiment. Using estimated pore water concentrations of moistened soil and SB samples, instead of total soil concentrations, to perform half-life calculations resulted in modest increases in the rate constants and decreases in the half-lives of soil samples, and did not narrow the significant gap between half-lives in soil and SB. This suggests that the primary reason for the slower degradation of TCS in bio-solid-amended soils is the increase in more labile forms of carbon, because organic material is highly porous and has a lower particle density. Previous research shows that TCS biodegrades within weeks to months in aerobic soils; although Chenxi et al. found no TCS degradation in bio-solids stored under aerobic or anaerobic conditions, Kinney et al. observed a 40% decrease in TCS concentrations over a 4-month period following an agricultural bio-solids application. Because the slopes of the lines in Fig.
1 are not significantly different as a function of spiking level, the slopes were averaged for each treatment type, yielding apparent first-order rate constants of 0.093±4% d−1 for soil samples and 0.024±41% d−1 for SB samples, where the percent error represents the relative percent difference between the 10 mg/kg and 50 mg/kg degradation curves. These apparent rate constants translate to half-life estimates of 7.5 d in soil and 29 d in bio-solid-amended soil. The estimated half-life of TCS in soil is within the range of previously reported half-lives of 2.5 to 58 d in soil. The half-life determined here in bio-solid-amended soils is lower than the one available literature value of 107.4 d. The microbial biomass decreased in the TCS-spiked samples after 7 or 30 days of incubation in comparison with the unspiked controls, for both soil and SB, and the decline was statistically significant at 50 mg/kg. Although exposure to TCS caused declines in biomass in both soil and SB microcosms, the total microbial biomass was two times higher in SB than in soil, probably due to the increased availability of nutrients and/or the addition of bio-solid-associated microorganisms in the latter. The total number of PLFAs ranged from 42–47 in soil and 48–59 in SB. No significant change in the number of PLFAs was evident with increasing dosage of TCS at any incubation time, suggesting that TCS addition did not adversely affect microbial diversity. Microbes respond to various stresses by modifying cell membranes, for example by transforming the cis double bond of 16:1ω7c to cy17:0, which is more stable and not easily metabolized by the bacteria, reducing the impact of environmental stressors. Consequently, the ratio of cy17 to its precursor has been employed as an indicator of microbial stress that has been associated with slow growth of microorganisms.
Increases in this stress biomarker were observed in both soil and SB samples as TCS concentrations increased, suggesting that TCS has a negative effect on the growth of soil microorganisms. The overall ratio of cy17 to its precursor is lower in SB than in soil samples, suggesting that nutrients contributed by the bio-solids reduce stress on the microbial community. Our results agree with a previous study showing that carbon added to soil led to a reduction in the cy17 fatty acid. TCS additions, however, increased the stress marker compared with that detected in the corresponding samples with no added TCS. A broader implication of this result is that the presence of bio-solids may mitigate the toxic effects of chemicals in soil, or of chemicals added in combination with bio-solids, on soil microbial communities. Groupings of microbial communities, based on CCA analysis of their composition as estimated by PLFA, were distinguished primarily by whether they were in soil or SB treatments and secondarily by time since spiking.
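The half-life estimates quoted above follow directly from first-order kinetics, where t½ = ln(2)/k. A quick check, using the averaged apparent rate constants reported for the soil and SB treatments:

```python
import math

# First-order decay C(t) = C0 * exp(-k t) gives t_half = ln(2) / k.
def half_life(k_per_day):
    return math.log(2) / k_per_day

k_soil, k_sb = 0.093, 0.024  # averaged apparent rate constants (d^-1)
print(round(half_life(k_soil), 1))  # → 7.5   (soil)
print(round(half_life(k_sb), 1))    # → 28.9  (~29 d, bio-solid amended)
```

These reproduce the reported half-lives of 7.5 d in soil and approximately 29 d in bio-solid-amended soil.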

Successfully quantifying the ability of media to grow cells forms the backbone of the novelty of this dissertation

The other aspect of private sector involvement is perhaps more mixed in its consequences, compared to individual farmers’ efforts. Indian agriculture has long been heavily influenced by powerful intermediaries, who may combine participation in credit and input, and even output and land, markets to earn economic rents associated with market power, in a phenomenon well-studied as interlinkage. Market intermediaries and other private actors in the agricultural supply chain certainly provide essential products and services for the success of Punjab’s present agricultural system, but it is not clear that their incentives for enabling innovation are aligned with maximizing social welfare, just as, with imperfect competition, static resource allocation may not satisfy that optimality property. Given the foregoing discussion, as well as the issues highlighted in previous sections, it is reasonable to suggest that beneficial innovation in Punjab agriculture will not occur solely through the private sector. At an abstract level, the problems of asymmetric information, externalities, the public-good nature of innovations and imperfect competition in various markets along the agricultural value chain all point towards some public sector involvement in facilitating greater innovation, especially innovation that incorporates crop diversification. It is arguably the case that the state government can make targeted interventions that provide effective nudges towards innovation, as well as adoption and diffusion of innovations, even in the face of the severe constraints imposed by the state’s own fiscal situation and the conduct of national food procurement policy. Some of the barriers to innovation have to be overcome by relatively large financial investments in physical infrastructure, but the state government can catalyze the private sector to undertake these investments by improving the ease of doing business in the state.
The public sector’s focus can and should be on improving the knowledge available to farmers, finding ways to overcome their switching costs, and providing them with better insurance as they move towards activities that involve greater risk and uncertainty.

Myoblasts, myocytes, and fibroblasts are the cells of greatest interest for the field of cellular agriculture. For texture and taste, adipocytes may be used and grown either separately or co-cultured with muscle cells. The choice of animal will also have an effect on the final product and production process, because cells from different animals will have different growth characteristics, morphology, and product qualities.

The majority of these cell lines are adherent, meaning they require a suitable substrate to grow. Ideally, cells would be grown in suspension culture, bringing cellular agriculture in line with typical pharmaceutical practice such as CHO cell culture. Micro-carriers may also be used to increase the total growth surface area available to cells. Proliferating many cells is not the only consideration in cellular agriculture. Stem cells differentiate into more complex tissue structures depending on time and environmental conditions, which is critical in forming final products that consumers are willing to purchase. For example, C2C12 immortalized murine skeletal muscle cells differentiate into myotubes at high density and when exposed to DMEM + 2% horse serum. However, because cell differentiation often precludes further proliferation, cells must be periodically passaged to provide more physical space for growth. This is typically done by detaching the cells from the substrate using the enzyme trypsin and physically placing the cells onto additional surface area. Fundamental techniques in cell culture can be found in , and a general overview of mammalian cell culture for bio-production uses can be found in . Figure 1.1b shows a high-level overview of the cellular agriculture process. Throughout this entire process, media is used to support cells by providing them with nutrients, signal molecules, and an environment for growth. We are focused on reducing the cost of the media while supporting cell proliferation, because the media has been identified as the largest contributor to cost. The main considerations for the design of cell culture media in cellular agriculture are that the media must be inexpensive, it must be free of animal products, and it must support long-term proliferation of relevant cell lines and final differentiation into relevant products.
The most basic part of a cell culture medium is the basal component, which supplies the amino acids, carbon sources, vitamins, salts, and other fundamental building blocks of cell growth. The optimal pH of cell culture media is around 7.2–7.4, which is achieved through buffering with the sodium bicarbonate–CO2 system or organic buffers like HEPES. Temperature should be maintained at around 37 °C at high humidity to prevent evaporation of media. Osmolarity of around 260–320 mOsm/kg is maintained by the concentration of inorganic salts such as NaCl as well as hormones and other buffers. Inorganic salts also supply potassium, sodium, and calcium to regulate cell membrane potential, which is critical for nutrient transport and signalling.
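As a rough illustration of the acceptable ranges just listed, a small helper could flag out-of-range culture readings. This is a sketch only: the function name, dictionary layout, and the ±0.5 °C window around 37 °C are assumptions, not part of any established protocol:

```python
# Acceptable ranges from the text; the temperature window is an assumption.
RANGES = {
    "pH": (7.2, 7.4),
    "temperature_C": (36.5, 37.5),
    "osmolarity_mOsm_kg": (260, 320),
}

def check_conditions(readings):
    """Return the names of parameters that fall outside their range."""
    return [name for name, (lo, hi) in RANGES.items()
            if not (lo <= readings[name] <= hi)]

print(check_conditions({"pH": 7.3, "temperature_C": 37.0,
                        "osmolarity_mOsm_kg": 300}))  # → []
print(check_conditions({"pH": 6.9, "temperature_C": 37.0,
                        "osmolarity_mOsm_kg": 340}))
# → ['pH', 'osmolarity_mOsm_kg']
```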

Trace metals such as iron, zinc, copper, and selenium are also found in basal media and serve a variety of tasks such as supporting enzyme function. Vitamins, particularly B and C, are found in many basal formulations to increase cell growth because they cannot be made by the cells themselves. Nitrogen sources, such as essential and non-essential amino acids, are the building blocks of proteins and so are critical to cell growth and survival. Glutamine in particular can be used to form other amino acids and is critical for cell growth. It is also unstable in water, so it is typically supplemented into media as the L-alanyl-L-glutamine dipeptide. Carbon sources, primarily glucose and pyruvate, are essential as they are linked to metabolism through glycolysis and the pentose-phosphate pathway. Fatty acids like lipoic and linoleic acid act as energy storage, precursor molecules, and structural elements of membranes, and are sometimes supplied through a basal medium like Ham’s F12. A sufficient concentration of all of these components is required for proliferating mammalian cells across multiple passages as per above. Having a robust basal medium is a necessary but not sufficient condition for long-term cell proliferation and differentiation. Serum is a critical aspect of cell culture because it provides a mix of proteins, amino acids, vitamins, minerals, buffers and shear protectors. Serum stimulates proliferation and differentiation, transport, attachment to and spreading across substrates, and detoxification. However, serum has large lot-to-lot variability and risks of zoonotic viruses and contamination, as well as the ethical issues associated with collecting serum from animals. Therefore, while it often simplifies cell growth and differentiation, it is critical to remove serum as per point . Supplementation with growth factors like FGF2, TGFβ1, TNFα, IGF1, or HGF is a common way to induce growth of mammalian muscle cells without the use of serum.
Transferrin, another protein found in serum, fulfills a transport role, carrying iron across the cell membrane into the cell. PDGF and EGF are polypeptide growth factors that initiate cell proliferation. Such components enhance cell growth but are expensive and comprise the vast majority of the cost of theoretical cellular agriculture processes. Much work has been done on developing serum-free media. The E8 / B8 medium for human induced pluripotent stem cells is based on Dulbecco’s Modified Eagle Medium / F12 supplemented with insulin, transferrin, FGF2, TGFβ1, ascorbic acid, and sodium selenite. Beefy-9 is similar to E8 but with additional albumin, optimized for primary bovine satellite cells. The approach we will take in this dissertation is to use prior knowledge of biological processes to construct a list of potential media components, and then use design-of-experiments methods to optimize component concentrations based on cell proliferation. This will be particularly useful for cellular agriculture because, as we will see in the next section, DOE methods will help develop media quickly and efficiently.

One of the most difficult aspects of this work is measuring the quality of media. Viable cells must be counted after a period of time over which the scientist believes the medium will have an effect, which changes depending on cell type, media components, cell density, ECM, pH, temperature, osmolarity, and reactor configuration. If cells grow by adhering to a substrate, then sub-culturing / passaging may play a role in the health of a cell population, so discounting this effect may have deleterious effects on media design quality. Counting using traditional methods like a hemocytometer, or with more advanced automatic cell counters using trypan blue exclusion, is labor-intensive and prone to error. Cell growth / viability assays are chemical indicators that correlate with viable cell number, such as metabolism or DNA / nuclei count, and can also be used to quantify the effect of media on cells. In chapter 5 we conducted many experiments with different assays and show the inter-assay correlations in Figure 1.3. Notice that no assay is perfectly correlated with any other assay, because they are collected with different methodologies and fundamentally measure different physical phenomena. For example, Alamar Blue measures the metabolic activity of the population of cells, so optimizing a medium based on this metric might end up simply increasing the metabolic activity of the cells rather than their overall number. As some of these measurements can be destructive / toxic to the cells, continuous measurements to collect data on the change in growth can be tedious. Collecting high-quality growth curves over time may be accomplished using image segmentation and automatic counting techniques. Using fluorescently stained cells and images, segmentation can be done using algorithms like those discussed.
Cells may even be classified based on their morphology dynamically if enough training data is collected to create a generalizable machine learning model.

The primary means by which this dissertation will improve cell culture media is through the application of various experimental optimization methods, often called design-of-experiments (DOE). The purpose of a DOE is to determine the best set of conditions x to optimize some output y by sampling a process over sets of conditions in an optimal manner. If an experiment is time / resource inefficient, then optimizing the conditions of a system may prove tedious. For example, doing experiments at the lower and upper bounds of a 30-dimensional medium like DMEM requires 2^30 ≈ 10^9 experiments. This argues for methods that can optimize experimental conditions and explore the design space in as few experiments as possible. DOEs where samples are located throughout the design space to maximize their spread and diversity according to some distribution are called space-filling designs. The most popular method is the Latin hypercube, which is particularly useful for initializing training data for models and for sensitivity analysis. Maximin designs, where some minimum distance metric is maximized for a set of experiments, can also allow for diversity in samples, with the disadvantage that in high-dimensional systems the designs tend to be pushed to the upper and lower bounds. Thus, we may prefer a Latin hypercube design for culture media optimization because media design spaces may be >30 factors large. Uniform random samples, Sobol sequences, and maximum-entropy filling designs, all with varying degrees of ease of implementation and space-filling properties, may also be used.
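A basic Latin hypercube sampler of the kind described above can be written in a few lines of NumPy; the sample count and the 0–2 g/L concentration bounds below are hypothetical illustrations, not values from the dissertation:

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng=None):
    """Basic Latin hypercube: one random point per stratum in each
    dimension, with strata independently permuted across dimensions."""
    rng = np.random.default_rng(rng)
    # One random point inside each of n_samples equal-width bins on [0, 1).
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    # Independently permute the bin order in each dimension.
    for j in range(n_dims):
        u[:, j] = rng.permutation(u[:, j])
    return u

# e.g. 8 hypothetical media formulations over a 30-component design space,
# scaled to assumed concentration bounds of 0–2 g/L:
lo, hi = 0.0, 2.0
design = lo + (hi - lo) * latin_hypercube(8, 30, rng=0)
print(design.shape)  # → (8, 30)
```

Each column of the unit-cube design contains exactly one point per stratum, which is what gives the Latin hypercube its even one-dimensional coverage even when only a handful of formulations can be tested.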
It cannot be known a priori how many sampling points are needed to successfully model and optimize a design space, because that number depends on the number of components in the media system, the degree of non-linearity, and the amount of noise expected in the response. Because of these limitations, DOE methods that sequentially sample the design space have gained traction, and these are discussed in the next section. A more data-efficient DOE splits a design into a sequence and uses earlier experiments to inform the new experiments in a campaign. One sequential approach is to use derivative-free optimizers (DFOs), where only function evaluations y are used to select new designs x. DFOs are popular because they are easy to implement and understand, as they do not require gradients. They are also useful for global optimization problems because they usually have mechanisms to explore the design space and avoid getting stuck in local optima. The genetic algorithm (GA) is a common DFO in which selection and mutation operators are used to find fitter combinations of genes. In Figure 1.7, note that the GA was able to locate the optimal region of both problems regardless of the degree of multi-modality.
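A minimal elitist GA, sketched in pure Python against a hypothetical multi-modal objective (this illustrates the selection/crossover/mutation loop, not the specific GA implementation used later):

```python
import math
import random

def genetic_algorithm(fitness, n_dims, bounds=(-2.0, 2.0), pop_size=40,
                      generations=60, mutation_rate=0.3, seed=0):
    """Minimal elitist GA: keep the fitter half, refill with uniform
    crossover of two elite parents plus occasional Gaussian mutation."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(n_dims)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]                       # selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [a[i] if rng.random() < 0.5 else b[i]  # uniform crossover
                     for i in range(n_dims)]
            if rng.random() < mutation_rate:               # Gaussian mutation
                j = rng.randrange(n_dims)
                child[j] += rng.gauss(0.0, 0.5)
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# Multi-modal test problem: cosine ripples, global optimum at the origin
f = lambda x: -sum(xi**2 - 2.0 * math.cos(3.0 * xi) for xi in x)
best = genetic_algorithm(f, n_dims=2)
```

Because the elite half is carried over unchanged, the best individual never degrades, while mutation keeps the search exploring neighboring modes.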

Rk is inversely related to volume fraction, which showed a non-significant decrease in photoaged samples

Although we did not directly compare skin equivalents without adipose to AVHSEs here, or directly compare culture time points, we have not observed any obvious changes in epidermal coverage relative to our previous work on vascularized human skin equivalents that do not contain a subcutaneous adipose compartment. While the model is customizable for studying the effects of intrinsic and extrinsic aging factors, as a test case we have demonstrated suitability for studies of UVA photoaging, given the strong literature base of both in vitro and in vivo studies available for comparison. Finally, we demonstrated the accessibility of the model for both molecular and morphological studies. A key aspect of any HSE model is a differentiated and stratified epidermis. Here, N/TERT-1 keratinocytes were used to generate skin epidermis as previously described. Importantly, N/TERTs are a suitable and robust substitute for primary keratinocytes, which have disadvantages including limited supply, limited in vitro passage capability, and donor variability. HSEs generated with N/TERT keratinocytes demonstrate comparable tissue morphology, appropriate epidermal protein expression, and similar stratum corneum permeability when compared to HSEs generated with primary keratinocytes. Similar to prior models, we demonstrate that AVHSEs appropriately model the skin epidermis, with correct localization of involucrin and cytokeratin and with nuclei localized in the lower stratified layers. Further, volumetric imaging and automated analysis allow epidermal thickness to be robustly calculated. AVHSEs present with median epidermal thicknesses of 90-100 µm, similar to values in prior in vitro studies (100-200 µm) and to in vivo optical coherence tomography imaging of adult skin (59 ± 6.4 to 77.5 ± 10 µm).
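The automated thickness measurement can be sketched as follows: given a boolean segmentation mask of the epidermis in a z-stack, count occupied voxels along z in each (x, y) column and take the median. The mask and voxel size below are hypothetical; the actual pipeline is described in the methods:

```python
def median_epidermal_thickness(mask, z_step_um):
    """mask[z][y][x]: booleans marking voxels segmented as epidermis.
    Thickness of each (x, y) column = occupied voxels along z * z-step."""
    nz, ny, nx = len(mask), len(mask[0]), len(mask[0][0])
    thicknesses = []
    for y in range(ny):
        for x in range(nx):
            n = sum(1 for z in range(nz) if mask[z][y][x])
            if n:                       # skip columns with no epidermis
                thicknesses.append(n * z_step_um)
    thicknesses.sort()
    return thicknesses[len(thicknesses) // 2]

# Toy 3-slice stack, 2x2 in plane, with a hypothetical 5 µm z-step
stack = [
    [[True, True], [False, True]],
    [[True, False], [False, True]],
    [[False, False], [False, True]],
]
print(median_epidermal_thickness(stack, z_step_um=5))
```

Using the median rather than the mean keeps the estimate robust to segmentation noise at the tissue edges.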
Consistent with prior in vitro and in vivo results showing UVA wavelengths predominantly impact dermal rather than epidermal layers, UVA photoaging resulted in no observable changes in epidermal thickness or expression of differentiation markers in AVHSEs. In the dermis and hypodermis, skin is highly vascularized, with cutaneous microcirculation playing important roles in thermal regulation and immune function.

Many prior HSE models have not included a vascular component; however, there is increasing recognition of its importance. In the present work, we used collagen IV as a marker of the vascular basement membrane, enabling automated segmentation and mapping of the vascular network within AVHSEs. The vascular VF of AVHSEs is lower than in vivo dermis, but prior work has shown this is tunable by using different cell seeding conditions. Optimizing the VF may be more involved in the AVHSE, since the ratio of adipose and vascular cells has been shown to be important in regulating tissue morphology; this ratio would need to be re-optimized for new cell and collagen densities. Adipose tissue is densely vascularized, and the ability of adipocytes to generate lipid droplets and adipokines in the presence of endothelial cells is important for replicating the in vivo environment. Previous work has shown that co-culture of endothelial cells and mature adipocytes can lead to dedifferentiation of mature adipocytes, but in homeostatic cultures EC-adipocyte crosstalk is important. Through soluble factor release, ECs regulate lipolysis and lipogenesis, and adipocytes regulate vasodilation and contraction. Secretion of adipokines by adipocytes aids vascular formation and adipose tissue stability. In prior work, Hammel & Bellas demonstrated that 1:1 is the optimal ratio for vessel network formation within 3D adipose tissue, and we matched the 1:1 cell ratio in the present work. Quantification of vessel diameter in the Hammel & Bellas study shows that a 1:1 ratio of adipocytes to endothelial cells gives an average vessel diameter of ~10 µm; our work supports this finding with a median inner vessel diameter of ~6 µm. Importantly, these data are within the range of human cutaneous microvasculature of the papillary dermis. We did not observe morphological changes in vascular VF or diameter due to photoaging.
This is not entirely unexpected, as UVA exposure and its effects on vasculature are still poorly understood. While it is established that chronic UVA exposure can contribute to vascular breakdown, the duration of our studies may be too short to see this effect in diameter and VF. However, photoaging did induce an increase in diffusion length. Rk is a measure of the 90th percentile of distance from the vascular network, so a higher value corresponds to less coverage; values presented here match previous studies of vascularized collagen.
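Rk as defined above can be sketched as a distance-transform computation: for every tissue pixel, find the distance to the nearest vessel pixel, then take the 90th percentile. A pure-Python sketch using multi-source BFS on a toy 2D grid (the published analysis works on volumetric image data with a proper Euclidean distance transform):

```python
from collections import deque

def diffusion_length_rk(vessel, px_um):
    """Rk = 90th percentile of distance-to-nearest-vessel over all pixels.
    vessel: 2D boolean grid; px_um: pixel size. Multi-source BFS
    (4-connected) is a simple stand-in for a Euclidean distance transform."""
    ny, nx = len(vessel), len(vessel[0])
    dist = [[None] * nx for _ in range(ny)]
    q = deque()
    for y in range(ny):
        for x in range(nx):
            if vessel[y][x]:
                dist[y][x] = 0          # vessel pixels seed the search
                q.append((y, x))
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            v, u = y + dy, x + dx
            if 0 <= v < ny and 0 <= u < nx and dist[v][u] is None:
                dist[v][u] = dist[y][x] + 1
                q.append((v, u))
    flat = sorted(d * px_um for row in dist for d in row)
    return flat[int(0.9 * (len(flat) - 1))]   # 90th percentile
```

With this definition, a denser network pushes the 90th-percentile distance down, so a rising Rk directly signals shrinking vascular coverage.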

Rk of the vascular network for both control and photoaged samples was within the range of 51-128 µm, importantly below the ~200 µm diffusion limit. Upon photoaging, AVHSEs did demonstrate a significant increase in Rk compared to controls. In vascularized tissue, a high VF and low Rk are preferable, and the Rk increase demonstrated here indicates a loss of vascular coverage in photoaged AVHSEs. These findings conflict with studies of acute UV exposure in skin, which show stimulation of angiogenesis. It has been proposed that UV light exposure may improve psoriasis by normalizing disrupted capillary loops through upregulation of VEGF by keratinocytes. The AVHSE model could be used in future studies to more thoroughly test the effects of UV light and the molecular mechanisms it induces. The vascular networks extend from the adipose to the epidermal-dermal junction, consistent with previous literature and with normal human skin histology/stereography. Further, we observed vasculature colocalized with the lipid droplet BODIPY staining, indicating recruitment of vascular cells to the hypodermis. Importantly, the vascular networks in prior studies and in the present AVHSE are self-assembled. While there are advantages to self-assembly, especially the simplicity of the method, it is important to note the limitations. Cutaneous microcirculation in vivo has a particular anatomical arrangement, with two horizontal plexus planes: one deep in the tissue in the subcutaneous fat region and one just under the dermal-epidermal junction. Between these two planes are connecting vessels running along the apicobasal axis that both supply dermal tissues with nutrients and are an important part of thermoregulation. Although the AVHSEs presented here are fully vascularized up to the epidermal junction, they do not recapitulate this organization.
While not covered in this work, future studies could incorporate layers of patterned or semi-patterned vasculature to more closely match the dermal organization, depending on the needs of the researcher. In contrast to the epidermal and some vascular components, photoaging did impact the hypodermis. Volumetric imaging of BODIPY, which stains lipid droplets, was used to identify the adipose. While small reductions in the morphological parameters were observed, they were not significant, suggesting there was not large-scale necrosis or loss of fat mass. However, there was a significant decrease in the intensity of BODIPY staining, indicating decreased lipid levels. This is consistent with photoaging of excised human skin, in which UV exposure decreases lipid synthesis in subcutaneous fat tissue. We further collected culture supernatant and tested for the presence of adiponectin, IL-6, and MMP-1. The ELISA data show that this AVHSE model secretes both adiponectin and IL-6, which are also present in native skin and are both considered important adipokines. Elevated serum adiponectin levels are linked to anti-inflammatory effects in humans, and centenarians have elevated levels of adiponectin. Decreased adiponectin has previously been associated with photoaging, both in excised human skin that was sun-exposed compared to protected skin and in protected skin exposed to acute UV irradiation. Conversely, IL-6 is a key factor in acute inflammation in skin and has been shown to regulate subcutaneous fat function. In prior studies of photoaging, IL-6 increased after UVA irradiation in monolayer fibroblast cultures and excised human skin. IL-6 is released after UV irradiation and has been linked to decreased expression of adipokine receptors and of mRNA associated with lipid synthesis, decreases in lipid droplet accumulation, and enhanced biosynthesis of MMP-1.
However, after one week of photoaging we did not observe an increase in IL-6 or MMP-1 via ELISA. The absence of changes in IL-6 and MMP-1 expression alongside decreases in lipid accumulation and adiponectin is unexpected, but could be due to methodological differences in UVA exposure. We determined our UVA dose and exposure based on literature values. The dose used here was 0.45 ± 0.15 mW/cm2, with exposure for 2 hours daily for 7 d, which converts to roughly 3.24 J/cm2 per day and a total of 22.68 J/cm2.
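The dose conversion is simple arithmetic (irradiance × exposure time), shown here for the nominal 0.45 mW/cm2 value:

```python
# UVA dose: irradiance (mW/cm^2) x exposure time (s) -> energy density (J/cm^2)
irradiance_mw_cm2 = 0.45          # nominal measured irradiance
exposure_s = 2 * 3600             # 2 h of exposure per day
days = 7

daily_dose_j_cm2 = irradiance_mw_cm2 * exposure_s / 1000.0   # mJ -> J
total_dose_j_cm2 = daily_dose_j_cm2 * days
print(round(daily_dose_j_cm2, 2), round(total_dose_j_cm2, 2))  # 3.24 22.68
```

The ±0.15 mW/cm2 uncertainty propagates linearly, so the daily dose spans roughly 2.16 to 4.32 J/cm2 across the measured irradiance range.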

Many studies do not report exposure time and/or present ambiguous time points. This, compounded with the practice of using doses based on sample pigmentation threshold and a broad definition of UVA wavelengths, likely contributes to the discrepancy in IL-6 and MMP-1 expression. Previous work has shown that neutralizing anti-IL-6 antibody prevents the UV-induced decrease of important fat-associated mRNA, and that IL-6 secreted from keratinocytes and fibroblasts following UV irradiation inhibits lipid synthesis. From previous work, it is clear that IL-6 secretion is upregulated by UVA and that its presence impairs adipose function, but more investigation is necessary to understand which UVA doses and exposures induce IL-6 and at what time points after photoaging these changes are quantifiable. In this model, it is possible that there were increases in IL-6 that contributed to adiponectin decreases in photoaged samples; these trends might have been captured with different media collection time points. Alternatively, other analyses of inflammatory responses and adipokines may show the generalized inflammatory responses identified in the literature, and changes in dose/exposure or continued photoaging may reproduce the previously shown effects. There are notable limitations of the AVHSE model presented. Although we have presented a skin model that is closer to both the anatomy and the biology of human skin in comparison to past HSEs, we have not modeled skin fully, as other features of in vivo skin such as immune and nerve components are not included. A functional immune system is important for understanding autoimmune diseases, cancer, wound healing, and the decline of immune function in aged skin. Additionally, neuronal cell inclusion would allow modeling of sensory processes necessary for grafting and of skin disorders associated with nerve dysregulation.
Further, while the cell lines used in this study were chosen for their low cost and accessibility, primary cells or populations differentiated from induced pluripotent stem cells would more closely match in vivo physiology. While changing cell populations would likely require some adjustment to the culture system, we have previously demonstrated that cell types can be replaced with minimal changes. We model epidermis, dermis, and hypodermis here, but we do not model the depth present in thick skin tissue; to mimic thicker skin, the model would need to be taller. As nutrient and waste diffusion in tissues is limited to ~200 µm, thick tissues will likely require perfusion to be maintained throughout culture. Vasculature in thicker skin has larger diameters, especially in the lower dermis and hypodermis, where vessels can reach up to 50 µm. Finally, for ease of use, initial collagen density in the AVHSE model is 3 mg/mL, much lower than in vivo densities. Decline of collagen density is an important aspect of skin aging, correlating with skin elasticity and wound healing. Varying collagen density influences vascular self-assembly, but higher collagen densities are achievable through a variety of techniques, including dense collagen extraction and compression of the collagen culture. By incorporating these tools, AVHSE could be modified to more closely represent the in vivo dermal matrix. Further, the AVHSE method was demonstrated with low serum requirements: serum was used for initial growth, and the cultures were maintained for weeks without serum. Serum replacements during the growth phase could potentially provide a chemically defined, xeno-free culture condition in the early culture stages for greater reproducibility and biocompatibility. The presented AVHSE model provides unique capabilities compared to cell culture, ex vivo, and animal models.
Excised human skin appropriately models penetration of dermatological products, but supply is limited and donor variability is high. Replacing excised human skin with animal models or commercially available skin equivalents is not ideal because of differences in penetration rates, lipid composition and content, morphological appearance, healing rates, and cost, as well as limited customization. AVHSEs can be cultured using routinely available cell populations, are cost-effective, and are customizable for specific research questions. Further, the model is accessible for live imaging, volumetric imaging, and molecular studies, enabling a wide range of quantitative studies.

Pre‐processing included development of PET estimates from the downscaled air temperature

We report three analyses: trends in the time slice characterizing the baseline period; the calibration and validation of basin discharge, by comparing post‐processed runoff and recharge measures to derive discharge and comparing that value to streamgage measurements; and a comparison of historical and future conditions for the BCM variables—precipitation, potential evapotranspiration, runoff, recharge, and climatic water deficit. We present map‐based assessments using the difference in magnitude for each variable; the number of standard deviations by which projected future conditions will differ from the standard deviation of baseline conditions; and the geographic variations across California of both historical and future projections. Temperature values are available, but for brevity, and because temperature has previously been more widely reported, this paper focuses on hydrological components. The process used to estimate hydrologic impacts of climate change at fine scales involved downscaling climate data for model input. The BCM then generated outputs as a series of hydrologic and associated variables. This section discusses precipitation, air temperature, PET, snow pack, runoff, recharge, and climatic water deficit. During the 30‐year baseline period of 1971–2000, precipitation generally increased, with the exception of the deserts and eastern Sierra Nevada. The largest percentage increases are in the Great Valley, Central Western California, and Sierra Nevada. Both minimum and maximum air temperatures increased for all ecoregions, ranging from 0.5°C to 1.6°C for minimum air temperature, with much smaller increases for maximum air temperature. Potential evapotranspiration increased throughout the state by about 3 percent. Recharge decreased by up to 24 percent in southwestern California and by 11 percent in northwestern California, while all other ecoregions increased in recharge.
Recharge in the Mojave Desert increased by 51 percent, and in the Modoc Plateau by 42 percent. The change in climate over the 30‐year period is exemplified by the changes in snow pack in California, which integrates the effects of precipitation and air temperature on the dominant water resource in California for water supply.

The snow pack in this region is the warmest in the western United States and is the most sensitive to small changes in air temperature. This is illustrated by the change in April 1 snow pack, where snow pack has diminished the most in extent in the northern portions of the state, whereas the highest-elevation snow pack in the southern Sierra Nevada mountains and on Mount Shasta has actually increased in some locations. However, the dominant loss of April 1 snow pack results in less runoff to extend surface water resources throughout the summer season. This situation has implications for recharge and climatic water deficit as well. Corresponding to increases in precipitation, runoff increased over the baseline period in most locations in the state, notably the northern Sierra Nevada Mountains and parts of the Trinity Mountains in the northwestern ecoregion. Some declines are noted in the northwest, where the smallest change in precipitation occurred. Decreases in recharge are notable in the northwest portions of the state, with moderate decreases in the Sierra Nevada foothills and southern California mountains. Generally, locations with little to no recharge, such as areas with deep soils or arid climate, also had little to no change in recharge. Detailed views of basins in the Russian River watershed and Santa Cruz mountains are shown in Flint and Flint, illustrating the dominance of runoff in the Russian River watershed, where water supply relies heavily on reservoirs, in contrast to the reliance on groundwater resources and recharge in the Santa Cruz mountains. Increases in runoff in snow‐dominated regions, due to warming air temperatures, diminish recharge, which is more likely to occur during the slow snowmelt season.
This is confirmed for the northwestern ecoregion, where the Trinity Alps decreased in snow pack, and shows small increases for the Sierra Nevada, in contrast to other regions. Figure 9a shows the average annual climatic water deficit for 1971–2000. There is high climatic water deficit in the southern Central Valley and the Mojave and Sonoran Deserts, and low climatic water deficit in the north coast and Sierra Nevada. Climatic water deficit declined over the baseline period in the central and northwestern California ecoregions and the Great Valley, while in all other regions, despite the increases in precipitation, climatic water deficit increased. This variable integrates energy loading and moisture availability from precipitation with soil water holding capacity. The distribution of moisture conditions that generally define the amount of water in the soil that can be maintained for plant use throughout the growing season and summer dry season corresponds very well to the established distribution of vegetation types. However, in many locations, shallow soils limit the contribution of precipitation. The lowest climatic water deficits in California are in regions with snow pack that, as it melts in the springtime, provides a longer duration of available water, thus maintaining a lower annual climatic water deficit despite shallow soils.

Locations in the south with higher PET have higher climatic water deficits. Precipitation has increased in most locations, but has declined in the desert and eastern Sierra Nevada. Air temperature and PET have increased in all ecoregions. This translates into increases in climatic water deficit in nearly all locations, particularly those dominated by snow pack, such as the Sierra Nevada ecoregion and the Trinity Mountains in the northwestern California ecoregion. The recorded increases in air temperature, particularly minimum air temperature, result in earlier snowmelt and reduce the ability of the snow pack to sustain the water available throughout the summer season. The deserts all increased in deficit, with declining precipitation and increasing air temperature. However, some small areas in the Great Valley ecoregion experienced small decreases in deficit because deep soils were able to store the additional precipitation rather than shed it as recharge or runoff. Some moderating effects of coastal climatic conditions are seen in small valleys along the coast with decreases in deficit. In the analysis of the impacts of historic-to-future climate on hydrology, we characterized the changes in precipitation, PET, runoff, recharge, and climatic water deficit from the BCM for watersheds and for ecoregions, and compared changes in variables from historical to baseline periods and from the baseline period to the end of the twenty‐first century. Three types of map analyses were applied to this comparison: assessment of the difference in magnitude for each variable; the number of standard deviations of baseline conditions by which historic and projected future conditions differ; and a geographic review of the variations in hydrologic conditions across California for both historical and future time periods. A summary of variables by modified Jepson ecoregion and for the HUC 12 watersheds averaged over the extent of California was calculated.
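The standard-deviation-based map comparison reduces, per grid cell, to expressing the change in 30-year means in units of the baseline interannual standard deviation. A sketch with synthetic values (actual series come from the BCM grids):

```python
def change_in_baseline_sd(baseline_years, future_years):
    """(future mean - baseline mean) expressed in baseline standard deviations."""
    n = len(baseline_years)
    mean_b = sum(baseline_years) / n
    sd_b = (sum((v - mean_b) ** 2 for v in baseline_years) / (n - 1)) ** 0.5
    mean_f = sum(future_years) / len(future_years)
    return (mean_f - mean_b) / sd_b

# Synthetic 30-year annual precipitation (mm) for one cell: mean 600, SD ~51
baseline = [650 if i % 2 == 0 else 550 for i in range(30)]
future = [v - 60 for v in baseline]        # a uniformly drier projection
print(round(change_in_baseline_sd(baseline, future), 2))  # about -1.2 SD
```

A change below about 0.5 SD in magnitude, as for much of the precipitation signal, sits well within the baseline year-to-year variability.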
Overall, mean precipitation increased by 80 millimeters between 1911–1940 and 1971–2000. Under the PCM scenarios, precipitation continued to increase to 2070–2099, but it decreased under the GFDL scenario. Potential evapotranspiration increased 10 mm from historic to baseline time frames, and increased under all future time frames between 51 and 104 mm. Runoff increased historically by 36 mm. It increased under future PCM projections by 51 to 77 mm, but decreased under GFDL projections by 38 to 42 mm.

Finally, climatic water deficit decreased by 16 mm from historic to baseline time; however, it increased under all projections between 40 and 174 mm, indicating increases in PET and decreases in available soil moisture, resulting in lower actual evapotranspiration. While most of northern California became wetter from the historic to baseline time, only the northeast, an eastern area representing the high Sierra Nevada and Inyo/White mountains, and a few scattered watersheds saw an increase of even one‐half a standard deviation from the baseline SD for the 30‐year mean, a pattern that is mostly repeated when looking at the statistically significant trends. This suggests that the trend in increased moisture is well within the baseline year-to-year variability of precipitation. The same is true for the southern half of the region, which mostly shows a drying trend. As expected, given the GCMs selected, the PCM future scenarios forecast increased precipitation, and GFDL forecasts a drier future. However, compared to baseline precipitation variability and statistically significant change, only the desert ecoregions receive more than 0.5 SD more precipitation under PCM, while under GFDL A2, the northern half of California loses precipitation mostly between 0.5 and 0.9 SD. The calculation of PET using the Priestley‐Taylor equation assumes that PET is a function of, and is non‐linearly related to, air temperature. The application of PET in the BCM assumes that plants are in equilibrium with their environment and will transpire at maximum rates until the soil reaches the wilting point. Potential evapotranspiration increased from the historical to the baseline time period in most of California, with the exception of a few places in the Sierra Nevada, where it decreased by between 0.5 and > 2 SD of baseline PET values, with similar patterns in the significance values.
The extreme change in these locations is due to cooling air temperature; because PET is already low in these locations, and because of the non‐linear relation between PET and air temperature, the change is greater than if the PET were initially high. Potential evapotranspiration is projected to increase under all scenarios and for all ecoregions, and shows one of the strongest spatial patterns of all the variables, with nearly the entire region increasing by at least 1 SD, statistically significant, under the PCM projections, and by > 2 SD under the GFDL projections. Annual runoff values increased slightly in California between 1911–1940 and 1971–2000, a change driven by increases throughout the northwest ecoregion and in the northern Sierra Nevada. Looking at this difference relative to the standard deviation during the baseline time period, none of the watersheds had runoff increase by more than one standard deviation, but a few in the desert ecoregions decreased by more than one. This is because the annual runoff in these watersheds was less than 3 mm in 1911–1940 and less than 1 mm in 1971–2000. Comparing the baseline conditions to future scenarios, the PCM model shows an increase in runoff for all ecoregions except the Modoc Plateau, especially in the Sierra Nevada and the coast ranges, while the GFDL model shows an almost inverse pattern of drying. Because of the very low runoff values in the baseline time period, the incremental increases in the desert regions of the study show future runoff to be above 1 SD under the PCM model. For the GFDL model, parts of the Sierra Nevada and the northeast region of the state show decreases in runoff above 0.5 SD of baseline.
Note that statistically significant change differs from the SD view under the future scenarios, particularly in the desert systems, where much of the change, while large in terms of standard deviations, is not significant at the 0.05 level. Annual recharge values increased throughout the mountains and coast of northern California between 1911–1940 and 1971–2000, similar to runoff in distribution but at a lower magnitude. Declines in recharge in the southern parts of the state and the Central Valley are of a similar magnitude. The difference between the time periods relative to the standard deviation during the baseline time period indicated very small changes outside the normal variability. The differences between recharge and runoff are more pronounced in the changes between baseline and the future scenarios. This difference is exemplified by a very important characteristic that results from warming, regardless of the direction of change in precipitation in future projections: the alteration of seasonality, with a shorter wet season and a longer dry season. For the wet scenarios, there are slight increases in recharge in the Central Western and Great Valley ecoregions, and in the Cascades and Sierra Nevada, but in contrast to runoff there are declines in recharge in the Sierra foothills and the northwestern part of the state. Because of the compression of the wet season with warming, in addition to the earlier onset of springtime snowmelt, there is less time with conditions conducive to recharge.

The calculation of excess water provides the water that is available for watershed hydrology

Modeled PET for the southwest United States has been calibrated to measured PET from California Irrigation Management Information System and Arizona Meteorological Network stations. Using PET and gridded precipitation, maximum and minimum air temperature, and the approach of the National Weather Service Snow‐17 model, snow is accumulated, sublimated, and melted to produce available water. These driving forces for the water balance have been calibrated regionally to solar radiation and PET data, and snow cover estimates have been compared to Moderate Resolution Imaging Spectroradiometer snow cover maps. However, the final calibrations of snowmelt and runoff have illustrated goodness‐of‐fit, as will be shown in the results. Available water occupies the soil profile, where it may become actual evapotranspiration, runoff, or recharge, depending on the permeability of the underlying bedrock. Total soil‐water storage is calculated as porosity multiplied by soil depth. Field capacity is the soil water volume below which drainage is negligible, and wilting point is the soil water volume below which actual evapotranspiration does not occur. Once available water is calculated, it may exceed total soil storage and become runoff, or it may be less than total soil storage but greater than field capacity and become recharge. Anything less than field capacity will be calculated as actual evapotranspiration at the rate of PET for that month until the soil reaches wilting point. When soil water is less than total soil storage and greater than field capacity, the soil water in excess of field capacity equals recharge. If recharge is greater than bedrock permeability (K), then recharge = K and the excess becomes runoff; otherwise water recharges at K until the soil drains to field capacity.
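The monthly bookkeeping described above can be sketched as a single update step. All quantities are in mm, and this is an illustrative reading of the published rules rather than the BCM source code (in the BCM, evapotranspiration proceeds at the PET rate concurrently within the month):

```python
def bcm_monthly_step(available_water, soil_water, porosity, depth,
                     field_capacity, wilting_point, pet, k_bedrock):
    """One monthly soil-water step following the rules above:
    water above total storage -> runoff; water between field capacity and
    total storage -> recharge, capped at bedrock permeability K (rejected
    recharge runs off); water above wilting point evaporates at the PET rate."""
    total_storage = porosity * depth
    soil_water += available_water

    runoff = max(0.0, soil_water - total_storage)      # overflow of the profile
    soil_water = min(soil_water, total_storage)

    drainable = max(0.0, soil_water - field_capacity)
    recharge = min(drainable, k_bedrock)               # limited by bedrock K
    runoff += drainable - recharge                     # rejected recharge runs off
    soil_water -= drainable

    aet = min(pet, max(0.0, soil_water - wilting_point))
    soil_water -= aet
    return soil_water, runoff, recharge, aet
```

A quick mass-balance check on any one step (inputs = runoff + recharge + AET + change in storage) is a useful sanity test for this kind of bucket model.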

Runoff and recharge combine to calculate basin discharge, and actual evapotranspiration is subtracted from PET to calculate climatic water deficit. The BCM can be used to identify locations and climatic conditions that generate excess water by quantifying the amount of water available either as runoff generated throughout a basin or as in‐place recharge. Because of the grid‐based, simplified nature of the model, with no routing of runoff to downstream cells, long time series for very large areas can be simulated easily. However, if local unimpaired stream flow is available, estimated recharge and runoff for each grid cell can be used to calculate basin discharge that can be extrapolated through time for varying climates. In addition, the application of the model across landscapes allows for grid‐based comparisons between different areas. Because of the modular and mechanistic approach used by the BCM, it is flexible with respect to incorporating new input data or updating algorithms should better calculations be derived. A flow chart indicating all input files necessary to operate the BCM, and the output files resulting from the simulations, is shown in Appendix A. After running the BCM, the 14 climate and hydrologic variables were produced in raster format for every month of every year modeled. To evaluate hydrologic response to climate for all basins in hydrologic California, we used the BCM to calculate hydrologic conditions across the landscape for 1971–2000 and to project them for the two GCMs and two emission scenarios for 2001–2100. Trends in climate, the hydrologic derivatives of runoff and recharge, and climatic water deficit are analyzed separately for the historical‐to‐baseline and baseline‐to‐future time periods. Although recharge and runoff were calculated for every grid cell and summarized as totals for basins, the estimate of basin discharge as a time series requires a further calculation of stream flow.
Calculation of stream flow uses a series of equations that can be calibrated with coefficients from existing streamgage data, which then permit estimation of basin discharge for time periods when there are no stream flow measurements. We calculated basin discharge for each of 138 basins for which we also obtained streamgage data, and used the 138 streamgage datasets for calibration and validation.

The regional BCM developed for the southwest United States was applied to California following regional calibrations for solar radiation, PET, snow cover, and groundwater. The California calibration is based on study areas with ongoing studies that were designed to provide runoff and recharge for historic, baseline, and future climatic conditions. Generally, the watersheds used as calibration basins were identified on the basis of a lack of impairments, such as urbanization, agriculture, reservoirs, or diversions, although this was not always possible. We used 68 basins for which bedrock permeability was iteratively changed to optimize the match between calculated basin discharge and measured stream flow. Calibration basins represent 9 of the 14 dominant geologic types in California and have been calibrated to bedrock permeability on the basis of mapped geology for California. The BCM performs no routing of stream flow; routing is done as post-processing to produce total basin discharge for any basin outlet or pour point of interest, such as streamgages or reservoirs. The 68 calibration basins were calibrated to optimize the match between BCM-derived discharge and stream flow by iteratively adjusting the bedrock permeability corresponding to the geologic types located within each basin, altering the proportion of excess water that becomes recharge or runoff. This part of the calibration process is followed by accounting for stream channel gains and losses to calculate basin discharge, optimize the fit between total measured volume and simulated volume for the period of record for each gage, and maintain a mass balance among stream flow and BCM recharge and runoff. For comparison to the calibration basins, and to evaluate model performance in representing the state, additional validation basins were identified for the calculation of discharge on the basis of a general lack of impairments, as well as statewide coverage of landscapes and geology.
Hydrologic results for these basins were developed on the basis of the calibration to bedrock permeability performed using the calibration basins. The calibration and validation basins are distributed across the range of elevation, aridity, and bedrock permeability in comparison to all basins in California, and we also show the relationship between them for the same three environmental conditions. Study basins generally cover the range of elevations for the state.
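The iterative volume-matching described above can be sketched as a simple fixed-point loop: a permeability-like parameter is rescaled until the simulated total discharge volume agrees with the gaged total. The `simulate` callback and the proportional update rule are hypothetical stand-ins; the actual BCM calibration adjusts per-geologic-type permeability and also accounts for channel gains and losses.

```python
# Hedged sketch of iterative calibration: adjust a permeability-like
# parameter until simulated total volume matches the measured total.
# The proportional update assumes simulated volume scales monotonically
# with the parameter; the real BCM calibration is more involved.
def calibrate_permeability(simulate, measured_total, p0=0.5,
                           tol=0.01, max_iter=50):
    """Return a parameter p for which simulate(p) ~= measured_total."""
    p = p0
    for _ in range(max_iter):
        total = simulate(p)
        ratio = measured_total / total
        if abs(ratio - 1.0) < tol:   # within tolerance of a mass balance
            break
        p *= ratio                   # proportional correction
    return p
```

For a model whose volume responds linearly to the parameter, the loop converges in one or two iterations; real basins require more iterations and per-geology adjustments.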

Bedrock permeability as a representation of geology is dominated by lower-permeability basins because very high permeability basins, such as those with alluvial valley fill, do not generate stream flow. The range of climates in the state, represented by the UNESCO Arid Zone Research program aridity categories, is covered less well by the study basins and neglects the hyper-arid and arid locations due to a lack of stream flow data. The representation of study basins within the ecoregions of the state also reflects the lack of streamgage data in the desert areas, in the eastern side of the Sierra Nevada, and in the deep soils of the Central Valley, where any gaged streams are very impaired. Calibration statistics are shown in Appendix C and spatially in Figure 6, with the linear regression r2 for monthly and yearly comparison of measured and simulated basin discharge, and the Nash-Sutcliffe efficiency statistic (NSS), calculated as 1 minus the ratio of the mean square error to the variance. The NSS is widely used to evaluate the performance of hydrologic models, generally being sensitive to differences in the observed and simulated means and variances, but, like r2, it is overly sensitive to extreme values. The NSS ranges from negative infinity to 1, with higher values indicating better agreement. Average calibration statistics for all basins are NSS = 0.65, monthly r2 = 0.70, and yearly r2 = 0.86. In our study, calibration basins have a mean NSS of 0.71, with higher values for the Russian River basin, just north of the San Francisco Bay Area, and lower values for the Santa Cruz basins, just south of the Bay Area, where there are many urban impacts. There are several cases where urbanization and agriculture were identified as factors resulting in the inability to calculate a mass balance. The measured stream flow at Aptos Creek at Aptos had very high peaks that were not reproduced by the BCM.
This basin is dominated by urbanization, suggesting that the high peak flows were a result of urban landscapes enhancing runoff, both during precipitation events, when there is reduced infiltration, and during the summer, when urban runoff is enhanced; neither effect is taken into account in the BCM. In order to match measured volumes and stream flow patterns, the runoff was reduced by 80 percent and the recharge by 50 percent. An example of diversions and groundwater pumping for public use can be seen in the difference between the Merced River at Happy Isles, upstream of Yosemite Village, and the Merced River at Pohono, downstream of Yosemite Village, where the percentage of runoff is reduced to 45 percent to match measured flows.
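The Nash-Sutcliffe statistic used in these comparisons, one minus the ratio of the mean square error to the variance of the observations, can be computed directly. This sketch assumes matched lists of monthly observed and simulated discharge values.

```python
# Nash-Sutcliffe efficiency: 1 - MSE / variance of the observations.
# Ranges from negative infinity to 1; higher values indicate better
# agreement between observed and simulated discharge.
def nash_sutcliffe(observed, simulated):
    n = len(observed)
    mean_obs = sum(observed) / n
    mse = sum((o - s) ** 2 for o, s in zip(observed, simulated)) / n
    variance = sum((o - mean_obs) ** 2 for o in observed) / n
    return 1.0 - mse / variance
```

A model that simply predicts the observed mean scores exactly 0, so positive values indicate the model outperforms a constant-mean prediction; like r2, the squared errors make the statistic sensitive to extreme values.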

The basin discharge for the validation basins, which were not used for calibration, was developed using the adjusted bedrock permeability values from calibration. The mean NSS for these basins is 0.61, with the upper Klamath and small basins in the Modoc Plateau volcanics performing the poorest. This is likely due to the large groundwater reservoir in the volcanics, which has very long travel times from precipitation input to outflow in streams. An example of a calibration in the volcanics, for the Sprague River basin, illustrates the large base flow component with a high base flow exponent. The Sprague River basin also has a large agricultural component and return flows, so the attempt to maintain a match in volumes results in an overestimate of the peak flows. The presence of a groundwater reservoir also shows in the differences between the r2 values for the monthly and yearly calculations, which identify lags in the monthly calibration between measured and simulated discharge that are negated when calculated yearly. There is a large difference for the Kings River above the North Fork near Trimmer, for example, indicating the potential for a lag in groundwater flows becoming base flows that appear at the base of the basin and are not accounted for in a monthly model, whereas the yearly r2 is very high. The basins in the volcanics consistently show a larger range between the two r2 values, which is also illustrated in the Sprague River near Beatty, Oregon, calibration by the mismatch in the timing of the peaks. For California, we produced 270 m grids to represent historic and future climates from 1900 to 2100, resulting in 6,594,862 grid cells statewide and a map for each of the 14 variables for each month. For the historic data and four future scenarios, this produced over 11 terabytes of data. We then created water year summaries of the 14 variables. The water year starts in October and ends in September.
For the two temperature variables we averaged the temperature over the water year, and for the other 12 variables we summed the data over the 12 months. Since retaining yearly values for this region results in unwieldy, large files, we reduced the data size for distribution and analysis to 30-year summaries, providing monthly average values for variables historically for 1911–1940, 1941–1970, and 1971–2000. Future climate values are based on 100-year simulations, with 2010–2039, 2040–2069, and 2070–2099 time slices produced. We also developed summaries for 10-year periods based on time slices starting with 1911–1920 and running through 2090–2099. Appendix D has a list of all available variables, file sizes, formats, and acronyms. We wrote a program to summarize the 30-year datasets by various statistical measures to create a manageable dataset for analysis of long-term trends. We calculated these statistics for both annual and monthly average values. Statistics were developed for each 30-year time period by applying a linear regression model to the input rasters, which produced the seven statistics for each variable for each 30-year time period. The linear regression model used equations from Zar. Change over the historical baseline period 1971–2000 was described as the slope of the regression model multiplied by 30 years. We characterized the variables calculated by the BCM for watersheds and for ecoregions, and compared historical summaries and patterns to future projections.
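The trend calculation, change described as the regression slope multiplied by 30 years, can be sketched for one grid cell's annual series using the standard ordinary least-squares slope. This is a minimal sketch of that single step; it does not reproduce the full seven-statistic raster program described above.

```python
# Change over a 30-year period as the least-squares slope times the
# period length, per the paper's description. A sketch for one cell's
# annual series, not the full raster summary program.
def change_over_period(years, values, period_years=30):
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(values) / n
    # ordinary least-squares slope
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, values))
    den = sum((x - mean_x) ** 2 for x in years)
    slope = num / den
    return slope * period_years
```

For a series rising at exactly 0.1 units per year over 1971–2000, the reported change over the 30-year baseline would be 3.0 units.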

New Chinatown still exists as a tourist attraction and remains a center of local Chinese American life

Representations of Chinatown defined the cultural possibilities of citizenship for Chinese Americans in the same way the law defined the possibilities of legal citizenship. During the Chinese Exclusion Act era, there remained real political and material stakes to the way Chinatown was popularly portrayed. For at least half a century, media elites and leaders in Los Angeles had portrayed Old Chinatown as a site of tong violence, illicit drug use, and prostitution. These stereotypes of Chinatown were rooted not just in ideas of race, but also in perceived differences of gender and sexuality. Images of vice and corruption were a direct result of popular representations that depicted Chinatown as a community of bachelors living together in an all-male social world. The few women in the community were usually portrayed as prostitutes. Thus, Chinatown was popularly linked with a deviant form of sexuality that challenged the normative ideal of the white middle-class family united in Christian marriage.3 Furthermore, many white residents of Los Angeles believed that the built environment of Chinatown contributed to this vice. Stories of an underground network of lairs and secret tunnels facilitated the idea that Chinatown lay outside the vision and control of white authorities. New Chinatown in Los Angeles built on prior efforts by the Chinese American merchant class throughout North America to redefine the place of Chinatown in the popular imagination. Beginning with the Chinese Village at the 1893 World's Columbian Exposition, and continuing through the reconstruction of San Francisco's Chinatown following the 1906 earthquake and fire, Chinese American merchants challenged notions of Chinatowns as disease-ridden slums and refashioned them into spaces of commerce that catered to white tourists.4 During this time period, Chinese American merchants served as cultural brokers whose position between white tourists and the vast majority of working-class Chinese Americans allowed them to consciously transform these segregated ethnic communities into sites that presented their own vision of Asia to the outside world. This was done in a way that challenged notions of Chinatowns as manifestations of the Yellow Peril while monetizing these sites in a way that allowed Chinese American entrepreneurs to make a living.

In New Chinatown, local Chinese American merchants took concepts pioneered in San Francisco's Chinatown and in world's fair expositions and saw them through to their logical end. In fact, New Chinatown was not a neighborhood at all but a corporation, the stock of which was privately held by a select group in the city's emerging Chinese American middle class.5 These merchants and restaurant owners maintained complete control over their new Chinatown. From the land on which the business district was built, to the architectural style that accompanied the area's businesses, to the advertisements that publicized the district in the city's papers, New Chinatown reflected the desires of its owners both to attract tourists and to challenge the conceptions that had come to dominate Old Chinatown. The opening day festivities of New Chinatown featured appearances by local Chinese American actors who had made a name for themselves in the China-themed films of the 1930s.6 Following the Japanese invasion of Manchuria in 1931, Hollywood began producing a series of Chinese-themed films, many of which featured Chinese American performers from the Los Angeles area. The most high-profile of these films was MGM's The Good Earth, a film based on Pearl S. Buck's award-winning 1931 novel. Present at the opening of New Chinatown were Keye Luke and Soo Yung, Chinese American actors with supporting roles in The Good Earth. Also present was Anna May Wong, the most recognizable Chinese American star of the period. Despite being passed over for a role in The Good Earth, Wong had already appeared in a number of high-profile films including Thief of Bagdad, Piccadilly, and The Shanghai Express. New Chinatown would soon feature a willow tree dedicated to Ms. Wong. To complete the Hollywood connection, the New Chinatown opening featured an art exhibit by Tyrus Wong, a Hollywood animator who would later work on the classic animated film Bambi.
Despite these connections to Hollywood, in many ways New Chinatown attempted to cast itself as the modern Chinese American alternative to the representation of China seen in films like The Good Earth. The opening gala included flags of both the Republic of China and the United States spread around the district.

The parade featured four hundred members of the Federation of Chinese Clubs, local Chinese American youth, most of them American-born, who had banded together to raise financial support for China following the outbreak of the Sino-Japanese War in 1937.7 At the same time, a number of prominent state and local officials participated in the festivities, including Governor Merriam, who was then locked in a difficult reelection campaign and who hoped that his participation would solidify the small but not insignificant Chinese American vote. In these complex and hybrid ways, the founders positioned New Chinatown as a distinctly Chinese American business district, one that reflected the increasingly U.S.-born demographics of the nation's Chinese American community. New Chinatown was not the only Chinatown to open in Los Angeles in the summer of 1938. Two weekends earlier, less than a mile away, a group of white business leaders headed by philanthropist Christine Sterling had opened their own competing Chinatown, which they dubbed China City.8 If New Chinatown was defined by the ethos of the American-born generation, China City was defined by Hollywood. This was to be a Chinatown that embodied the images that film audiences saw when they entered the theaters to watch the Chinese- and Chinatown-themed films so popular in the 1930s. New Chinatown may have drawn on Hollywood actors to publicize its existence, but China City in many senses was a Hollywood production. Like New Chinatown, this was a business district, not a neighborhood; but unlike New Chinatown, China City adhered much more closely to the Orientalist images of China produced by Hollywood cinema. In China City, visitors could attend the Bamboo Theater, featuring continuously running films about China. They could walk through a recreation of the set for the House of Wang from The Good Earth. Many of the Chinese Americans employed in China City had also worked as extras on the MGM film.

And so tourists might encounter some of the very people they had seen in the background shots of the film. In China City, tourists could pay to be drawn around by rickshaw. According to the Los Angeles Times, visitors to China City could purchase “coolie hats, fans, idols, miniature temples, and images.”9 One of the shops was owned by Tom Gubbins, a local resident of Chinatown who supplied Hollywood with costumes and props for Chinese-themed films and connected local residents with jobs as extras. In both New Chinatown and China City, Chinese Americans utilized Chinatown to mediate dominant ideas about race, gender, and nation.10 These two Chinatowns were more than physical sites for members of an ethnic enclave to make a living. They also represented the apparatus through which the local Chinese American community performed its own cultural representations of China and Chinese people to crowds of largely white visitors. In more ways than one, Chinese American performances in these two districts were the culmination of a fifty-year process through which the Chinese American merchant class challenged Yellow Peril stereotypes by transforming China and Chinese culture into a nonthreatening commodity that could be sold to white tourists. Examining a period of national debate over immigration and U.S. citizenship, this dissertation, “Performing Chinatown: Hollywood Cinema, Tourism, and the Making of a Los Angeles Community, 1882-1943,” foregrounds the social, economic, and political contexts through which representations of Chinatown in Los Angeles were produced and consumed. Across five chapters, the dissertation asks: To what extent did popular representations and economic opportunities in Hollywood inform life in Los Angeles Chinatown? How did Chinese Americans in Los Angeles create, negotiate, and critically engage representations of Chinatown?
And in what ways were the rights of citizenship and national belonging related to popular representations of Chinatown? To answer these questions, the project examines four different “Chinatowns” in Los Angeles—Old Chinatown, New Chinatown, the MGM set for The Good Earth, and China City—between the passage of the Chinese Exclusion Act in 1882 and its repeal in 1943 during the Second World War. The relationship between film and Chinatown stretches back to the 1890s, to a moment when both featured as “urban amusements” for a newly developing white urban public in places like New York, and yet the connection between Chinatown and film reached its apex in Los Angeles in the 1930s during the height of the Hollywood studio system.

The Chinatowns of San Francisco and New York may have been larger and attracted more tourists, but Los Angeles Chinatown and the Chinese American residents of the city played a more influential role in defining Hollywood representations of China and Chinese people than any other community in the United States. Long before the outbreak of World War II, the residents of Los Angeles Chinatown developed a distinct relationship to the American film industry, one that was not replicated anywhere else during this period. Despite this distinct relationship, there have been no dissertations or academic books published about Los Angeles Chinatown and its relationship to Hollywood cinema. Asian American historians who work on Los Angeles have for the most part focused on the city's Japanese American population.11 Sociologists of the region have focused on Asian Americans in the ethnoburbs of the San Gabriel Valley.12 Film studies scholars who examine Asian American representations have focused primarily on the films themselves or on writing biographies of a few well-known Hollywood performers such as Anna May Wong, Philip Ahn, and Sessue Hayakawa.13 With professional academics focused on different but related topics, nearly all of the research on the history of Chinese Americans in Los Angeles and their relationship to Hollywood film has been completed by community historians at organizations like the Chinese Historical Society of Southern California and the Chinese American Museum of Los Angeles.14 Most of these community historians are volunteers who research and write out of passion for the subject matter. Many also have family ties to this history. This familial link holds for the most popular retelling of this history, Lisa See's novel Shanghai Girls. Lisa See is a descendant of the Chinese Americans who lived in Los Angeles before World War II.15 In contrast, professional academics have all but ignored this history. What accounts for the relative absence of scholarship on the relationship between the Chinese American community of Los Angeles and the Hollywood film industry? Certainly, the topic of Chinatown remains one of the most thoroughly studied aspects of the Asian American experience. Alongside scholarship examining the political and legal apparatuses used to exclude Asian people from the United States, Chinatown is one of the few topics in Asian American studies that elicited significant scholarly consideration before the birth of the field in the late 1960s.16 More than a dozen monographs have been produced examining various aspects of Chinatowns from the fields of sociology and history. In the popular realm, interest in Chinatown as a site of tourism and as a cultural representation also remains strong. In addition to the long-standing interest in Chinatown as an academic topic, the material traces of this history remain highly visible. Films like Shanghai Express, Lost Horizon, and The Good Earth, which all employed Chinese American background performers, are available for home viewing. Photographs of Chinatown performances from this period, including those of the Mei Wah Drum Corps, have been digitized and are available online through archives such as the Los Angeles Public Library's Shades of L.A. project. And yet the distinct theoretical, methodological, and disciplinary tenets of sociology, social history, and film studies have limited the types of questions scholars have asked about Chinatown and film, and by extension the types of conclusions these scholars have drawn.

Diabetic treatment under the new initiative had objectives similar to those of the asthma component

Utilizing the collaborative technique enabled the primary care practice teams to make many changes in the way they cared for patients with chronic illness. The evidence suggested that improvements in patient outcomes resulted from this intervention. After the late 1990s, more evidence in support of the model appeared. Due to the general popularity of the model, in 2001 ICIC's three-year Targeted Research Grants Program provided funding for peer-reviewed, applied research focused on critical questions about the organization and delivery of chronic illness care within health systems. Nineteen projects were selected, with grants totaling approximately $6 million backed by the Robert Wood Johnson Foundation. The research included evaluations of interventions such as group visits or care managers, observational studies of effective practices, and the development of new measures of chronic care. The settings for these studies were primarily community or private health care. Identifying the types of organizations that fare better at improving outcomes for particular disease states continues to be a question for the literature. The not-for-profit and private sectors continue to embrace the CCM, and organizations like the ICIC continue to devote resources to its development and its ability to improve patient health outcomes. In 2001, the Institute of Medicine published what is now considered a seminal report in the field: Crossing the Quality Chasm: A New Health System for the 21st Century. In the report, the Institute of Medicine outlines six goals for the transformation of health care in the United States. The report specifically references the work of ICIC and calls upon lawmakers at the federal level to make chronic disease care quality improvement a priority issue.
Following suit, the National Committee on Quality Assurance and the Joint Commission, two nationally recognized not-for-profit entities that set standards for care in the United States, developed accreditation and certification programs for chronic disease management based on the CCM.

At the same time, both the Joint Commission and the National Committee on Quality Assurance have released additional accreditations in the patient-centered medical home approach. These new certifications continue those proposed by the CCM and advance the work of these pioneers. The Joint Commission's Primary Care Medical Home certification looks at organizations that provide primary care medical services and bases its certification on elements that enable coordination of care and increase patient self-management. This is a model of care based directly on the foundational work provided by the CCM. Additionally, the CCM currently serves as a foundation for new models of primary care advanced by the American College of Physicians and the American Academy of Family Practice. In 2003, the ICIC program administrators convened a small panel of chronic care expert advisors and updated the CCM to reflect advances in the field of chronic care from both the research literature and the experiences of health care systems that had implemented the model in their quality improvement activities. These programs were phased in during early June 2009. The asthma component sought to improve asthma care and asthma outcomes. The objectives of the diabetes component of the program differed from the asthma module in that the program did not focus on the reduction of diabetes-related deaths. Practice reviews did not identify diabetics as having an abnormally high mortality rate; however, improvements were sought in the numbers of hospitalizations and specialist treatment visits. While both chronic care conditions were intriguing areas of study for the program's implementation, this paper focuses on the diabetic portion of the implementation because the earlier phase of asthmatic treatment did not result in sufficient data to enable proper analysis.
During the preparatory stage of the Chronic Care Initiative (CCI), a not-for-profit consulting organization with correctional health care and learning collaborative experience was selected to assist the California Prison Health Care Services project team.

A statewide system assessment was conducted between January and April 2008. Given the small window of opportunity under the federal receivership to accomplish the turnaround plan of action's objectives, a very aggressive work plan and timeline was developed. To develop the work plan and identify potential problem areas, the team first established a list of limiting factors relevant to the operational environment. It was believed that in developing this list, the institutionalized nature of the organization and its key players could be catalogued. The factors could then be used to identify areas in which proactive focus and intervention would be required to enable successful change on the part of long-tenured civil servants. The long-tenured employees were not capable of seeing all the flaws in their own routinized behavior because they had known no other way. The theory under which the team operated was adopted from this and related research on organizational change. Fernandez and Rainey discuss managing change once the change plan has been implemented and tasks are underway. To be innovative, the CCI team sought ways to stay ahead of the change curve and thus looked to capture variables of interest related to places where proposed change could get stuck by administrators unable to see how their usual behaviors and actions prevented successful change management. As a result, the plan that was developed included tasks specific to the implementation of the chronic care model in the health care setting. The team, in its proactive approach to implementation, identified aspects of organizational behavior that were important to track on the management side and designed methods to track and trend this behavior. Once tracked and trended, these data were used to develop interventions to motivate managers' behavior in ways the team felt would enable the long-term success and sustainability of the changes at hand.
Further, the catalog of behaviors and environmental factors likely to have deleterious effects on the proposed changes was used to redevelop the private-sector chronic care model itself.

Revisions to the private-sector version of the chronic care model were necessary to fit the model to a custodial setting. With health care needs subordinate to those of security, the program architects found it necessary to modify and enhance elements of the model. The first and perhaps least profound change was to the name of the program—to “Chronic Disease Management Program”—to avoid the perception that the inmate population would receive levels of care higher than those enjoyed by the community at large; in fact, the program aimed to reduce the cost of care while maintaining the clinical efficacy of delivery and treatment. A solely political move, it nevertheless set the stage for the alterations required of the rest of the model. Following the discussions concerning the program's name, each of the model's standard elements was analyzed and repackaged to fit the correctional environment. Due to the lack of learning collaborative and quality improvement information in the correctional health care literature, an innovative two-phase approach to implementation was developed. Phase 1 focused on piloting the learning collaborative strategy, developing a modified diabetes change package for a correctional environment, and establishing the pilot sites to test the model. Phase 2 had the objectives of statewide implementation of the tested and approved approach from the pilot, while additionally moving on to the next chronic condition for the initial six pilot sites. After identifying the pilot sites, the initiative began with intensive, multidisciplinary work sessions. Subsequent work sessions used an enhanced learning collaborative strategy. Collaborative sessions were planned quarterly for the first year, with teams from different sites attending four two-day learning sessions separated by action periods. An intensive skills-based course on quality improvement was embedded into the learning sessions.
Additionally, virtual learning workshops were inserted between the learning sessions to enable each collaborative to build workforce competencies in quality improvement technical skills. At the end of the learning sessions, the pilot site teams folded into three regional learning collaboratives involving all 33 prisons to commence Phase 2 activities. The pilot-site champions served as presenters or mentors to the new sites during Phase 2, in a “train-the-trainer” approach. This approach required an initial round of training, after which those trained in the first round were deployed to train the rest of the staff. Figure 3 shows the culturally embedded barriers to implementing the CCM, as determined by the team. These obstacles are described in greater detail in the following section. They represent the aspects of the model which, due to their private-sector origins, would not fit into the custodial setting without modification. The re-adaptation of the model to fit the public sector, and more specifically the custodial environment within a public agency, was designed over several months, and its output was the subject of lively debate. The price of implementation failure would have been greater than the sum of the time and resources invested. Because many of the receiver-level clinical managers had been brought into the receivership organization as employees of a new entity, results were expected. Because those expectations were high, the preparation for program implementation was carefully planned. It was understood that the receiver's efforts were focused on remolding institutionalized patterns of action.

Initial efforts began with breaking down the six CCM elements into digestible tasks and deliverables within a project plan. A discussion then ensued concerning the parts of the CCM that would not fit into the existing organization due to cultural barriers. Part of the debate mentioned earlier included discussion among administrative staff with extensive CDCR experience, which provided insight into the barriers to a successful implementation in the custodial setting. A successful adoption of the CCM depends on visible support at all levels of the health care organization, starting with the senior managers. The federal receivership was established to provide the highest level of executive leadership support. The fiscal constraints of the state of California during the period when the program was implemented precluded the full adoption of the CCM in the prison health care system. Clinical management that would otherwise have been dedicated to the coordinated care team was reduced. To increase managers' visibility in relation to this program, attention was placed on coordination-of-care activities. This occurred at all levels, with headquarters-based administrative staff taking the lead in establishing the importance of the program by providing in-service trainings as well as on-site follow-up support. In support of the learning collaboratives, clinical administrators and supervisory staff were brought to headquarters facilities to participate in interactive sessions. It was felt that the overall change in organizational behavior would occur once staff worked in collaboration to define new processes. To create visible leaders, managers were given a role in shaping the CCM implementation in a manner that was personally meaningful to them and would thus empower them.
As the prison health care system is a single-payer, closed health care system, the potential to adopt evidence-based quality improvement strategies and practice guidelines is somewhat greater than in other health delivery settings. Because staff in a closed system are internal to the organization, the establishment of guidelines for these staff is an enabling factor for the full adoption of CCM policies, with accountability for adherence to the model and for results. The extent to which continuing, internal staff learn and buy into the new policies and procedures determines the extent to which the new methods can be sustained. In open health-delivery systems, clinical staff members are treated more as vendors than as internal staff. Because vendor relationships are managed differently than internal staff, adherence to internal policies and procedures is more difficult to achieve. Some prisons institutionalized the use of temporary staff due to the relative ease with which these labor resources can be procured. Though temporary personnel typically cost one and a half to two times as much as a full-time employee, given the remote location of some facilities, temporary staffing was preferred. This practice became institutionalized; as a supervisor declared during interviews, “it was just the thing to do, because who has time to recruit and interview when using [temporary staff] was what everyone did.” She went on to note that “we certainly planned our staffing needs and secured the positions but look where we are . . . doctors can go [to the institution literally next door] and earn almost 25 percent more.”