
This has important practical implications for agricultural design applications

The grape cases, which had high anisotropy in both the leaf inclination and azimuth distributions, did incur significant errors due to leaf anisotropy for the 1D model. If leaf azimuth is uniformly distributed, this effectively reduces the impact of anisotropy in leaf inclination on the projected area fraction G. Since a leaf with a certain elevation angle could be parallel to the sun at one azimuth and perpendicular to the sun at another, an integration over all azimuths can smear out the effects of leaf inclination alone. As in the virtual canopies of this study, field measurements have shown that leaf inclination distributions are usually highly anisotropic. The azimuthal distribution of leaves may be strongly anisotropic within a single plant, but for relatively dense canopies, the azimuthal distribution is often fairly isotropic. In these cases, the assumption of leaf isotropy is likely to result in minimal errors. However, sparse, row-oriented crops such as vineyards may have highly anisotropic azimuthal distributions, in which case it may be necessary to explicitly calculate G based on measurements. These types of canopies are becoming increasingly prevalent in agricultural applications, due in part to the improved access to mechanical harvesters that a trellised or hedgerow canopy provides.

Plant spacing and the resulting heterogeneity had the most pronounced effect on errors resulting from the use of Beer's law. For the Grape N-S case, the assumption of homogeneity resulted in an overestimation of the total daily absorbed radiation by 28%, 30%, and 36% on Julian days 153, 232, and 305, respectively, with larger instantaneous overestimation near midday. For the Grape E-W case, the assumption of homogeneity also resulted in overestimating the total daily absorbed radiation by 74%, 51%, and 5% on Julian days 153, 232, and 305, respectively.
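The azimuthal smearing described above can be sketched numerically. The function below is an illustrative reconstruction, not code from the study: it averages the projected area fraction of a leaf of fixed inclination over a uniform (isotropic) azimuth distribution.

```python
import math

def projected_area_fraction(sun_zenith, leaf_zenith, n_azimuth=720):
    """Average |cos| of the angle between the sun direction and the leaf
    normal, assuming leaf azimuths are uniformly distributed (isotropic
    in azimuth). Angles are in radians; the sun is placed in the x-z plane."""
    sun = (math.sin(sun_zenith), 0.0, math.cos(sun_zenith))
    total = 0.0
    for i in range(n_azimuth):
        phi = 2.0 * math.pi * i / n_azimuth  # leaf azimuth sample
        normal = (math.sin(leaf_zenith) * math.cos(phi),
                  math.sin(leaf_zenith) * math.sin(phi),
                  math.cos(leaf_zenith))
        total += abs(sum(s * n for s, n in zip(sun, normal)))
    return total / n_azimuth
```

For an overhead sun the average reduces to |cos(leaf inclination)| regardless of azimuth, while for oblique sun angles leaves of a fixed inclination contribute a mix of near-parallel and near-perpendicular orientations, which is the smearing effect noted above.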

The heterogeneity effect was not simply related to L, as illustrated by the two potato cases. Simply rearranging the potato plants from a uniformly spaced configuration into a row-oriented one substantially increased errors in the 1D model. The effect of horizontal heterogeneity can also vary in the vertical direction, as appeared to be the case with the Corn canopy. This significantly altered the performance of the 1D model at any given height, although the canopy was dense enough overall that the 1D model performed well when predicting whole-canopy radiation absorption. This could have important implications if the radiation model is coupled with other biophysical models such as a photosynthesis model. The response of photosynthesis to light is nonlinear and asymptotic, so even when whole-canopy absorption is well represented by a 1D horizontally homogeneous model, the nonlinearity of the light response could still produce significant errors in total photosynthetic production. A limitation of this study is that the results are only applicable under clear-sky conditions. However, they can provide some insight regarding diffuse-sky conditions when all canopy geometries and simulated sun angles are considered together. Under a uniformly overcast sky, equal energy originates from all directions, and a particular combination of sun angle and leaf orientation bias was required in order to observe a pronounced effect of leaf anisotropy. Thus, under diffuse solar conditions, the impact of leaf anisotropy is expected to be reduced. Sun angle had an important effect on the instantaneous impact of heterogeneity, and low sun angles most commonly resulted in a decreased impact of heterogeneity.
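The consequence of the nonlinear light response for coupled models can be illustrated with a toy calculation. The light-response function and its parameter values below are hypothetical, not taken from the study; the point is only that, for a saturating response, photosynthesis computed from a mean light level differs from the mean of photosynthesis over a heterogeneous light field (Jensen's inequality).

```python
def photosynthesis(par, a_max=20.0, alpha=0.05):
    """Illustrative rectangular-hyperbola light response: assimilation
    rises roughly linearly at low PAR and saturates toward a_max at high
    PAR. Parameter values are assumptions for demonstration only."""
    return a_max * alpha * par / (a_max + alpha * par)

# A heterogeneous canopy: half the leaf area sunlit, half deeply shaded.
sunlit, shaded = 1500.0, 100.0           # PAR, umol m^-2 s^-1 (assumed)
mean_par = 0.5 * (sunlit + shaded)

p_of_mean = photosynthesis(mean_par)                                  # homogeneous estimate
mean_of_p = 0.5 * (photosynthesis(sunlit) + photosynthesis(shaded))   # leaf-resolved average
# Because the response is concave, p_of_mean exceeds mean_of_p, so a model
# that represents total absorption well can still overestimate photosynthesis.
```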
Because low sun angles tended to reduce the impact of heterogeneity, it is likely that highly diffuse conditions will also reduce the impact of heterogeneity near midday, since a significant fraction of incoming radiation will originate from directions nearer the horizon.

Estimating light interception with Beer's law is based on the assumption that canopies are homogeneous.

The homogeneity assumption inherently means that the rate of radiation attenuation along a given path is linearly related to the flux at that location. As the canopy becomes sparse, there are pathways that allow radiation to penetrate the entire canopy without any probability of interception, which fundamentally violates the assumptions behind Beer's law for a turbid medium. Therefore, non-random leaf dispersion in canopies limits the ability of Beer's law to link light interception to simple bulk measures of plant architecture. It is well known that this heterogeneity or "clumping" of vegetation usually results in decreased radiation interception compared with an equivalent homogeneous canopy. A common means of dealing with this problem without significantly increasing model complexity is to add a "clumping coefficient" W to the argument of the exponential function in Beer's law. While this is a simple and practical means of reducing the amount of radiation attenuation predicted by Beer's law, the challenge in applying the clumping coefficient approach is that W is a complex function of nearly every applicable variable, and thus it is difficult to specify mechanistically. Another approach is to use a model that explicitly resolves plant-level heterogeneity; it may not be necessary to resolve every individual leaf if within-plant heterogeneity is small.

Row orientation played an important role when estimating light interception from Beer's law, particularly when the rows were widely spaced. For sparse, row-oriented canopies, the effective path length of the sun's rays through vegetation can change dramatically with changes in sun azimuth. For East-West rows, absorption is significantly reduced early and late in the day because the rows are close to parallel with the sun's rays, whereas North-South rows are perpendicular to the sun at this time.
As the day of year progresses further from the summer solstice, the sun spends more time closer to the horizon, and thus the impact of heterogeneity in an East-West row orientation increases. For the East-West row configuration, G and light interception were surprisingly constant throughout much of the day, which resulted in 41% and 36% less absorption on Julian days 153 and 232, respectively, compared to North-South rows.
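The clumping-coefficient form of Beer's law can be sketched as follows. This is a generic illustration with assumed parameter values, not the study's implementation.

```python
import math

def intercepted_fraction(lai, g, sun_zenith, clumping=1.0):
    """Fraction of direct-beam radiation intercepted under Beer's law:
    1 - exp(-clumping * g * LAI / cos(sun_zenith)).
    clumping (the coefficient W in the text) is 1 for a homogeneous
    canopy and < 1 for clumped vegetation, which intercepts less light."""
    optical_depth = clumping * g * lai / math.cos(sun_zenith)
    return 1.0 - math.exp(-optical_depth)

# Example with assumed values: LAI = 3, G = 0.5, sun 30 degrees off zenith.
homogeneous = intercepted_fraction(3.0, 0.5, math.radians(30.0))
clumped = intercepted_fraction(3.0, 0.5, math.radians(30.0), clumping=0.7)
```

With W < 1, the clumped canopy intercepts less radiation than the homogeneous equivalent, which is the direction of the error Beer's law makes for sparse, row-oriented plantings.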

In some climates, it may be desirable to maximize sunlight interception, whereas in others it may be desirable to mitigate the effects of excess sunlight to reduce temperatures and water use.

Despite the simplified treatment of scattering in Beer's law, there was good agreement between radiation interception predicted by the 1D and 3D models in the PAR band. Scattering did not significantly influence light interception in this band because most of the incident radiation received by individual leaves was absorbed. However, in the NIR band, where leaves are poor absorbers, scattering introduced significant overestimation of absorption by the standard 1D model. An ad hoc correction accounting for reflection alone reduced this overestimation. An additional correction accounting for both reflection and transmission overcorrected, producing a net underprediction of total radiation absorption.

The objective of this work was to evaluate common assumptions used in estimating radiation absorption in plant canopies, namely the assumptions of homogeneity and isotropy of vegetation. Our results demonstrated that for relatively dense canopies with azimuthally symmetric leaves, a 1D model that assumes homogeneity and isotropy of vegetation generally produced relatively small errors. As plant spacing became large, the assumption of homogeneity broke down and model errors became large. In the case of a vineyard with rows oriented in the East-West direction, errors in daily intercepted radiation were up to 70% due to heterogeneity alone, with much larger instantaneous errors occurring during the day. If leaves were highly anisotropic in the azimuthal direction, the additional errors resulting from the assumption of vegetation isotropy had the potential to push total errors above 100%.
Day of year also affected model errors: overall errors tended to decline with time from the summer solstice. For canopies in which the plant spacing approaches the plant height, it is likely necessary to use a plant-resolving radiation model to avoid substantial overprediction of absorbed radiative fluxes. Additionally, if vegetation is highly anisotropic in both its elevation and azimuthal angle distributions, it is also likely necessary to explicitly calculate the projected area fraction G based on measurements and the instantaneous position of the sun.

Recent shifts in climatic patterns have influenced the frequency, timing, and severity of heat waves in many wine grape growing regions, which has introduced challenges for viticulturists. Growing the same varieties under these altered climatic conditions often requires mitigation strategies, but quantitative, generalized understanding of the impacts of such strategies can be difficult or time-consuming to obtain through field trials. This work developed and validated a detailed three-dimensional model of grape berry temperature that can fully resolve spatial and temporal heterogeneity in berry temperature, and ultimately predict the impacts of potential high-berry-temperature mitigation strategies such as the use of alternative trellis systems.

A novel experimental data set was generated in which the temperature of exposed grape berry clusters was measured with thermocouples at four field sites with different trellis systems, topography, and climate. Experimental measurements indicated that the temperature of shaded berries closely followed the ambient air temperature, but intermittent periods of direct solar radiation could generate berry temperatures in excess of 10 °C above ambient. Validation results indicated that by accurately representing the 3D vine structure, the model was able to closely replicate rapid spatial and temporal fluctuations in berry temperature. Including berry heat storage in the model reduced errors by dampening extreme temporal swings in berry temperature.

Increasing temperatures and temperature variability associated with a changing climate have become a major concern for grape producers due to the sensitivity of grape quality to climate, particularly in wine grape production. Short-term temperature extremes associated with heat waves, along with longer-term shifts in seasonal temperature patterns, are known to create significant challenges in managing grape quality. Diurnal fluctuations in solar irradiance and air temperature have been shown to affect amino acid and phenylpropanoid berry metabolism at hourly time scales. Elevated temperatures during daily or weekly time periods have been shown to decrease anthocyanin concentration around veraison. Furthermore, the duration of elevated temperatures affects not only berry composition but also berry skin appearance. Exposed berries can be damaged by sunburn, and even a few minutes of high temperature exposure can result in cellular damage. Moderate temperatures can also result in berry injury or death after long-term exposure.
Grape producers have begun to implement a number of canopy design and management strategies in an attempt to mitigate the negative effects of elevated berry temperatures, including the use of shade cloth, trellis design, and cluster height. However, grape berry microclimate is complex and highly heterogeneous due to interactions between the vine architecture and the environment, making it difficult to understand and predict the integrated effects of mitigation efforts. Experimental field trials are complicated by the fact that measurement of light and temperature at the berry level is labor-intensive and expensive. Furthermore, the relatively slow development of grapevine systems means that field trials are costly and may require many years of data collection. Because it is not feasible to independently vary every parameter that determines berry temperature in field experiments, crop models provide a means for understanding, and ultimately optimizing, how grapevine design and management practices can be used to mitigate elevated berry temperatures. Previous process-based models have been developed to predict berry radiative fluxes and berry temperatures from environmental parameters. However, in these models the calculation of absorbed radiation and the parameters representing specific geometrical canopy structure are often simplified. Therefore, the models cannot account for the vertical and horizontal variability within the cluster or canopy, making it difficult to represent different design or management choices such as altered trellis designs or pruning practices. Previous work has developed models for individual grape and apple fruits, and Saudreau et al. successfully developed a 3D model of apple fruit temperature. However, to the authors' knowledge, previously developed 3D grapevine structural models have yet to be coupled with a physically-based berry temperature model.
This work develops and tests a new 3D model for grape berry temperature based on the Helios modeling framework. The berry temperature model was validated using a unique data set that spans four different canopy geometries.
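The role of heat storage in damping berry temperature swings can be seen in a lumped energy balance. The sketch below is not the Helios model; the geometry, transfer coefficients, and material properties are illustrative assumptions only.

```python
import math

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def step_berry_temp(t_berry, t_air, q_sw, dt,
                    radius=0.007, h=20.0, emissivity=0.95,
                    density=1000.0, c_p=4000.0):
    """One explicit-Euler step of a lumped berry energy balance:
    m*c*dT/dt = shortwave gain - convection - net longwave.
    All parameter values are assumptions for illustration only."""
    area = 4.0 * math.pi * radius ** 2
    volume = (4.0 / 3.0) * math.pi * radius ** 3
    heat_capacity = density * c_p * volume  # J/K: the heat-storage term
    net_flux = (q_sw * area / 4.0                       # absorbed shortwave on projected area
                - h * area * (t_berry - t_air)           # convective exchange with air
                - emissivity * SIGMA * area
                  * (t_berry ** 4 - t_air ** 4))         # longwave exchange (sky ~ air here)
    return t_berry + dt * net_flux / heat_capacity

# Sunlit berry: march toward steady state starting at air temperature.
t = t_air = 303.15  # K (30 C)
for _ in range(3600):
    t = step_berry_temp(t, t_air, q_sw=600.0, dt=1.0)
excess = t - t_air  # several degrees above ambient with these assumed values
```

The storage term (heat_capacity) sets the thermal time constant: a larger capacity responds more slowly to intermittent sunflecks, which is why including it dampens the extreme temporal swings noted above.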

Compliance and potential adverse symptoms were monitored by daily self-reported logs

Postmenopausal women aged 50 to 70 years with a BMI of 25-40 kg/m2 were enrolled. Postmenopausal status was defined as a lack of menses for at least two years, or for at least six months with a follicle-stimulating hormone (FSH) level of 23-116.3 mIU/mL. Other inclusion criteria were an overall body weight of at least 100 pounds and agreement to comply with all study procedures. Exclusion criteria included BMI greater than 40 kg/m2, blood pressure greater than or equal to 140/90 mm Hg, abnormal values from a lipid panel, complete blood count (CBC), or comprehensive metabolic panel (CMP), use of prescription medications other than thyroid medication, daily use of anti-coagulation agents such as aspirin and non-steroidal anti-inflammatory drugs, or use of dietary supplements other than a general multivitamin/mineral formula providing up to 100% of the recommended dietary allowances. Additional exclusion criteria were vegetable consumption of 3 cups/day or more, fruit consumption of 2 cups/day or more, fatty fish intake 3 or more times/week, dark chocolate intake of 3 oz/day or more, coffee and/or tea intake of 3 cups/day or more, or alcohol intake greater than 3 drinks/week. Women were also excluded if they followed a non-traditional diet, engaged in routine high-intensity exercise, or self-reported diabetes, renal or liver disease, malabsorption or gastrointestinal diseases, cancer within the last five years, or heart disease, including cardiovascular events or stroke. After determining initial eligibility through telephone screening, participants were further screened at the laboratory in the morning after an overnight fast. After informed consent was obtained, anthropometric measurements were taken, including body weight, height, and waist circumference.

Blood pressure and resting heart rate were measured three times, five minutes apart, after 15 minutes of sitting quietly. Volunteers also completed a diet and health habits questionnaire. A fasting blood sample was collected for a CMP, a CBC, and a lipid panel. If participants reported menses occurring within two years prior to the telephone screening, FSH was measured. Volunteers were excluded if their low-density lipoprotein (LDL) value was greater than or equal to 190 mg/dL; if they had zero to one major cardiovascular risk factors apart from high LDL cholesterol and an LDL greater than or equal to 160 mg/dL; if they had two major cardiovascular risk factors apart from elevated LDL cholesterol and an LDL greater than or equal to 130 mg/dL; or if they had two major cardiovascular risk factors apart from high LDL cholesterol and a Framingham 10-year risk score of 10 to 20%. Study I was a single-arm, four-week trial. Baseline values were collected at study visit 1 (SV1), which initiated a two-week run-in period during which no mangos were consumed. At SV1, baseline anthropometry, blood pressure, PAT, and blood samples were collected, and measurements were taken again two hours later. At the end of the two weeks, study visit 2 (SV2) began with baseline measures, followed by ingestion of 330 g of pre-packaged, fresh, frozen Ataulfo mangos, and data were collected two hours later. Participants then returned home with a 14-day supply of pre-packaged mangos and were instructed to consume 330 g of mangos daily, with 165 g eaten before noon and the other half consumed in the evening. Two weeks later, study visit 3 (SV3) followed the same protocol as SV2. Water was allowed ad libitum during all study visits. Prior to each study visit, participants were instructed to refrain from strenuous exercise for 24 hours before arriving at the laboratory to reduce the potential impact on PAT measurements.

Two 3-day food records were collected, once between SV1 and SV2, and again between SV2 and SV3. The records were analyzed using the Food Processor software. Study II was based on the findings from study I; this single-arm trial design is shown in Figure 2. After an overnight fast, at SV1, anthropometry, blood pressure, heart rate, and blood samples were collected at baseline and at one-hour and two-hour time points. At least 48 hours later, at SV2, baseline measures were taken, followed by ingestion of 330 g of pre-packaged, fresh, frozen Ataulfo mangos, and data were collected 1h and 2h after intake. After at least two days, at SV3, baseline measures were taken, followed by ingestion of 113 g of white bread, which contained calories and carbohydrates similar to those in 330 g of mangos, and data were collected 1h and 2h after ingestion. The inclusion and exclusion criteria were the same for studies I and II. Participants were instructed to refrain from consuming additional mangos before SV1 and throughout their enrollment. Procedures were performed at the same time of day to minimize circadian effects. The screening and interventions were conducted at the UC Davis Ragle Human Nutrition Research Center. The UC Davis Institutional Review Board approved the protocol, and the study was registered at ClinicalTrials.gov. Microvascular function was assessed by PAT. After resting in a supine position for 30 minutes, a non-invasive, sterile finger probe was fitted to each middle finger. A manual blood pressure cuff was placed on the distal forearm of the non-dominant arm. A baseline reading of peripheral arterial tone was recorded, and then the blood pressure cuff was inflated to a supra-systolic level approximately 60 mmHg above systolic blood pressure to induce occlusion of blood flow for five minutes. Then, the pressure was released, resulting in reactive hyperemia.
Two consecutive blood pressure measures were taken immediately before and after the PAT assessment.

The PAT software then automatically calculated the reactive hyperemia index (RHI), Framingham reactive hyperemia index (fRHI), augmentation index (AI), and AI adjusted to 75 beats per minute (AI75). Whole blood was collected and left to rest at room temperature for 15 minutes before centrifugation at 200 x g for 10 minutes. Half of the supernatant was then aliquoted for use as platelet-rich plasma (PRP), and the remainder was further centrifuged at 1500 x g for 15 minutes to provide platelet-poor plasma (PPP). An average platelet count of the PRP was measured with a hemocytometer. Depending on the platelet number in the sample, a specific ratio of PRP and PPP was combined to create a test sample with a final count of 250,000 platelets per µL. The combined plasma was then held at room temperature for 20 minutes, after which platelet aggregation was assessed. After calibration using sterile water, 500 µL of the prepared combined plasma was placed into glass cuvettes and incubated at 37 ºC for three minutes. Collagen was then added to the PRP to induce aggregation, while the PPP was left untouched and served as a control. The collagen was added in separate cuvettes at either 1 or 3 µg collagen per 1 mL of PRP. The aggregation response was characterized by its amplitude, slope, lag time, and area under the curve. Microvascular function, quantified by the RHI, was the primary outcome for study I. The sample size was determined based on a previous study from our laboratory assessing the effects of walnuts on vascular function.23 Microvascular function values were assumed to have a standard deviation of 0.5. Therefore, a sample size of 20 was needed to detect significant differences in RHI with 80% power at a 5% level of significance. Data were checked for normality and homogeneity of variance using the Shapiro-Wilk or Brown-Forsythe tests. The two-week differences in microvascular function, anthropometric and biochemical measures, and nutrient intake were analyzed using paired t-tests.
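The stated power calculation can be reproduced approximately with the standard normal-approximation formula for a paired comparison. This is a reconstruction, not the authors' code, and the detectable difference of roughly 0.32 RHI units is back-calculated from the reported n = 20, SD = 0.5, 80% power, and 5% significance level.

```python
import math
from statistics import NormalDist

def paired_sample_size(delta, sd, alpha=0.05, power=0.80):
    """Normal-approximation sample size for a paired comparison:
    n = ((z_{1-alpha/2} + z_{power}) * sd / delta) ** 2, rounded up."""
    z_alpha = NormalDist().inv_cdf(1.0 - alpha / 2.0)  # ~1.96
    z_power = NormalDist().inv_cdf(power)              # ~0.84
    return math.ceil(((z_alpha + z_power) * sd / delta) ** 2)
```

With SD = 0.5 and a hypothesized detectable difference of 0.32 RHI units, this formula returns n = 20, consistent with the reported sample size.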
The 2h change values for microvascular function, BP, platelet aggregation, and blood glucose were analyzed by one-way repeated-measures analysis of variance (RM ANOVA) using treatment as the main factor and participant ID as the random effect. For study II, the acute changes from baseline in BP, blood glucose, and insulin were analyzed by two-way RM ANOVA using time and treatment as the main factors and participant ID as the random effect. For main effects, Tukey's tests were used for post-hoc analysis, with Student's t-tests used to determine significance within group pairs. A p < 0.05 was considered statistically significant. Statistical analyses were performed with JMP version 16.

During the two-week mango intake period, the expected increases in estimated intakes of soluble fiber, total sugar, monosaccharides, disaccharides, β-carotene, vitamin C, vitamin E, and folate were observed compared with reported intakes during the run-in, no-mango period. Despite these increases in carbohydrates during the mango feeding period, fasting glucose and plasma lipid levels, body weight, and waist circumference did not change. Some animal and human studies suggest that mango intake may benefit blood glucose control. Blood glucose levels after an oral glucose tolerance test were significantly decreased in obese Wistar rats fed a high-fat diet and supplemented with 35 mL of mango juice with or without peel extract for seven days, compared to controls.

Another study reported a significant decrease in fasting blood glucose in diabetic, but not normal, male Wistar rats 30 days after consuming a diet mixed with dried Tommy Atkins mango powder at 5% of diet weight. The RHI, fRHI, AI, AI75, and platelet aggregation did not differ two weeks after daily mango intake, which may have been due to the relatively short intervention period. In a randomized, double-masked, placebo-controlled, four-week trial among healthy individuals aged 40-70 years with a BMI of 19-30 kg/m2, the RHI as measured by PAT was significantly increased after daily intake of 100 mg of unripe mango fruit powder made from the Kili-Mooku cultivar, compared to baseline levels. When the same powder was fed at 300 mg per day, the RHI was significantly increased, but only among individuals with compromised endothelial function. Another study reported that daily intake of 400 g of fresh frozen Ataulfo mango pulp for six weeks significantly decreased SBP in lean individuals aged 18-65 with BMI 18-26.2 kg/m2, and significantly decreased hemoglobin A1C, plasminogen activator inhibitor-1, interleukin-8, and monocyte chemoattractant protein-1 in participants with BMI > 28.9 kg/m2. While intriguing, these results need to be interpreted cautiously, since the BMI cutoffs were not the standard values used to define healthy, overweight, and obese categories. The SBP was significantly reduced in the first two hours after the first mango intake in study I, compared to baseline or run-in values. In contrast, the SBP was unchanged in study II at one and two hours after mango intake. The discrepancy between studies I and II may be due to the low number of participants in study II. However, the PP was significantly reduced 2h after mango intake in both studies I and II. Importantly, the PP also changed after white bread intake, suggesting that the response might be a postprandial effect.
In study II, although the postprandial changes in SBP and DBP were not significantly different between the mango and white bread groups, the HR changes 1h and 2h after white bread intake were significantly increased compared to no mango intake. This finding is consistent with a report that both supine and standing HRs were significantly increased 1h and 3h after a 790 kcal meal eaten in the morning after an overnight fast. However, the calorie content of the mango and white bread in the present study was only 298 kcal. Studies on fruit consumption and postprandial BP and HR are scarce. Future research is encouraged to investigate whether fruit intake induces hemodynamic responses similar to those of meals. In study I, the 2h change in blood glucose was not different between mango and no mango intake, despite the difference in sugar intake from the fruit. This observation was reinforced in study II, where blood glucose was significantly increased 1h after white bread intake but not after eating an isocalorically-matched amount of mango. The insulin level was also significantly increased 1h after white bread intake compared to 1h after no mango or mango intake. In addition, although the 2h change in blood glucose after eating white bread returned to a level similar to baseline values, the 2h change in insulin was still significantly elevated compared to the 2h value seen in the no mango group. These data are consistent with other reports regarding mango consumption and glucose regulation. For example, in obesity-prone mice fed a high-fat diet, fasting blood glucose, insulin, and the homeostatic model assessment for insulin resistance score were significantly decreased after 10 weeks of mango fruit powder intake at each of three levels.

Treating wastewater for subsequent reuse is another source of water for California's farms

Factors such as the degree of diversification in a region's economy, prosperity in the region, and the size, number, and conditions of the transfers influence the magnitude of the regional impacts. Concerns over third-party effects were instrumental in IID's decision to put conditions on its water transfers under the QSA: the conditions limited the extent of land fallowing and required that water transfers eventually be sourced from on-farm conservation. Strategies to address such third-party effects likely involve a variety of approaches, including social programs and support for land repurposing. Land repurposing as a response to the likely reductions in irrigated cropland is gaining significant attention in California. Developing solar energy, restoring desert and upland habitat or riparian and wetland areas, expanding water-limited crops, and developing water-efficient urban development in formerly irrigated areas are all possible repurposing options. In addition, conservation incentive programs could help mitigate the impacts of fallowing on ecosystems and people, and redistribution of irrigation water onto fewer irrigated acres should consider the ecosystem services of alternative uses to maintain multifunctional landscapes in a changing climate.

Options for augmenting water supplies by importing water from other regions, or by further tapping local surface or groundwater supplies, are limited at best. Yet supply augmentation options do exist, albeit likely at a higher cost. A portfolio of options needs to be considered, including better capture and use of flood water, maintaining healthy soils, and more effective monitoring, surveillance, and response to extreme weather events.

Groundwater recharge, water recycling and reuse, and desalination provide opportunities to enhance supply. Increasing the operational efficiency of surface or groundwater storage and transport can also increase water availability. Last, water trading can help reallocate water supplies to reduce the costs of both temporary and long-term shortfalls.

Groundwater recharge. Managed aquifer recharge (MAR) is the intentional recharge of water to aquifers for subsequent recovery or environmental benefit. MAR practices have been used in California in the operation of water banks (aquifers used for underground storage) and to avoid saltwater intrusion in coastal aquifers. There is now renewed interest in developing MAR efforts to capture flood flows, especially given their low financial and environmental cost compared to other alternatives. The California Department of Water Resources found that an annual average of almost 2,000 hm3 is available for recharge using current infrastructure without interfering with environmental regulations. Adding new infrastructure could increase recharge opportunities in nearly all California regions over time, particularly in the Sacramento Valley, where significant opportunities exist. The flows that supply recharge are often available in large volumes for short periods and thus present regulatory and infrastructure challenges. Current storage and conveyance infrastructure, as well as operational and regulatory practices, need to be expanded and improved to make full use of this supply augmentation option. Although most recharge volumes in California have gone to dedicated basins, there is also much interest in on-farm recharge. By recharging water directly on farms, current irrigation infrastructure could be used, thus reducing costs. Institutional challenges include a lack of incentives for farms to accept flows, because the individual farm benefits may be small relative to the public benefits.

Additionally, some crops are likely better suited to on-farm recharge than others; for example, crops that are dormant in winter, such as almonds and vines, may not be negatively impacted by the practice. Additional research is needed to better understand the effects of on-farm recharge on crop yields, water quality, and soil health, among other factors.

Wastewater recycling. The California State Water Resources Control Board estimates that 900 hm3 of wastewater was recycled in California in 2020, with 250 hm3 used for agriculture. In 2020, the state published its California Water Resilience Portfolio, which aims to recycle and reuse 3,100 hm3 over the next decade. Most of the wastewater in the Central Valley is already being used, with further treatment, by downstream users or the environment. Therefore, the most promising locations for wastewater reuse and recycling are in coastal California, where much of the wastewater is not being reused. Furthermore, while wastewater quality varies significantly across sources, with more highly polluted water requiring more costly treatment, some of those costs might be avoided for some farm uses.

Desalination. Salty water can be treated to make it suitable for urban or agricultural use. In California and other western states, desalination has mostly been used to remove salts from brackish water. The lower constituent concentrations in brackish water make the process less costly than ocean desalination and, thus, more feasible for farm use. Currently, 14 seawater desalination plants are spread across California producing 110 hm3, with another 23 brackish groundwater desalination plants producing 173 hm3. There are plans to desalinate another 35 hm3 of seawater by 2030 and 104 hm3 of brackish water by 2040. These quantities contribute a small fraction of the overall water supply in California.
Also, the infrastructure and energy costs of seawater desalination remain high, particularly for agriculture, even without considering the likewise costly mitigation of negative environmental effects. Some have identified inland non-seawater desalination as a lower-cost alternative, yet brine disposal costs at the scale of operation needed for irrigation may remain a challenge.
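As a quick arithmetic check, the desalination capacities quoted above can be tallied. This sketch simply sums the reported figures; the variable names are mine:

```python
# California desalination capacity from the figures above (hm^3 per year).
# These are the article's reported values; the totals are simple arithmetic.
current = {"seawater": 110, "brackish": 173}   # 14 and 23 plants, respectively
planned = {"seawater": 35, "brackish": 104}    # by 2030 and 2040, respectively

current_total = sum(current.values())                 # 283 hm^3 today
future_total = current_total + sum(planned.values())  # 422 hm^3 if plans are realized

print(current_total, future_total)
```

Even the fully built-out total of roughly 422 hm3 is small next to the roughly 2,000 hm3 per year of recharge potential cited earlier, which is consistent with the article's point that desalination contributes only a small fraction of overall supply.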

Seawater desalination is mostly used in urban areas of Southern California and the Central Coast, where alternatives are even more expensive. Water trading. California has a small active water market where buyers and sellers trade water. These trades, ranging from 2 to 5% of all water used by cities and farms, reduce the economic costs of shortfalls during droughts and accommodate geographic shifts in water demand, enhancing flexibility in water management. Studies have found that trading could bring significant benefits to agriculture, the environment, and urban users in California. The benefits of an expanded water market grow as water scarcity intensifies, which is likely given the transition to sustainable groundwater use and the reduction in water availability driven by climate change. But a combination of aging infrastructure and complex, conflicting regulatory structures, including volume limits, hinders the expansion of trading. Improving market design, addressing impacts on third parties, securing stakeholder buy-in, and reducing transaction costs are needed to improve California's water market. Of course, increasing water demand by cities may further drive water from agriculture to cities through water trading agreements. The Mix of Supply- and Demand-Side Options. The combination of supply- and demand-side options will shape the evolution of California's agriculture. With the expected declines in water availability, expanding supplies could mitigate the reduction of California's agricultural output. But economic pressures constrain supply expansion, as most supply options are too expensive for crop irrigation, which is profitable only if the revenues of the expansion outweigh the opportunity costs. Water trading should incentivize supply expansion, as trading allows water to move to higher-profit cropping locations. Federal and state investments can also propel supply expansion.
An economic assessment of supply- and demand-side options in the SJV found that around 500 hm3 of supply expansion might be efficiency enhancing, i.e., the willingness to pay for supplies is greater than the costs. Since 500 hm3 represents only a quarter of the expected decline in water availability, demand reduction will comprise most of the adaptation. Other regions will have different constraints and options. In the Sacramento Valley there will be smaller declines in water availability and more options for groundwater recharge, resulting in less demand reduction. In the Central Coast, high-value crops are more likely to pay for expensive supply options, but even there some demand reductions are likely. In the South Coast, growth of urban demands and the reductions in Colorado water allocations will likely be met by reduced irrigated acreage, although supply expansion partnerships between local farms and urban interests might be feasible. Cropping System Design. For better performance, water stewardship must be accompanied by cropping system adaptations to climate change that reduce water use while regenerating natural resources, maintaining food production, and allowing farms and ranches to build resilience mechanisms. Adapting crop management practices is a main entry point for adaptation, through changes in crop location, planting schedules, genotypes, and irrigation. The large range of crops grown in California allows for crop switching based on vulnerability assessments and ecosystem service provision. Management complexities, response to market demand, and downstream infrastructure often make such system adjustments difficult to implement and coordinate at the watershed scale to improve water use and conservation measures. Reallocation of water resources to perennial crops has increased in recent decades, with drought-year fallowing of annual cropland.

More comprehensive system-based solutions would create incentives to keep soil covered to provide cobenefits for long-term sustainability with low potential tradeoffs for water use. With climate change, perennial crops are increasingly exposed to year-long stressors that increase their need for irrigation and present growers with fewer options for adapting to annual variability, such as relocation and replacement of tree species/cultivars. Careful implementation of low-volume irrigation systems is crucial to avoid negative implications for groundwater recharge. Moreover, while subsurface drip irrigation enhances field- and plant-scale water use efficiency compared to flood irrigation, drip systems can degrade soil health properties important for water infiltration and runoff control, salinity mitigation, and carbon sequestration within the soil profile. While efficiency and technology replacements have a role to play in optimizing water use, they seldom address the ecological, economic, and social drivers of vulnerability. Effective adaptation measures must therefore be system based and consider the complex socioecological interactions at play to ensure climate-smart outcomes. There is growing evidence that ecosystem-based adaptation options such as cropping system diversification can support adaptation while storing carbon, supporting biodiversity, and securing ecosystem services. This is especially relevant for California's organic crop production and horticultural systems, which tend to be more reliant on ecosystem services for pollination and biocontrol than field crops. Managing for diversity and flexibility rather than simplification and consolidation enhances adaptive capacity by improving responsiveness to climate changes, lowering vulnerability, and allowing portfolio effects to mitigate the impact of disturbances.
Diversification using intercropping, longer crop rotations, or integrated crop-livestock designs has been shown to support water regulation and buffering of temperature extremes, as well as other ecosystem benefits, which can in turn mediate yield stability and reduce the risk of crop loss. Improvements in soil health associated with organic carbon inputs, soil cover, and diversification can mediate groundwater recharge and water and nutrient retention to mitigate yield loss under drought. However, tradeoffs and benefits of ecosystem-based approaches for adaptation and mitigation are context specific, and rigorous assessments of adaptive gains and water footprints are needed. As water scarcity and associated changes in crops and landscape structures unfold, developing approaches that exploit the interconnectedness of diversity at the field, operation, landscape, and food system scales with healthy ecosystems and communities will be critical for sustainable and equitable transitions. Responding to climate change and the accompanying challenges facing agriculture in California is most effectively accomplished with inclusive and innovative approaches involving farm and rural stakeholders and policymakers using information and tools from researchers and advisors. With effective adjustments in response to climate and related water supply and demand concerns, California agriculture can become more economically, socially, and environmentally sustainable in the future. Water is central to that future. Government water management and planning in California has long been institutionally and geographically decentralized. Many local irrigation districts and SGMA groundwater sustainability agencies develop, implement, and maintain plans to weather recurrent droughts and floods. Agencies attempt to facilitate system-wide flexibility in water allocation, which can improve resilience in the case of climate extremes.
There is also a role for agencies to improve coordination among stakeholders and facilitate flexibility, allowing water to flow where it contributes most to economic, environmental, and social goals. Unfortunately, these broad benefits often are not within the mandate of local agencies. Furthermore, devolving water management to local agencies rather than to watershed-level governance creates natural conflicts: one agency's goals or actions may generate conflict and externalities with a nearby agency, given that water often extends beyond any single agency's political boundaries.

The authors implanted bare Nitinol stents in non-flexing femoral arteries and flexing femoropopliteal arteries

The study of the microscopic structure of tissues is called histology. In histology, advanced imaging techniques, such as electron microscopy or light microscopy, are used to analyze and identify the tissue and the structures present. Samples can be specially processed and prepared for visualization of the structure and the disease. In Figure 1.7, histology of the arterial vessel wall shows the intima, the media, and the adventitia, along with the IEL and smooth muscle cells. Generally, there are different techniques for processing tissue for histology. Tissue processing methods include plastic and paraffin histology, along with different staining techniques, such as hematoxylin and eosin (H&E), Movat's Pentachrome, and Elastin Trichrome staining, to analyze and visualize the tissue structure. In each process, the tissue is fixed and then dehydrated using alcohol. The tissue is then embedded in either plastic or paraffin resin. Each sample is then sectioned using a grinding method or a cutting method with a sharp blade. In plastic histology, each section can be cut at 19 to 90 microns; in paraffin histology, the tissue, embedded in paraffin (which is similar in density to tissue), can be sectioned at anywhere from 3 to 10 microns. Analysis can be performed after staining under a microscope. Usually, H&E staining can be used to examine cellular type and quantity and fibrin deposition, while Elastin Trichrome staining can be used to observe any type of injury in the lumen of a vessel, the media, or the IEL, EEL, and other structures. At the beginning of this chapter, the limitations of current stents for use in rapidly growing children were discussed, along with the great interest among the pediatric community in stents that can grow with an artery or be resorbed.

Coarctation of the aorta (CoA) is a congenital disease in children that can potentially benefit from self-growing stents. Stenting was discussed as a superior solution to balloon angioplasty and surgery for fixation of CoA. The properties of Nitinol, an alloy that can be used for self-expanding stents due to unique characteristics such as superelasticity and biased stiffness, were discussed. Vascular injury and the remodeling process after injury, such as negative and positive remodeling, were also reviewed. In addition, histology and several types of processing and staining utilized for microscopic evaluation of tissue structure were mentioned. Currently, there is little information available on the effect of stent radial force on rapidly growing arteries in pediatric patients. However, there are a good number of studies focusing on adult abdominal stent grafts and coronary and peripheral artery stents, exhibiting the effect of stents and stent grafts, and their radial forces, on vascular biology. Nevertheless, none of these investigators looked extensively at the large growth of small-crimp-profile bare metal stents and, in particular, none designed a stent that can grow with small, rapidly growing arteries for use in pediatric endovascular applications. An extensive literature search was performed, and the findings from a few key sources are summarized in two categories below. Siegenthaler et al. evaluated the growth and the effect of stent grafts covered with polyester on the thoracic aorta in young piglets. The authors concluded that the stent graft may inhibit growth of the nonatherosclerotic normal aorta and lead to intimal hyperplasia and focal fibrosis in the inner media adjacent to the stent.
Siegenthaler et al. proposed several reasons for their finding, including vascular hemodynamics and the change in the pressure profile on the arterial wall due to the polyester covers on the stent.

Polyester covers can absorb most of the mechanical forces on the arterial lumen, changing the wall stress, lessening pressure contact, and reducing the pulsatility exposure of the aortic wall. Another problem with stent grafts is the potential to cover side branch vessels during deployment, usually the subclavian artery ostium in CoA. In conclusion, the authors suggested that more studies should be conducted to evaluate stents and stent grafts in the growing aorta. Cheung et al. reported on the early and intermediate-term follow-up results of the Wallstent, a self-expandable stent, implanted in children with congenital heart disease. The Wallstent has been widely used by interventionalists in Europe for adult patients with iliac and femoral arterial stenosis. In two different centers, from 1993 to 1997, Cheung et al. implanted Wallstents in 20 children with an average age of 10 years and an average weight of 30.5 kg. The results showed immediate expansion of the stents and reduction of the pressure gradient in the patients. However, the authors observed migration in two of the optimally positioned stents within 24 hours of implantation, along with significant neointimal ingrowth in 28% of the patients at a mean follow-up duration of 8.1 months, which contrasts with the experience of patients with Palmaz stents, where significant restenosis occurs in 3%. Cheung et al. suggested the thrombogenicity of the stent could be due to its design, a woven mesh with expanding radial force, versus the Palmaz's rigid slotted tube with a smooth, even surface. The authors also reported that the stent did not keep pace with the growth of the vessel, which limits its use in young children. Hong et al. performed an experimental study with the CardioCoil, a self-expanding stent, in the coronary arteries of pigs for a duration of six months.
The authors performed angiographic and histologic analyses to evaluate the deployment characteristics, patency rates, and neointimal response. The neointimal responses in this study were not significant, and the stents remained patent through the survival period of up to 6 months.

The stents expanded over time; the mean diameter of the stents was 2.85 ± 0.78 mm at the time of implant and 3.24 ± 0.97 mm at follow-up. Hong et al. observed penetration of most of the stents' struts into the adventitia. The authors concluded that the self-expanding stent is associated with favorable deployment characteristics and patency rates, although appropriate sizing is more crucial than with balloon-expandable stents. More importantly, Hong et al. concluded that, unlike with balloon-expandable stents, there is a dissociation between major vessel injury caused by the chronic strut expansion process and the neointimal reaction. Freeman et al. explored the effect of stent forces on vascular stenosis and remodeling by placing stainless steel stents with three chronic outward forces (COF) of 3.4 N, 16.4 N, and 19.4 N in the iliac arteries of juvenile porcine models for a duration of 30 days, in order to develop an equation for identifying the optimal stent force. The results revealed a significant increase in total thickness and neointimal hyperplasia in the stents with higher COF compared to those with lower COF, which corresponds with several other similar findings. Freeman et al. concluded that the geometry, structure, and mechanics of the target vessel need to be considered when a stent is designed and that, in order to achieve maximum dilation, stents should not produce stress in the vessel wall greater than the end of the transitional domain of the vessel's stress-strain curve. The authors suggested that their findings could be extremely useful in vascular stent development. In a 180-day study, Zhao et al. explored late stent expansion and neointimal proliferation of over-expanded Nitinol stents in peripheral arteries. The authors used Nitinol self-expanding stents with a maximum diameter of 8 mm and a length of 28 mm. Zhao et al. implanted the stents into the iliofemoral arteries of Yucatan swine.
Due to variations in target artery size, the stent-to-artery ratio ranged from 1.2:1 to 1.9:1, and the effect of stretching was investigated. The authors observed that a high stent diameter-to-artery ratio resulted in overstretching of the arterial wall. Finally, Zhao et al. reported that overstretching of an artery can lead to medial injury, and medial injury will cause a profound long-term histological response, including significant neointimal proliferation. Saguner et al. found that stents constrained by their target artery at implantation expanded over time to near their nominal diameter within five months. As in the previous study, severe oversizing, measured as an oversizing ratio, resulted in significant neointimal proliferation and in-stent restenosis. Barth et al. performed a side-by-side comparison of three stents currently on the market that are substantially different in their physical characteristics: the Palmaz stent, the Strecker stent, and the Wallstent. The Palmaz is the most rigid stent and has a very high resistive outward force in vitro in comparison to the Wallstent. The Strecker is made of tantalum, has the lowest resistive force of the three, and is very flexible and maneuverable. Among the three, the Palmaz stent is nonelastic with a lower profile, the Wallstent is fully elastic with a higher profile, and the Strecker stent is elastic to a lesser degree with a higher profile. All stents were implanted into the external iliac arteries and the flexing portion of the proximal femoral arteries of dogs. Angiographic images of mid-stent luminal diameters immediately after stent placement and at follow-up, as well as mid-stent cross-sectional areas of neointima, were compared by the investigators for significant differences.
Barth et al. concluded that the Strecker stent, with its high profile and low resistive force, is affected by vascular wall recoil and caused the formation of a greater amount of neointima in comparison to the lower-profile, high-resistive-force Palmaz stent and the Wallstent. Medial atrophy was pronounced outside the latter two stents. The authors found that in flexing arteries, a rigid stent can penetrate through the vascular wall. Sakakoa et al. studied the vascular response to bare Nitinol stents in porcine femoral (FA) and femoropopliteal (FPA) arteries. The authors performed quantitative angiography and histopathology at one and three months to evaluate and assess the biological response to the two devices. Sakakoa et al. observed a greater increase in neointimal area and greater late lumen loss in the FPA than in the FA. The authors concluded that repetitive interaction between the stent and the vessel wall during dynamic vessel motion could affect vascular responses. Several clinical studies have reported the use of different types of stents for fixation of CoA. A few of them were reviewed and are summarized here.
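The sizing quantities these studies report reduce to simple ratios. The sketch below is my own illustration, not the authors' method; the 4.2 mm artery diameter is a hypothetical value chosen only to reproduce a roughly 1.9:1 ratio like the upper bound Zhao et al. reported:

```python
# Two sizing quantities discussed in the studies above, as standard formulas.

def oversizing_ratio(stent_nominal_mm: float, artery_mm: float) -> float:
    """Stent-to-artery (oversizing) ratio; e.g., 1.2:1 means 20% oversized."""
    return stent_nominal_mm / artery_mm

def percent_expansion(d_implant_mm: float, d_followup_mm: float) -> float:
    """Late expansion of a self-expanding stent, as a percent of implant diameter."""
    return 100.0 * (d_followup_mm - d_implant_mm) / d_implant_mm

# Hong et al.: mean diameter 2.85 mm at implant, 3.24 mm at follow-up.
late_expansion = percent_expansion(2.85, 3.24)   # ~13.7% late expansion

# Zhao et al.: an 8 mm nominal stent in a hypothetical 4.2 mm artery
# gives a ratio near the 1.9:1 upper bound they reported.
ratio = oversizing_ratio(8.0, 4.2)
```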

Haji-Zeinali et al. used commercially available self-expandable Nitinol aortic stents in eight hypertensive patients with coarctation of the aorta. The authors showed that after implantation of the stents, the mean systolic gradient decreased significantly. Haji-Zeinali et al. also reported that Nitinol stents were easier to deploy and conformed better to the aortic anatomy compared to balloon-expandable stents. Finally, the authors found that Nitinol stents could be used to treat coarctation of the aorta safely and effectively; these stents had efficacy similar to surgical repair in reducing coarctation of the aorta. Although Haji-Zeinali et al. used these stents in adult patients, we believe the application of Nitinol self-expanding stents can be extended to pediatric applications, and especially neonatal applications, for the reduction of CoA. Bugeja et al. used a stent in a neonate for fixation of coarctation of the aorta. They reported a case of a severely ill newborn with complex coarctation, multiorgan failure, disseminated intravascular coagulation, and oedema, who had to undergo an emergency stenting procedure on the tenth day of her life. Since there are no stents designed for neonates, the authors used an adult bare metal coronary stent off-label. Given the fast pace of growth in neonates, Bugeja et al. placed the stent temporarily and planned a surgical procedure to remove the stent and fix the coarctation surgically. This study clearly demonstrated the need for a stent that can be placed in patients and grow with them to eliminate or reduce future interventions. Prior to designing the stent for this investigation, the most commonly used stents in the congenital heart disease field were reviewed.
Stents can be categorized based on their delivery method: balloon-expandable or self-expandable. Balloon-expandable stents are inflated with a balloon, and their size is determined by the diameter of the balloon with which they are inflated. These stents are mostly rigid, with high external outward force.

The grower-only group tends to use more diverse marketing channels

Farmers who wish to remain eligible for some USDA program benefits must obtain catastrophic insurance or higher levels of coverage. Given the relatively few government programs available for specialty crop growers, this ranking may be associated with specialty crop growers who have diversified into field crops. However, it is worth mentioning that not even one-quarter of potential respondents provided a rank for any reason for purchasing crop insurance except "crop loss," which was chosen by more than three-quarters of the insurance buyers. This indicated that many felt that any reason other than crop loss was only remotely related. Reasons for not purchasing crop insurance and their mean rankings are presented in Figure F2. "Never lost enough production" and "premium is too high" ranked highest among the choices offered except "other." This reflects the relatively low degree of yield variability in many specialty crops grown in California. "Lack of availability for my crop" was next. Particularly among vegetable growers, lack of availability was ranked as the primary reason for not purchasing crop insurance, with a mean rank of 1.6. Further, "major source of risk is not an insured cause of loss" and "do not understand the program" were not trivial. Finally, for almost all crop categories, "other" ranked as the primary reason for not insuring. This may imply that there is substantial "catch up" to be done for both growers and insurance providers: more efforts are needed to inform growers about crop insurance, and authorities need to learn the unique reasons why growers of particular crops do not purchase insurance. Table F3 provides the average ranking of suggestions to improve crop insurance. The suggestions listed were mostly related to compensation schemes.

For fruit/nut and vegetable farmers, "raising the yield guarantee," "compensating for revenue or profit," and "guaranteeing cash production costs" ranked high, while for ornamental growers, "compensating for revenue or profit" and "guaranteeing placement costs of an inventory" ranked high. For fruit/nut farmers, guaranteeing the cost of establishing an orchard was not as preferred as compensation of cash production costs, and a compensation scheme for ornamentals needs to be devised to accommodate their production systems, because traditional yield-based production is not relevant to them. Overall, it was clear that specialty crop growers were more concerned with revenue and profit variability than with yield variability. This attitude is common among farmers in California's irrigated agricultural industry. Recent research on crop insurance has consistently identified some level of demand, but that demand has been influenced by numerous factors. A decade ago, research focused primarily on yield risk as the key determinant of demand for crop insurance. Studies of that period focusing on specialty crops found that growers' reluctance to insure was based on the fact that price variance was often more significant than yield variance. This prompted the first assessments of revenue insurance as an alternative. In recent years, revenue insurance has received wide attention. However, the few studies of specialty crop producers' demand for revenue insurance have shown a need for more detailed, crop-specific analyses of market and grower factors. The final section of analysis focuses on four financial variables: off-farm income share, gross agricultural sales, assets, and debts. Previous research has shown that these factors have a significant influence on farmers' risk attitudes and, thus, on their risk management practices. For example, off-farm income supports most farms in the United States.
The cushion from off-farm income makes many of those farms less sensitive to income risk, thus decreasing the demand for risk management tools.

In other words, off-farm income substitutes for other risk management tools to some extent. Figure G1 presents the distribution and mean of off-farm income shares by crop category. The "share" refers to the percentage of total household income that comes from off-farm sources. The mean share for the entire survey was 63 percent. In general, there seemed to be a common pattern in the distribution for each crop category. Each distribution showed relatively heavy densities in the 1 to 10 percent range and then in the mid-range at 41 to 50 percent. The density started to increase in the 71 to 80 percent range. Note that the 91 to 100 percent range showed the highest density among all ranges for both fruits/nuts and ornamentals. However, the distribution of farms in the vegetable category deviated from the other two categories. The distribution of vegetable farmers showed greater density in the ranges with relatively low off-farm income shares, indicating that vegetable growers tend to spend less time on off-farm activities and get more of their income from farming than do fruit/nut or ornamental growers. Table G1 provides average values of gross agricultural sales, assets, and debts. Along with mean dollar figures, the table also reports the standard deviations in parentheses. There were substantial differences across crop categories. Consistent with the earlier findings on mean acreage, vegetable growers' mean gross sales were much higher than those of the other categories, nearly three times that of fruits/nuts and one and a half times that of ornamentals. The standard deviations of the mean gross sales were relatively large, indicating substantial variation in sales figures across farms. Nevertheless, judging from the values of the coefficients of variation, it was possible to infer that the variation in gross sales was less severe for vegetable farms. Vegetable operations also had the highest mean values for assets and debts.

The reported mean values of assets and debts gave debt/asset ratios of 0.42 for fruits/nuts and 0.50 for vegetables. More importantly, when viewing assets and debts as financial inputs necessary to generate revenue, the ratio of gross sales revenue to the sum of assets and debts was highest for vegetables and lowest for fruits/nuts. This implies that one unit of financial inputs is associated with a higher level of revenue for vegetables than for fruits/nuts or, equivalently, that one unit of revenue is associated with a lower level of financial inputs for vegetables than for fruits/nuts. This cursory observation may be linked to the relatively high intensiveness of financial inputs required, or the relatively low performance of financial inputs, in fruit/nut production. Mean gross sales by region varied substantially. Gross sales data by crop category and by region indicated that the lowest gross sales were in the Far North region for both the fruit/nut and the vegetable categories, as expected because of that region's lack of suitability for such crops. The highest mean sales for the fruit/nut category were the Central Coast-North's $0.6 million; for the vegetable category, the highest mean sales were the Sacramento Valley's $1.8 million. Figure G2 provides the distribution of gross agricultural sales by crop category. The median and mean gross sales diverged considerably; the median was only about one-tenth of the mean value, due to the inclusion of some extremely high sales values for a few very large-scale operations combined with the large number of small-scale farms. In the vegetable category, there were relatively higher proportions of farmers in higher sales ranges. The proportions of farmers with more than $1 million in sales were 6 percent for fruits/nuts, 29 percent for vegetables, and 13 percent for ornamentals. Figures G3 and G4 provide the mean gross sales by off-farm income share and by acreage class, respectively.
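The two financial ratios described above are straightforward to compute. In this sketch the dollar amounts are hypothetical placeholders (Table G1's figures are not reproduced here), chosen only so that the debt/asset ratios match the reported 0.42 and 0.50:

```python
# Financial ratios from the discussion above. Dollar values are hypothetical
# placeholders, NOT Table G1's actual figures.

def debt_asset_ratio(debts: float, assets: float) -> float:
    return debts / assets

def revenue_per_financial_input(gross_sales: float, assets: float, debts: float) -> float:
    # Gross sales revenue per unit of financial inputs (assets + debts).
    return gross_sales / (assets + debts)

fruit_nut = {"assets": 1_000_000, "debts": 420_000, "sales": 300_000}    # hypothetical
vegetable = {"assets": 2_000_000, "debts": 1_000_000, "sales": 900_000}  # hypothetical

r_fruit = debt_asset_ratio(fruit_nut["debts"], fruit_nut["assets"])      # 0.42
r_veg = debt_asset_ratio(vegetable["debts"], vegetable["assets"])        # 0.50
```

With these placeholder figures, revenue per unit of financial input comes out higher for vegetables than for fruits/nuts, mirroring the qualitative pattern the survey reports.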
Mean gross agricultural sales were negatively correlated with off-farm income share and positively correlated with acreage, confirming our expectation that higher agricultural revenues were generated by farms with larger acreage and by farmers with less off-farm work. However, when sales revenue was computed as per-acre revenue, Figure G4 suggests that revenue per acre decreases as acreage increases. This is not counterintuitive, given that specialty crops vary widely in unit value and the survey results indicated that smaller farms were, in general, associated with higher-value crops. The main purpose of this report was to provide detailed and unique survey-based information on the fruit/nut, vegetable, and ornamental crop industries of California. The main findings from these survey data are as follows. California has fewer vegetable farms but, measured by gross sales and other dimensions, they are larger operations than fruit/nut farms. Diversification increases with farm size, measured in acres. Fruit/nut farms are, on average, less diversified than vegetable farms, and when fruit/nut farmers diversify, they tend to use similar crops. About 6 percent of fruit/nut and vegetable farms have some organic land. These organic farmers represent 6 percent of fruit/nut farms, 14 percent of vegetable farms, and 4 percent of ornamental crop farms. Many of these farms also engage in conventional farming, and they devote, on average, about one-third of their primary crop land to organic farming. California farms tend to grow produce for either processing or fresh use, but not for both. About 71 percent of the sampled fruit/nut farms produced mainly for processing use.
About 67 percent of sampled vegetable farms produced mainly for fresh use. Contracts play a major role in marketing for specialty/horticultural crops. They are particularly important in markets for crops designated for processing. Nearly 60 percent of fruit/nut farmers and 90 percent of vegetable farmers marketed their processing commodities through contract arrangements. The majority of these contracts provided for a predetermined price.

About 13 percent of vegetable farms but only 3 percent of orchard farms are grower/shippers. These farms tend to be larger than average and supply mass merchandisers. Among the various channels, "directly to consumers" was used by the largest share of farms, but those farms tended to be smaller than average. Yield variability is an important risk factor for growers. Orchard and vineyard crop yields tend to fluctuate more than vegetable yields. Orchard and vineyard crop yields deviated an average of 15 percent from the five-year moving-average yield, compared to an average of 8 percent for vegetable crop yields. Despite considerable yield variation from year to year for these California crops, price variability is listed by growers as the most important risk source. Growers list price declines due to industry-wide overproduction as the number one concern. Growers use diversification and some marketing channels to manage risk. Crop insurance is less available for vegetable crops than for fruit, vine, and nut crops. Vegetable producers view crop insurance as a "less preferred" risk management tool. When asked about crop insurance programs, many farmers suggested that a "higher yield guarantee" would improve crop insurance. Further, most farmers strongly suggested the need for crop insurance that compensates in value terms, but they expressed no strong preference among compensation based on gross sales, profit, or production costs. The information provided in this study and the data set that underlies it will prove useful to agricultural business firms, including individual farms, as well as to government policy advisors and program designers. The study results provide a benchmark that allows industries to compare operations to the averages and medians for specific crops or locations.
It also allows agricultural marketing and other service and supply firms to better understand their own potential supply and customer base for planning and product development. Such detailed data have not been available previously. The data are being used in risk management education efforts for growers and in summary form to provide objective data about grower operations and attitudes. The data and results also have implications for public policy and implementation of public policy, especially relative to risk management. Some examples are provided here. We find that many growers use crop diversification to smooth their revenue streams, but some growers find diversification more difficult or costly. Even if more diversified farms tend to have less variability in farm income, the degree and form of diversification affects the probability and magnitude of losses. The importance of diversification and its variation across specific industries points to the conditions under which yield insurance may be of interest and where it is less important to a farm’s annual revenue and thus less appealing as a risk management tool. The covariance between price and individual farm yield is another crucial piece of information in assessing farm revenue risk related to either price or yield variability. USDA’s Risk Management Agency has been developing whole-farm revenue insurance products.

Labor is also a current and significant challenge for growers of berry crops

Since 1990, UCCE researchers have used a farm budget software program to analyze the data and present results in several formats detailing costs for cultural and harvest practices, monthly cash costs and business and investment overhead costs. The studies also include an analysis estimating net returns to growers for several yield and price scenarios. Representative costs for food safety and environmental quality programs have been incorporated into more recent studies as they have evolved to become standard business practices. The resulting production and economic information is specifically designed to assist growers, bankers, researchers and government agencies with business and policy decisions. The first economic analysis of fresh market strawberry production for Santa Cruz and Monterey counties was performed in 1969; at least one subsequent analysis has been conducted every decade since then. Though the level of detail and data included in each study has changed over time, some interesting trends can be noted. Annual land rent climbed from $150 per acre in 1969 to $2,700 in 2014, representing 2.5% and 5.5% of total production costs, respectively. The cost of soil fumigation for conventional strawberry production increased from $350 per acre in 1969 to $3,302 in 2010, representing 5.5% and 6.9% of total production costs, respectively. Production year water use gradually decreased from 80 acre-inches per acre in 1969 to 36 acre-inches by 1996 as drip irrigation became the standard. The amount of water used to bring a crop to harvest has remained roughly the same since that time; however, growers and researchers continue to investigate methods to increase water use efficiency even further. In some soil types and fields, growers have been able to reduce per acre water use by several acre-inches more.

When the above costs and water usage are assessed on a per ton rather than a per acre basis, production practice cost increases are less notable, and water savings even greater. Labor-intensive practices such as hand weeding and harvest are consistently shown as costly line items relative to other operations. Representative yields for conventionally produced fresh market strawberries rose from 20 tons per acre in the 1969 study to 30 tons in 2010, an increase of 50%. Even higher yields are discussed for some varieties and production conditions; county production statistics confirm that higher yields are indeed possible. Representative yields for organic strawberries, studied over a much shorter time period, rose from 15 tons per acre in 2006 to 17 tons in 2014, an increase of 13%. As more research is directed towards organic agriculture in general and strawberries in particular, yields will likely increase even more with time. Recent efforts include improvements in cultivar breeding, cultural practices and disease management, especially soil pathogen management. The most recent economic analyses for conventional, second year conventional and organic strawberry production were performed in 2010, 2011 and 2014, respectively. Second year conventional strawberries, or those producing a crop for a second year after having produced the first without replanting, represent about 15% of the total strawberry acreage in the area. Similarities and differences in total, cultural and pest management costs for the three management approaches are shown in figures 1 to 3. Total costs for conventional strawberries were $47,882 per acre and include expenses for all practices from land preparation to harvest. For the second year conventional strawberry crop, total costs were lower at $32,798 per acre, reflecting a reduction in expenditures for land preparation and reduced harvest costs because of lower yield.
For organic strawberries, total costs were $49,044 per acre, slightly higher than for conventional production, mostly due to higher soil fertility input costs.
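The per-ton comparison described above reduces to a few lines of arithmetic. A minimal sketch follows: the per-acre figures come from the cost studies cited in the text, but the yield pairings (for example, matching the 36 acre-inch water figure with the 2010 study's 30-ton yield) and the function name are our own illustrative assumptions.

```python
# Minimal sketch of the per-ton normalization discussed above.
# Per-acre figures come from the strawberry cost studies cited in the text;
# the yield pairings are illustrative assumptions, not values from one study.

def per_ton(value_per_acre, tons_per_acre):
    """Normalize a per-acre cost or water figure by yield (tons/acre)."""
    return value_per_acre / tons_per_acre

rent_1969 = per_ton(150, 20)    # $7.50/ton at the 1969 yield of 20 tons/acre
rent_2014 = per_ton(2700, 30)   # $90/ton, assuming the 2010 study's 30-ton yield
water_1969 = per_ton(80, 20)    # 4.0 acre-inches per ton in 1969
water_drip = per_ton(36, 30)    # 1.2 acre-inches per ton with drip irrigation

print(f"Rent: ${rent_1969:.2f}/ton (1969) vs ${rent_2014:.2f}/ton (2014)")
print(f"Water: {water_1969:.1f} vs {water_drip:.1f} acre-inches per ton")
```

Under these assumptions, rent rose 18-fold per acre but only 12-fold per ton, and water use per ton fell by roughly 70%, which is the "water savings even greater" point made in the text.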

Harvest, a labor-intensive practice, clearly represents the lion’s share of total costs, at 58% in organic production, 60% in conventional production and 67% in second year conventional berries. Cultural costs represent 26% of total costs in the conventional and organic systems, but only 15% for second year strawberries because there were no associated planting costs, and because pest management costs were lower. Looking more closely at pest management, soil fumigation is the highest cost category for conventional production at $3,302 per acre, with weed control, another labor-intensive practice, the highest cost in second year and organic strawberries at $1,212 and $2,506 per acre, respectively. However, for organic strawberries the cost to control insects ran a close second at $2,488 per acre, which was dominated by control for lygus bug with a bug vacuum, and two-spotted spider mite with the release of predatory mites. By comparison, estimated costs for insect control in conventional strawberries were lower at $702 per acre and still lower at $579 in second year conventional berries. Raspberry and blackberry production were not routinely studied in years prior to 2003. Since then, several primocane-bearing raspberry and floricane-bearing blackberry cost and return analyses have been performed, with the most recent studies conducted in 2012 and 2013, respectively. Both studies detail establishment and first year production and harvest costs for not-yet-fully-mature crops. For raspberries, first year of production includes a $12,460 per acre construction, management and investment cost for protective tunnels. Costs for a mature raspberry crop are analyzed in the second production year and total $48,210 per acre. For blackberries, costs for a mature crop are shown for the second through fifth production years, and total $43,406 per acre per year.
Harvest costs again represent the vast majority of total costs, at 81% and 71% of total costs for raspberries and blackberries, respectively.
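As a check on the strawberry cost shares quoted above, the harvest-cost arithmetic can be sketched as follows. The totals and shares come from the 2010, 2011 and 2014 studies discussed in the text; the dictionary layout and variable names are our own.

```python
# Hedged sketch: reconstructing approximate harvest costs per acre from the
# total costs and harvest shares quoted in the text (2010/2011/2014 studies).

totals = {
    "conventional": 47882,   # $/acre, 2010 study
    "second_year": 32798,    # $/acre, 2011 study
    "organic": 49044,        # $/acre, 2014 study
}
harvest_share = {"conventional": 0.60, "second_year": 0.67, "organic": 0.58}

for system, total in totals.items():
    harvest = total * harvest_share[system]
    print(f"{system}: harvest roughly ${harvest:,.0f} of ${total:,} per acre")
```

The sketch shows why second year berries, despite the lowest total cost, have the highest harvest share: land preparation and planting costs drop out of the denominator while harvest remains.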

For raspberries, cultural costs represented a much smaller share of total costs at $4,656 per acre, roughly half of which was for trellis and tunnel management. Blackberry cultural costs totaled $5,709 per acre, of which over half was for pruning and training canes. Each study also includes an analysis of potential net returns to growers above operating, cash and total costs for a range of yields and prices. When evaluating net returns above total costs, gains are shown for higher yield and price points; losses are also documented at many lower yields and prices. Farms with productive soils, experienced managers, optimal production conditions and robust market plans generally realize higher net returns. In contrast, farms with less-than-optimal production conditions, reduced yields, poor fruit quality or inexperienced managers may see lower net returns. Results from the strawberry analyses show that on a per acre basis, organic strawberries tend to be more profitable than conventional berries, even with lower yields. Organic price premiums explain the result; in this example price per tray for organic strawberries ranged from $12 to $18, while price per tray for conventional berries ranged from $7.30 to $11.30. Prices for second year conventional strawberries were slightly lower still to account for a portion of the crop that was diverted to the freezer market. Net returns for both caneberries were mostly positive. Other noteworthy entries in all recent berry studies include per acre costs for pest control advisers, management of invasive pests and food safety and regulatory programs for water and air quality. Though each alone represents a relatively small portion of total costs, they provide readers with insights into the changing nature of berry production activities and costs over time. Cultural practices in the berry industry have evolved to address changes in soil, water and pest management needs.
New varieties have been developed to enhance yield and quality attributes. Based on historical trends, and to meet both industry needs and consumer demands, we expect to see new varieties continually developed over time. Businesses have responded to consumer and market demands for fresh, safe and organic products by implementing food safety programs and/or transitioning more lands to organic production. Water and air quality programs have been developed to comply with state regulatory requirements. In the past, growers customarily hired those with expertise in financial and market management; they now also enlist the support of experts in food safety, organic agriculture and environmental quality to assist with farm management. But challenges remain, and management of key agricultural risks — including those for production, finances, marketing, legal and human resources — has become increasingly important. Invasive pests pose significant management and regulatory constraints and increase production, financial and market risks. Two recent examples are light brown apple moth and spotted wing drosophila. LBAM infestations can lead to loss of part or all of the crop because of field closure from regulatory actions, increasing production and financial risk. SWD presents substantial market risk to growers in that its larvae can infest fruit and render the crop unsaleable. Growers minimize the risk of loss from these two organisms with the routine use of PCAs. PCAs monitor fields more frequently than growers alone would be able to do, identify pests and recommend actions, for example, the use of pheromone mating disruption for LBAM and field sanitation for SWD. Since their introduction, the soil fumigants CP and MB have unquestionably contributed to the expansion of the berry industry.
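The net-returns tables described in the cost studies above come down to one formula: net return per acre equals price per tray times trays per acre, minus total cost per acre. A hedged sketch follows, using the organic total cost and tray-price range quoted earlier; the trays-per-acre values are hypothetical placeholders, not figures from any study.

```python
# Hedged sketch of a net-returns grid: net = price/tray * trays/acre - cost.
# Total cost and the price range come from the organic strawberry study
# cited above; the trays-per-acre values are hypothetical.

TOTAL_COST = 49044  # $/acre, organic strawberries (2014 study)

def net_return(price_per_tray, trays_per_acre, total_cost=TOTAL_COST):
    """Net return per acre above total costs."""
    return price_per_tray * trays_per_acre - total_cost

for price in (12, 15, 18):            # $/tray, organic range from the study
    for trays in (3000, 4000, 5000):  # hypothetical yields
        net = net_return(price, trays)
        print(f"${price}/tray x {trays} trays: net ${net:,} per acre")
```

As in the published tables, the grid shows losses at the lower yield and price combinations and gains at the higher ones.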

However, the full phaseout of MB as a pest management tool — it will no longer be available for use in berry production after 2016 — presents both production and financial risks. While a substantial research commitment has been made to finding alternatives to MB, nothing has yet come close to offering the same level of protection from the large-scale loss to soil pathogens or the gains in productivity associated with the application of CP and MB as synergistic preplant fumigants. We anticipate that the berry industry will adapt to the MB phaseout by using alternative fumigants and preplant soil treatments, but these are likely to carry a higher level of risk for berry production in the short term and may lead to a decrease in planted acreage and production. However, this may also stimulate an even more robust research agenda directed towards soilborne diseases and plant health to minimize disruption to the industry. Reliance on fumigants as the primary strategy for pest management is almost certainly a thing of the past. Instead, adoption of integrated approaches, including alternatives to fumigants, to manage diseases, weeds and other pests will be key to sustaining berry production over the longer term. Social and demographic changes in Mexico — the source of a majority of the area’s agricultural labor — have resulted in markedly lower immigration rates into the United States, a shrinking labor pool and upward competition and wage pressures for the agricultural workers who remain. In recent years, growers have reported difficulty in securing and retaining sufficient numbers of workers to ensure timely and effective farm operations. The lower production figures seen in strawberries in 2014 may in part have been the result of an insufficient labor pool from which to draw. However, no known regional employment or wage data are available to specifically document this.
Some growers minimize labor risk by paying higher wages and providing year-round employment when possible. However, these strategies can be difficult for some businesses to justify economically. Arguably, the area’s berry industry, and agriculture more generally, increasingly face political risk. Immigration legislation that may assist with the current labor challenge languishes at the federal level, with major policy changes unlikely before 2017. Farming practices are under ever more scrutiny by consumers, local municipalities and state and federal agencies. Soil fumigants and pesticide use have been the focus of many intense debates and discussions, especially in Santa Cruz and Monterey counties. At the time of this writing, several new regulations related to pesticide application notifications, pesticide and fumigant application buffer zones and worker safety have been proposed by the California Department of Pesticide Regulation or the U.S. Environmental Protection Agency but have not yet been finalized. It is anticipated that implementation will begin in 2017, with full compliance required in 2018. And, as California struggles through a fifth year of drought, water use, quality and cost have become a more robust part of the local, state and federal discourse, with directives issued and new legislation proposed. Compliance with each new directive or regulation presents production and logistical challenges for growers and can be costly to manage. Although it is unlikely that regulatory pressures will lessen in the future, there is every expectation that growers will continue to adjust business practices to meet or exceed any new requirements or standards.

Many of the same people working on projects in Berkeley later became involved in Oakland

In East Oakland’s Fruitvale District, the Coalition for Healthy Communities and Environmental Justice joined with PUEBLO, the Center for Environmental Health, and Greenaction, and triumphed in 2001 after a four-year battle to shut down the Integrated Environmental Systems medical waste incinerator which had been polluting since the early 1980s. While the movement against these industries was sometimes fractious due to disagreements over potential job loss, the coalitions were ultimately strong. Unlike the Black Panther Party’s Food Program, nothing about the EJ movement spoke directly to the issue of food access. What the EJ movement did provide, however, was training in the trenches for a generation of activists. It mobilized community members to act; victories cultivated a sense of empowerment and reclaimed a political voice that had been silenced by decades of flatlands devaluation, while failures underscored the importance of ongoing resistance. The EJ movement also drew attention to the flatlands and to the injustices that have produced them as a social and ecological space. Importantly, the movement fostered and galvanized alliances between policy and research intermediaries and community-based organizations and neighborhood residents. Alliances such as these would be central to the success of the urban agriculture and food justice movement that was slowly beginning to coalesce in the flatlands at the same time. A pivotal moment connecting EJ to what would become the food justice movement occurred around the same time and involved a theoretical shift in the way that struggles over race, poverty, and environment were framed.

A new “spatial justice” framework helped to highlight the interrelations between racial and economic segregation, the built environment, and access to entitlements such as healthy food, clean air and water, and open space. This new theoretical framing was forged in large part through the efforts of Carl Anthony and Karl Linn. By the early 1990s Anthony had become a prominent voice in the Bay Area EJ movement. Like other EJ activists, he attempted to shift the attention of the mainstream environmental movement towards urban areas, and fought to overcome what he termed the “apartheid of consciousness”—the belief that social and environmental issues were somehow distinct—keeping inner-city people of color and white suburban environmentalists from joining forces to tackle environmental issues. Studying architecture at Columbia University in the 1960s while working as a civil rights activist on the side, Anthony began to think about the relationships between social justice and the built environment. He later became involved in the “community design” and “advocacy planning” movements, both of which emphasized moving the process of urban planning and design out of the hands of technocrats and into those of low-income communities. In the late 1980s, Carl Anthony reconnected with Karl Linn, a landscape architect who had led a long and productive life as a farmer, psychologist, landscape architect, and educator on three continents. The two were old friends, having met in North Philadelphia in the early 1960s when Linn was teaching landscape architecture at the University of Pennsylvania. Through his “community design-and-build service education program” Linn and his students worked with community members in ramshackle neighborhoods and vacant lots throughout the city. He was later instrumental in the community gardening movement of the 1970s and was a founding member of the American Community Gardening Association.
Anthony credits Linn with giving him “some sense that you could actually put together a social agenda and an environmental design agenda”. When Linn moved to Berkeley in 1986 upon his retirement, the two joined forces to expand awareness within the white environmental world of the issues of social, racial, and economic justice that were at the forefront of concern for people of color.

The underlying structural conditions of the flatlands—the demarcated devaluation I described in the previous chapter—proved fertile ground in which a productive synthesis of the theories and activism of the two men could take root. Until this point, environmental groups, many of them located in the Bay Area, focused primarily on struggles to conserve wilderness areas at all costs, often conflating subsistence resource use by indigenous peoples with large-scale capitalist resource extraction. Linn urged Anthony to connect with David Brower and other white environmentalists, some of whom were supporting social justice struggles in the Global South. On Linn’s urging Anthony joined the board of Brower’s Earth Island Institute, provided that he “could create a program that would really address the environmental issues from the perspective of social justice”. In a 2003 oral history, Anthony remembers, “What we found was that every environmental issue was also a social justice issue. As we began to get into it, we could see the connections … We had to have more of a sense that these issues have to be together”. In 1989 Urban Habitat was born. Most of the justice-oriented urban agriculture efforts cropping up at the time were concentrated in flatlands of southwest Berkeley just across the city limits from Oakland. The majority of these projects pushed the boundaries of conventional community gardening by emphasizing youth employment and food security. These efforts, which predominantly employed young African Americans, helped to increase the involvement of people of color in urban agriculture. Shyaam Shabaka, a PCGN member and co-founder of EBUG, and Melody Ermachild Chavis, a white neighborhood activist, founded Strong Roots in 1994. Shabaka had spent time working on a horticulture project in Mali and hoped to reconnect African Americans with “the lost agricultural heritage that’s rightfully ours”.
The Strong Roots motto was “Gardening for Survival” and employed fourteen youth at six gardens throughout Berkeley, including at a vacant lot at the corner of Sacramento Ave. and Woolsey St. that was home to drug deals and drive-by shootings. Funding came in part from the federal Summer Youth Employment and Training Program before it was axed by the 1995 budget under Newt Gingrich’s Contract for America. Other funding came from a federal substance abuse prevention program . A host of similar programs cropped up at the same time, focusing on youth employment and training.

Berkeley Youth Alternatives Director Niculia Williams and UC Berkeley Landscape Architecture student Laura Lawson started the BYA Garden Patch as an alternative to the fast food breakfasts that most of the children attending BYA’s programs were eating. In 1994 the garden was established with the labor of community members, AmeriCorps and East Bay Conservation Corps volunteers, and UC Berkeley students. Through the ‘90s it grew to include community garden plots and a Youth Market Garden that provides youth with employment and on-the-job training and the organization with revenue. By 1998 the Youth Market Garden had earned more than $10,000 in sales. Cut flower sales added to revenue, as did a twenty-five member sliding scale CSA. In 1993, the same year that the BYA Garden Patch was planned, Spiral Gardens was created “by a handful of individuals dedicated to urban greening, innovative organic farming methods, food security, and environmental justice issues” on Sacramento Avenue in South Berkeley, across the street from the Strong Roots garden. A project of the Agape Foundation for Nonviolent Social Change, the organization grew vegetables, herbs, and native plants for sale, in addition to offering community gardening plots and horticulture workshops. One of the founders, Daniel Miller, also ran the Urban Gardening Institute, a garden based job training and microenterprise program for people enrolled in a drug rehabilitation program and transitioning from homelessness. The program was run through Building Opportunities through Self-Sufficiency at several homeless shelters, residential hotels, and community gardens. The two programs merged in 1997 and in 2004 became a 501 nonprofit called the Spiral Gardens Community Food Security Project.
Berkeley’s justice-oriented urban agriculture activists also gained inspiration and material support from a growing national movement that brought together anti-hunger, sustainable agriculture, farm labor, environmental, and health and nutrition activists. In the summer of 1994, the Community Food Security Coalition formed and drafted their equity-based vision for integration into the Farm Bill. While most of their recommendations failed under a Republican-controlled Congress, the 1996 Farm Bill included a provision to provide annual funding for projects that would “meet the needs of low-income people, increase the self-reliance of communities in providing for their own needs; and promote comprehensive responses to local food, farm, and nutrition issues”. These Community Food Project Grants would play a role in the East Bay over the next decade, some destined for school gardens, others for developing local community food security gardens. Alliances with CFSC activists also helped to galvanize the fledgling justice oriented urban agriculture movement by linking activists in the East Bay to a larger national network that shared ideas, information, and other resources through newsletters, conferences, working papers, small grants, and email list-serves, once again opening up new spaces of engagement to defend spaces of dependence, first in Berkeley’s flatlands and later in Oakland. Berkeley essentially served as a hub of urban agriculture innovation, attracting activists and organizations that were, in turn, able to marshal public and private funding necessary to sustain the equity-oriented urban agriculture activity.

Indeed, the food security and youth employment projects in South Berkeley were mere blocks from the boundary of North Oakland. Many of the young activists involved in urban agriculture at the time actually lived in Oakland where rent was cheaper. One former activist working in one of the South Berkeley gardens blames changes in rent control in Berkeley for his move to West Oakland in the mid ‘90s; in 1995 the passage of a state law, AB 1164, allowed landlords in Berkeley to raise rents when units became vacant and many young activists were simply priced out of Berkeley. By the early 1990s several school gardens had sprouted up. A few of these were in Oakland, but like the community gardens, the nexus of school gardening activity in the Bay Area was in Berkeley. Ground was broken at Willard Middle and LeConte and Malcolm X Elementary Schools. These new gardens were by no means the first in Berkeley’s history. A 1918 history of Berkeley’s public schools dedicates a short chapter to the school gardens that were used to “provid[e] vital contact with the facts and forces of nature” and “to teach children order, industry, respect for labor, and thrift, besides a love and sympathy for the wonderful and beautiful”. While the emphasis three-quarters of a century later was perhaps less about industry, labor, and thrift, fostering a love for nature was surely still on the agenda. Perhaps new to the garden-based curriculum was an emphasis on nutrition. In March 1997, another CUESA conference helped to galvanize the importance of urban gardens in the East Bay as well as draw national attention—and funding—to the area’s fledgling school garden initiatives. Like the previous conference that helped bring an emphasis on social justice into the urban agriculture discourse, this event helped to emphasize the linkages between urban agriculture and nutrition.
Held at MLK Middle School, “A Garden in Every School: Cultivating a Sense of Season and Place” was intended to cultivate a vision of fresh and nutritious food for all school children, and brought school system officials, teachers, planners, and gardeners under the same roof. CUESA Director Sibella Kraus recalls, “The thinking was that a high end farmers market in San Francisco is making a difference to some people, but not to others…. We thought we’d maybe get thirty people or fifty, but we got 900 people! It completely sold out. People were just really ready for it to happen”. The event coincided with the establishment of the Edible Schoolyard at the school. Founded by Chez Panisse’s owner Alice Waters, the Edible Schoolyard incorporates garden- and cooking-based education, connecting fresh food to healthy lunches. The program has been widely lauded and replicated nationally, and has become a model for revamping the school food system. The parents of school children were also central to the expansion of school gardens, and urban agriculture more broadly. Beebo Turman, a pre-school teacher, parent, and backyard gardener met with Alice Waters and “six or eight other parents” at a Parent-Teacher Association meeting at King Middle School in 1993 and began organizing, writing grants, and fundraising to get the Edible Schoolyard up and running.

Rural migrants often discover on arrival in urban centers that prospects for employment are slim

In an often-cited example, the expansion of capitalist agriculture in Europe and North America led to a soil fertility crisis during the 19th century. A mad dash for new sources of fertility ensued, notably for South American guano and saltpeter, and a nascent synthetic fertilizer production industry developed. The scramble to locate new sources of fertility drove imperialist expansionism which ultimately displaced the metabolic rift elsewhere. As Engels explained in the late 19th century, each technological triumph over nature leads to other crises: “For each such victory takes its revenge on us. Each victory, it is true, in the first place brings about the results we expected, but in the second and third places it has quite different, unforeseen effects which only too often cancel the first”. These short-term technological fixes inevitably generate new metabolic rifts, amounting to “a shell game with the environmental problems [capitalism] generates, moving them around rather than addressing the root causes”. However, this shell game is not just a matter of space, but also a matter of scale. While a rift in a particular metabolic process occurs at a particular scale, social metabolism of nature continues at new spatial and temporal scales as production is relocated or becomes dependent on new inputs. Capitalist rationalization of agriculture arose from the pursuit of new markets and from the need to avert crises of production, such as falling rates of profit due to competition, a decline in availability of raw materials, or environmental pollution and declining worker health resulting from production practices. These shifts in production severed particular metabolic interactions.

The separation of animal and crop production in industrial farming systems, for example, ruptured cycling of nutrients at the farm scale, leading to an increased reliance on off-farm inputs, such as fertilizers and feed shipped in from other regions. This rift in nutrient cycling therefore resulted in a rescaling of social metabolism; put simply, the inputs necessary to sustain human life under this new production system came from farther and farther away. Sustaining social metabolism under a food production system that depletes rather than regenerates the resource base depends not only on such spatial rescaling, but also on temporal rescaling. Rescaling requires what ecologists refer to as spatial and temporal “subsidies” to the food web, inputs that are produced on different geographic and/or time scales. Since a subsidy is cross-scalar, its incorporation into a metabolic system inherently creates a new ecological rift as it is depleted; it is impossible to close the loop between the source and sink of such a cross-scalar subsidy. During the aforementioned crisis in soil fertility, for example, guano and nitrates were mined from decades- and centuries-old deposits from Peru and Chile, then transported across oceans to Europe and America. Replenishing these stocks would have been impossible within the span of a single cropping season, much less within the span of a human life. Once guano stocks were exhausted, agribusiness interests turned to synthetic fertilizers. The natural gas and petroleum needed to produce synthetic fertilizer and power tractors are millions of years old, drawn from gas fields and oil wells around the globe and shipped to factories and refineries before being used thousands of miles from the point of extraction.

It becomes easy to see how ecological rift scales up, making social metabolism a global affair, dependent on millions-of-years-old subsidies from tens of thousands of miles away. If, as Huber argues, fossil fuel use is “an internal and necessary basis to the capitalist mode of production,” ecological rift and the resulting spatiotemporal rescaling of social metabolism is internal and integral to the contemporary agri-food system. Relocalizing these nutrient cycles and reducing dependence on petroleum-based food production lie at the heart of urban agriculture’s potential to mitigate metabolic rift. British agronomist Sir Albert Howard, concerned that organic wastes were rarely cycled back to their point of origin in large-scale agriculture, plaintively pondered, “Can anything be done at this late hour by way of reform? Can Mother Nature secure even a partial restitution of her manurial rights?”. While it is unclear whether he was aware of Marx’s views on social metabolism, Howard echoed the concerns of Liebig, Marx, and Engels. Noting that “the Chinese have maintained soil fertility on small holdings for forty centuries” and inspired by the traditional farming practices he witnessed around him in the colonies, Howard championed compost use over chemical fertilizers and pondered a possible transformation of the industrial model where waste would be cycled back to farmland. In this same tradition, mending ecological rift via the recycling of organic waste is central to urban agriculture across the globe. This concept of returning nutrients to agricultural soils in the form of urban waste is vital to overcoming the “antithesis between town and country” and is fundamental to a “restitutive” agriculture.
While few urban planners and mainstream development practitioners likely look towards Marx and Engels for inspiration, these obscure passages describing metabolic rift are particularly prescient, relevant not only to the development of sustainable agriculture, but also to urban waste management and the impending environmental crises of mega-urbanization. For millennia, farmers worldwide have maintained soil fertility on small plots through the application of organic waste; urban farmers are no exception. Adapting to the rising cost of chemical fertilizers and stagnant market prices for their produce, urban farmers in many parts of the South rely on intensive applications of manure from urban and peri-urban livestock production, ash, and composted garbage as a free or low-cost fertilizer and soil conditioner.

Peri-urban livestock producers, in addition to tapping rising urban demand for meat, dairy, and eggs, sell manure to urban market gardeners and to large-scale vegetable farms in the urban outskirts. To profit from compost’s fertilizing potential, farmers frequently cultivate the peripheries of garbage dumps or establish illicit contracts with garbage truck or cart drivers to obtain compost for their fields, paying them to simply dump a load of garbage in their fields while en route to central collection facilities. Advocates argue that redirecting the organic fraction of waste streams to agricultural production in urban areas and their hinterlands will help to boost soil fertility, as well as reduce soil and water pollution arising from heavy agrochemical use and large concentrations of waste deposited in landfills, dumps, and waterways. Yet to truly close the nutrient cycle and diminish the impacts of this ecological rift, human waste from urban consumers would need to be returned to the crops’ fields of origin. Every day, on average, every human produces 1 to 1.5 kg of nutrient-rich feces. Human waste, or “night soil”, is a common source of organic fertilizer in urban and peri-urban agriculture, though less commonly promoted due to cultural biases and to the higher public health risks associated with its application. Despite the social stigma, foul odor, and contamination risk of its use, there is stiff competition among farmers for access to night soil. In one study, two-thirds of farmers surveyed in two peri-urban zones in northern Ghana used human waste in their fields. In China, in particular, the application of human waste to farmland has been central to both urban waste management and agricultural production, but has been diminishing as rapid industrialization and urbanization transform agricultural production at the urban edge.
While such forms of restitutive soil fertility management in the Global South generally arise from creative exploitation of limited resources and adaptation to limited access to land, fertilizer, and credit, they have been celebrated by urban farming advocates worldwide as fundamentally sustainable practices. In North America and Europe, where the discourse of ecological sustainability generally informs urban agriculture practice, the age-old nutrient cycling practices used in the Global South are a cornerstone of urban agriculture advocacy. Practices such as compost application, planting of nitrogen-fixing cover crops, and incorporation of crop residues are presented as a sustainable way to close the nutrient cycle and reduce urban ecological footprints. Indeed, application of compost to urban soils can also provide other environmental services, such as reducing erosion, improving drainage and water-holding capacity, controlling pathogens, and immobilizing heavy metals. For commercial growers in peri-urban areas, a growing consumer demand for local and organic food often drives the transition to more ecologically sound farming practices. A growing number of municipalities collect green waste for composting. Much of the compost is sold at low cost or provided for free to local farmers, landscapers, and gardeners.

Infrastructure for the collection, composting, and distribution of compost seems to be the greatest hurdle preventing urban agriculture’s ability to minimize ecological rift in nutrient cycling. Nevertheless, development workers and planners are optimistic about its role and argue that with improved waste management technology, access to land, and policies favoring agricultural production in urban areas, urban agriculture can contribute significantly to feeding the world’s cities and mending ecological rift by restoring “Nature’s manurial rights”, rescaling production to a more local level, and relying less on petroleum-based inputs and other cross-scalar subsidies.

Understanding this social rift is not only essential to explaining urbanization, but to elucidating the linkages between urbanization and the agri-food system. The rise of large- and industrial-scale farming has entailed the consolidation of land and expansion of mechanization and other new farming technologies, both of which reduce the demand for agricultural labor. This was evident in Europe at the dawn of the capitalist era, in the US during the latter half of the 20th century, and more recently in China, where as many as 70 million farmers were dispossessed by expanding land markets in the last decade of the 20th century. In the Global South, a host of pressures—structural adjustment programs, land consolidation, drought, war, expansion of natural resource extraction and biofuels plantations—has dispossessed rural populations over the last several decades and fueled the growth of megacities and their slums across the globe. Indeed, as Marx predicted, “Part of the agricultural population is therefore constantly on the point of passing over into an urban or manufacturing proletariat”. Social rift is a central driver of urban agriculture in the Global South, where production of food is often a subsistence activity.
Between 70 and 75 percent of farmers in a survey of urban agriculture in Nairobi, for example, produced for household consumption, citing hunger and the need for food as their principal motivation. Similar rates have been found in other parts of Africa, with lower rates in Asia and Latin America. A recent FAO study revealed that over 30 percent of households in 11 of the 15 countries studied engage in some form of urban agriculture. The results also showed the urban poor are more likely to practice urban agriculture than wealthier city dwellers. Many must therefore improvise new means of survival, particularly in those cities where social services were gutted under structural adjustment during the 1980s and ’90s. Many embark on small-scale agriculture on marginal plots of land tucked in between housing, industry, and infrastructure, within the city itself or in its immediate hinterlands, in order to buffer themselves from the socio-economic upheaval of dispossession from their land and from the lack of formal employment opportunities in the city and its peripheral slums. The slashing of government jobs under structural adjustment in many parts of the Global South also drove members of the urban professional class to embark on urban agriculture projects to augment their diets, and for those selling on informal local markets, to supplement their income. According to Guyer, subsistence and small-scale urban food production, along with the informal food economy to which it contributes, often undermine the expansion of more formal markets.
At the same time, however, self-provisioning effectively subsidizes the cost of social reproduction within the larger capitalist economy; in short, wages can stay lower if workers are feeding themselves, ultimately facilitating the accumulation of capital. Urban agriculture therefore exists in tension with capital, arising as a strategic response to social rift on one level by exploiting underutilized land and buttressing against the expansion of commercial agri-food markets in poor areas, while subsidizing ongoing accumulation on a more macro-level.

The proportion of variance explained by the models was quantified by calculating the adjusted D² value.
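Adjusted D² is the deviance-based analogue of adjusted R² for generalized linear models, computed from the null and residual deviances. A minimal sketch of the calculation in Python (the study itself used R; the deviance values below are hypothetical, purely for illustration):

```python
def adjusted_d2(null_deviance, residual_deviance, n, p):
    """Deviance explained (D^2), adjusted for sample size n and
    number of fitted model parameters p (analogous to adjusted R^2)."""
    d2 = 1.0 - residual_deviance / null_deviance
    return 1.0 - ((n - 1) / (n - p)) * (1.0 - d2)

# Hypothetical GLM: null deviance 140, residual deviance 70,
# n = 26 sites, p = 3 fitted parameters
print(round(adjusted_d2(140.0, 70.0, 26, 3), 3))
```

The adjustment penalizes models with more parameters, so candidate GLMs of different sizes can be compared on an even footing.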

A few isolates from cultures inconsistent with Bot. characteristics were randomly selected from each site for amplification and sequencing to verify our morphotyping method. The internal transcribed spacer region 1 and elongation factor 1-alpha genes were amplified using the PCR primer pairs ITS1F/ITS4 and EF1-728F/EF1-986R, respectively, using methods modified from White et al. and Slippers et al. Successfully amplified samples were sequenced at the UC Berkeley Sequencing Facility.

The severity of Bot. infection was calculated as the isolation frequency per site. Data were square-root transformed when necessary to meet the assumptions of normality. Differences in mean Bot. infection severity between elevation categories were calculated using one-way ANOVA with Tukey’s HSD for post-hoc analysis in R Statistical Software. Correlations between actual elevation and Bot. infection severity were assessed using simple linear regression and ANOVA to test for significance. Generalized linear models were developed to identify patterns of dieback, with dieback severity values as the response variable and elevation, Bot. infection severity, and aspect as possible explanatory variables. If multiple models received substantial support, the best model was confirmed by calculating the relative importance of each term based on the sum of their Akaike weights.

This study provides definitive support for the hypothesis that shrub dieback during a recent drought and pathogen infection are strongly related in a wild shrubland setting. This is the first known quantitative support for the hypothesis that in A. glauca, an ecologically important shrub species in the study region, dieback is related to pathogen infection occurring along an elevational gradient.
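The Akaike-weight step of the model selection described above can be made concrete. For candidate models with AIC differences Δi relative to the best model, the weight of model i is wi = exp(−Δi/2) / Σj exp(−Δj/2), and the relative importance of a term is the sum of the weights of the models that contain it. A Python sketch with invented model terms and AIC values (the study performed this in R):

```python
import math

def akaike_weights(aics):
    """Convert a list of AIC scores into Akaike weights."""
    best = min(aics)
    rel = [math.exp(-(a - best) / 2.0) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical candidate models: (terms included, AIC score)
models = [({"elevation", "infection"}, 100.2),
          ({"elevation"}, 101.0),
          ({"infection"}, 103.5),
          ({"aspect", "infection"}, 102.1)]

weights = akaike_weights([aic for _, aic in models])

# Relative importance of a term = sum of weights of models containing it
importance = {}
for (terms, _), w in zip(models, weights):
    for t in terms:
        importance[t] = importance.get(t, 0.0) + w

print({t: round(v, 2) for t, v in sorted(importance.items())})
```

Terms that appear in most of the well-supported models accumulate importance near 1, while terms confined to poorly supported models score near 0.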

As expected, N. australe and B. dothidea were the two most frequently retrieved pathogens across all sites; however, N. australe, the introduced pathogen, had almost twice the abundance of B. dothidea. N. australe is driving the correlation between elevation and Bot. infection, as its frequency was greater at lower elevations compared to upper elevations, while B. dothidea abundance did not change significantly across elevations. The level of Bot. infection was confirmed to be a significant predictor of stand-level dieback severity. The data also confirm that stand dieback severity is generally greater at lower elevations, which in this region experience higher temperatures and lower annual rainfall than the higher elevations sampled. While the presence of Bot. species has been reported previously in Santa Barbara County, this study represents the first effort to understand the abundance and distribution of Bots occurring in natural shrublands, and the first wildland shrub survey of Bots across a climate gradient. The high frequency and wide distribution of Bots retrieved from our study sites support the hypothesis that Bot. species are widespread across a natural landscape, and likely contribute to the extensive dieback resulting from the recent drought. Bot. fungi were retrieved from nearly every site in this study. We could not determine Bot. presence at three sites due to contamination issues. The broad extent of the study area suggests that infection is widespread in the region, and likely extends beyond the range of our study. While N. australe and B. dothidea together made up the most frequently retrieved pathogens, our data show that N. australe has a larger distribution and occurs in greater abundance across the study region than B. dothidea. This trend was consistent across all elevations, but particularly at lower elevations. One possible explanation for this is that N. australe, being a recently introduced pathogen, spreads more rapidly as an exotic species in A. glauca compared to B. dothidea, which has been established in California for over 150 years.

This hypothesis is consistent with previous studies that have shown variations in Bot. species abundance and virulence in Myrtaceous hosts occurring in native versus introduced ranges. However, it is difficult to evaluate the incidence of B. dothidea and N. australe in the present study in relation to historical documentation, since many species in the Bot. complex have, until recently, been mischaracterized. Only with the recent development of molecular tools have researchers begun to accurately trace the phylogenetic and geographic origins of Bot. species. Such studies are beginning to elucidate the complex existence of Bot. fungi as both endophytes and pathogens around the world, and much more research is needed to understand their pathogenicity in various hosts under different conditions. Nevertheless, it remains clear from our study that Bot. species, particularly N. australe, are both abundant and widely distributed in this region, and are important pathogens in A. glauca shrubs.

Because Bot. taxa were the most frequently retrieved pathogens and were significantly correlated with dieback, we believe that they drive A. glauca dieback. Further, stand dieback severity increased significantly with Bot. infection. This is not to say that other pathogens do not also contribute to disease symptoms, but we found no evidence of any other pathogens occurring in such high incidence as Bot. species. While Brooks and Ferrin identified B. dothidea as a likely contributor to disease and dieback in dozens of native chaparral species during an earlier drought event in southern California, and Swiecki and Bernhardt found B. dothidea in association with a dieback event in stands of Arctostaphylos myrtifolia in northern California, our study yields the most extensive results of Bot. infection and related dieback in a chaparral shrub species across a landscape.

Further, our study resolves species identity within the Bot. clade and highlights the role of the recently introduced pathogen, N. australe.

A significant finding in this study was the relationship of Bot. infection and dieback with elevation. Bot. abundance and dieback were both greatest at lower elevations, driven mostly by the high frequency of N. australe retrieved at these sites. This represents the first quantitative evidence that A. glauca vulnerability to fungal infection is influenced by stress levels along an elevation gradient. A similar pattern was observed in northern California by Swiecki and Bernhardt, who suggested that dieback in Ione manzanita infected with B. dothidea was greater in drier sites compared to more mesic ones, although no comparison of infection rates between sites was conducted in their study. The elevation gradient in our study was used as a proxy for stress levels because annual precipitation decreases with decreasing elevation within our study region. Higher temperatures, which are associated with lower elevations, are also known to play an important role in drought-related mortality, as water loss from evapotranspiration is increased. Furthermore, unpublished data for dry-season predawn xylem pressure potentials on a subset of sites along the same elevational gradient revealed more negative water potentials in A. glauca at lower elevations compared to upper elevations as spring and summer drought sets in. Thus, there is evidence that shrubs at low elevations indeed experienced the greatest water stress during the 2011-2018 drought, which predisposed them to higher levels of Bot. infection and enhanced dieback compared to upper elevation sites. More in-depth studies on the microbial communities and fungal loads of healthy and diseased shrubs throughout the region would help elucidate such trends.

Another possibility for the higher incidence of Bot. infection at lower elevations is that the lower ranges of A. glauca populations in Santa Barbara are often located adjacent or in close proximity to agricultural orchards, ranches, and urban settings, which are common sources of plant pathogens, including Bots. Eucalyptus, avocado, and grapevines, which are abundant in these areas, are particularly well-known Bot. hosts and potential facilitators of Bot. introduction. Therefore, sources of inoculum from nearby populations of agricultural and horticultural hosts could be responsible for continual transmission of Bots in wildland A. glauca populations, and would likely result in greater rates of infection at lower elevations. Furthermore, many of the lower sites in the survey were located near roads and/or trails, which are often subjected to additional stress from human activity like pruning and trail clearing, activities that are known to spread and promote infection by Bot. pathogens. While we avoided sites that showed signs of such activities in our survey, we cannot rule out the potential contributions of proximity to human encroachment to the overall higher rates of Bot. infection across the lower elevation zone. It is worth noting that while our study revealed a trend of increased dieback at lower elevations, some upper elevation sites also exhibited high levels of dieback, and Bot. fungi were retrieved from many of these sites. Upper elevations also experienced significant stress during the 2011-2018 drought, and water-related microsite variables outside the scope of this study, such as slope, solar incidence, soil composition, and summer fog patterns, likely contributed to increased stress and subsequent dieback. Additionally, N. luteum, N. parvum, and D. sarmentorum were isolated primarily from upper sites.
Host plants in these sites may serve as potential reservoirs for disease because the milder climate conditions promote greater host survival and thus pathogen persistence as endophytes. This serves as an important reminder that continued global change-type drought may eventually jeopardize susceptible species populations even at the upper boundary of their range.

Our results are consistent with well-known theoretical models describing the relationship between environmental stress and biotic infection, which generally implicate extreme drought stress as a mechanism for plant predisposition to disease.

These frameworks illustrate dynamic interactions between environmental stress, plant hydraulic functioning and carbon balance, and biotic attack, and a growing body of research has focused on understanding the roles of these factors in driving plant mortality, especially during extreme drought. While the data collected in this study do not directly address the specific mechanisms leading to Bot. infection and dieback in A. glauca, our results can be discussed in the context of how life histories and physiological adaptations elicit differential responses to drought in woody plants, particularly in chaparral shrubs. For example, shallow-rooted, obligate seeder shrubs like A. glauca have been shown to be more susceptible to drought-induced mortality during acute, high-intensity drought than deep-rooted, resprouter shrubs. This supports our observations of pronounced A. glauca decline during an historic California drought compared to nearby resprouter species like chamise and laurel sumac. Additionally, physiological mechanisms related to drought tolerance may further explain predisposition to disease in A. glauca. For example, high resistance to cavitation is a common trait associated with more dehydration-tolerant species like A. glauca that maintain hydraulic conductivity during seasonal drought. While cavitation resistance is thought to assist in the continuation of photosynthetic activity even at very low seasonal water potentials, it has also been associated with greater mortality rates during high-intensity drought in a variety of woody plant systems, including Mediterranean shrublands, temperate deciduous forests, and eucalyptus forests. High resistance to cavitation requires heavy carbon investment for stronger and denser stem xylem tissue, which can result in limited carbon for investment in defense against pathogens like Bot. fungi.
Furthermore, colonization by pathogens during drought may further disrupt the carbon balance of plants as it influences defense and repair, creating a feedback loop that can drive plants toward a mortality tipping point. Thus, while dehydration tolerance may be important during typical seasonal drought conditions, it may be a much riskier strategy and lead to greater mortality during global change-type drought, especially in the presence of pathogens. These frameworks are consistent with our findings and provide further evidence that A. glauca experiencing acute levels of drought stress are highly predisposed to Bot. infection, particularly at lower elevations that experience heightened levels of water stress. The results of this study provide strong evidence that A. glauca in the study region are vulnerable to Bot. disease and dieback, and possibly eventual mortality, related to acute drought. This is consistent with Venturas et al., who found that acute drought in 2014 led to reduced abundance of A. glauca and other obligate seeder chaparral species and even type-conversion in the Santa Monica Mountains of southern California, USA. A review by Jacobsen and Pratt found similar patterns among shallow-rooted, obligate seeding shrubs. Clearly, there is strong support that A. glauca populations are at risk for future dieback, and thus should be the focus of more intense studies aimed at understanding the possible mechanisms driving such events. Manzanita are important members of the chaparral ecosystem, and large-scale dieback and mortality of this species could reduce resource availability for wildlife, as well as increase the risk of more intense fires in an ecosystem already associated with increasingly frequent fire activity.

STEC has been identified in indoor-raised swine herds, but comparison studies are lacking

Prevalence of STEC in domestic pigs reared outdoors on diversified small-scale farms in Chapter 1 was lower than in this current study, but had a similar sample size. Samples were collected in 2018 for Chapter 3 and in 2015-16 for Chapter 1. Differences in STEC prevalence between 2015-2016 and 2018 may be due to different laboratory processing methods or environmental factors. Both study periods were drought years in California; however, 2017 was a very wet year, which may have affected conditions in 2018. Three farms participated in both studies, and all three saw increases in STEC prevalence between 2015-16 and 2018: Farm 1 had a 5.13% STEC prevalence in 2015-16 compared to 20.00% in 2018, Farm 2 increased from 0% to 83.33%, and Farm 3 from 11.11% to 66.67%. However, the smaller number of samples and animals for Farms 2 and 3 accounts for some of this seemingly large increase between studies. A 2018 study conducted in Georgia reported 62.5% STEC in organic “free ranging” domestic swine, but with a small sample size of eight. Differences between STEC prevalence estimates may be due to different study designs, laboratory tests, environmental factors, or farm management practices, such as the density of pigs raised in each paddock. The scarcity of data regarding STEC in swine raised outdoors indicates a need for future studies. Studies measuring the prevalence of STEC in feral pig populations in the US are infrequent, unlike European studies. A 2006 US study sampled swine necropsy and fecal samples and reported a 0-23.4% prevalence of E. coli O157:H7 in feral pigs. A 2018 study conducted in Georgia detected an overall STEC prevalence of 19.5% in feral swine and identified a higher prevalence of STEC in feral pigs sampled in agricultural counties.
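The caveat above about small denominators is worth making explicit: prevalence here is simply positives over samples, so with only a handful of animals per farm, one additional positive swings the estimate dramatically. A short sketch (the counts are invented for illustration, not the study's raw data):

```python
def prevalence(positives, n):
    """Point prevalence as a percentage."""
    return 100.0 * positives / n

# Hypothetical small farm: with 6 samples, one extra positive
# moves the estimate by ~16.7 percentage points
print(round(prevalence(4, 6), 2))   # 66.67
print(round(prevalence(5, 6), 2))   # 83.33

# A larger denominator damps the same absolute change
print(round(prevalence(40, 60), 2))  # 66.67
print(round(prevalence(41, 60), 2))  # 68.33
```

This is why the apparent jump from 0% to 83.33% on Farm 2 should be read against its sample count rather than at face value.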

Feral pigs are attracted to agricultural areas because of resource availability, and their direct or indirect contact with livestock may create a risk of food-borne pathogen transmission. The risk of pathogen sharing between feral pigs and domestic swine has been studied, but only a small subset of these studies investigated the risks to outdoor-based pigs, even though there have been multiple cases of feral pigs transmitting pathogens, such as Brucella suis, to domestic swine raised outdoors. Wyckoff et al. concluded that increasing populations of feral swine are a risk for the reintroduction of eradicated diseases as well as emerging TBD, especially for backyard operations that allow domestic swine outdoor access, because male feral pigs are attracted to female pens. In a Corsica study of traditional pig farms that raise their animals outdoors, the authors determined that interactions between wild boars and domestic swine were a significant risk factor for the spread of diseases between the two groups. Our study results indicated that 45.45% of farm participants had seen evidence of feral pig presence on their farms. Schembri et al. conducted a questionnaire of backyard and small-scale swine producers in Australia and found that a third of producers, both indoor and outdoor, had seen feral pigs on their farms. Understanding the prevalence of STEC in feral pigs, combined with the aforementioned study results indicating that these animals reside near resource-rich farms, highlights the need for further studies to address the risk of disease transmission associated with feral pig presence near operations that raise swine outdoors. Serotypes identified in this study that can cause severe human illness included E. coli O157:H7, O26:H11, and O103:H11. The serotypes O26:H11 and O103:H11 contained only the stx1 gene, not stx2.

The only O103:H11 serotype contained both eae and ehxA, and all the O26:H11 isolates contained the eae gene, with five O26:H11 isolates also containing the ehxA gene. A study by Cha et al. also found O26 with stx1 and eae in commercial swine raised indoors in Ohio, US. A study conducted in finishing swine measured that 6.9% of positive samples were O26 and 2.4% contained O103. In 2017, the US Food Safety and Inspection Service conducted a Raw Pork Baseline Study to determine the prevalence of STEC in various types of pork products at slaughterhouses and processing facilities and measured a prevalence of 0.2% STEC, mostly in comminuted pork products. However, this study only looked for the top seven STEC serogroups, even though 309 other samples were positive for key virulence factors like the stx and eae genes. Additionally, on-farm or slaughterhouse swine samples may reflect different prevalence ranges than meat products. Considering most studies identified E. coli O157:H7 and non-O157 STEC serotypes that cause human illness in swine samples, pigs should be considered an important reservoir of STEC, and mitigation strategies established to prevent the spread of food-borne pathogens from farm to consumer. Significant risk factors associated with the presence of STEC in fecal samples collected during this study included distance from the nearest surface water and whether domestic swine had access to wild areas, such as forest or wetlands. These variables were measured as a proxy for suitable feral pig habitat that borders farms. Feral pigs are reservoirs of STEC, and surface water and/or wild areas provide habitat for these animals to exist near OPO. For instance, a study by Rutten et al. predicted suitable habitat for wild boar in Belgium and identified forest as a significant predictor.
Additionally, Wu et al. reported distance from a forest to be a significant risk factor for contact with wild boars in Switzerland, especially for those domestic pigs less than 500 meters from a forest. A 2017 study reported that distance to water affects feral pig movement mostly in states where water is scarce versus states where water is more prevalent.

Additionally, feral pigs may contaminate these habitat areas, which may lead to indirect STEC transmission to swine raised outdoors, as studies have shown that STEC can be transmitted through contaminated surface water sources and the environment. A 2014 study conducted in the Central Coast of California detected E. coli O157:H7 and non-O157 in many water sources. These results indicate a need to separate domestic swine raised outdoors from wild areas to avoid direct or indirect transmission of pathogens from feral pigs. In this current study, only the juvenile age group, which included weaners, finishing, and market swine, was significant when compared to adults. Many US and international studies have tested similar-aged pigs at slaughterhouses and reported a wide range of STEC prevalence. A study by Tseng et al. sampled finishing pigs, which are included in our juvenile category, and determined that the highest prevalence among three cohorts occurred between 14-18 weeks of age. At 24 weeks, STEC prevalence in all cohorts had dropped and ranged from 0-6.7% in the three groups. This same study mentions that the finishing age group is most susceptible to STEC oedema, which is caused by E. coli strains carrying the stx2 gene and may be associated with detecting STEC in this juvenile age category. A longitudinal study conducted by Cha et al. in commercial indoor domestic swine found that 68.3% of finishing pigs shed STEC at least once during the study period, which showcases the intermittent nature of STEC shedding in swine. The high prevalence identified in that study might be due to repeated sampling over a longer period of time than in our study. Additionally, our study sampled all ages of swine only once, which might indicate an under-reporting of STEC in our results. The effect of age on STEC shedding is more frequently reported in cattle than in swine.
For instance, a Raies et al. study sampled beef cattle and reported that STEC prevalence was highest during the first six months of life and then decreased toward adulthood. Another study by Cho et al. also detected that calves over one month old were two times more likely to shed STEC than those younger than one month, except for pre-weaned calves. If age is a risk factor for STEC shedding in swine, then targeting key age groups for STEC mitigation strategies to reduce the overall bacterial load in slaughtered swine may reduce the risk of these pathogens in the food supply. Limitations of this study included a small sample size for the total number of farm participants as well as the final number of feral pig samples collected, as we could only gather feral pig feces in three of the six targeted counties. The post-hoc power calculation results were 0.12 for feral pigs and 0.69 for OPO, which indicated that the prevalence estimates are inexact. Moreover, many of the significant variables in the final logistic regression model had wide confidence intervals, which indicates less precise estimates.
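The imprecision noted here follows directly from the small denominators: a prevalence estimate from a handful of samples carries a very wide confidence interval. A stdlib sketch of the Wilson score interval, a standard small-sample choice for binomial proportions (the counts below are illustrative, not the study's data):

```python
import math

def wilson_ci(positives, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = positives / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

# The same 20% point prevalence is far less precise at n=10 than at n=200
for n in (10, 200):
    lo, hi = wilson_ci(int(0.2 * n), n)
    print(n, round(lo, 3), "-", round(hi, 3))
```

At n=10 the interval spans most of the plausible range, which is the same qualitative picture as the wide intervals reported for the logistic model terms.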

Since this was a cross-sectional study conducted during only two seasons and only one season per farm, we may have missed STEC-positive farms due to seasonality of shedding or other factors that affect STEC detection in feces, including the intermittent nature of shedding in pigs. Our study participants volunteered, and therefore we could not conduct random sampling; our study results contain selection bias and are not generalizable to other OPO in California or the US. Strengths of our study included measuring STEC in both feral pigs and outdoor-reared pigs in California. This study is an innovative approach toward evaluating areas of contact between feral and domestic pigs reared outdoors, by targeting STEC surveillance based on a risk map built in Chapter 2. Moreover, assessing STEC prevalence in feral pigs near OPO serves as a proxy for the risk of exposure and transmission of other zoonotic pathogens to domestic pigs reared outdoors. Future research studies could enhance our current study results by comparing STEC strains between the two swine groups using WGS bioinformatic analyses. Similarity of STEC isolates can be used as a biological indicator to track possible transmission of diseases between feral and outdoor-raised swine, as noted in a few recent studies.

The three scientific research projects in this dissertation added important epidemiological information to the body of knowledge regarding STEC detected on DSSF in California and the risk of potential disease transmission from suitable feral pig habitat located near domestic pigs raised outdoors. Although consumers perceive small-scale farms and outdoor-raised meat as safer and more natural, these studies together demonstrate that even livestock raised outdoors on small-scale farms are reservoirs for STEC, including serogroups that cause severe illness in humans, such as O157:H7, O26, O103, and O111.
Interestingly, the Chapter 1 and 3 models indicated that access to wild areas, such as wetlands or forest, was a key risk factor for the presence of STEC in livestock raised outdoors. Chapter 2 results revealed that nearly 50% of domestic pigs raised outdoors are located near suitable feral pig habitat, and this overlap of feral and domestic swine could be a risk factor for potential emerging or re-emerging disease transmission. Also, STEC was detected in domestic swine in both Chapters 1 and 3, even though pigs are currently considered a low-risk species for STEC outbreaks by the US FSIS. These study results indicate the need for further studies on DSSF to ascertain risk factors for food-borne pathogens. The objective of Chapter 1 entailed conducting an overall assessment of prevalence and risk factors of STEC on diversified small-scale farms in California, while also describing the unique characteristics of DSSF. Temperature was a key risk factor identified in the final multilevel logistic regression model. Many food-borne pathogen studies indicate season as a risk factor for STEC; however, seasons vary across the US. For instance, California summers are characterized by dry heat, whereas summers in most states are humid and hot. Measuring and monitoring temperature during field sampling may be a more precise indicator of risk than season and allow for more accurate comparisons between studies. Additionally, as weather patterns shift due to climate change, assessing environmental factors as a risk factor for the presence of STEC on farms will be useful for stakeholders to understand how weather affects the presence of food-borne pathogens in livestock raised on DSSF. Studies elucidating whether ambient temperature affects survival of STEC in a farm environment or whether temperature affects the host animal harboring STEC will be useful, especially as extreme climate events become more common.
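Risk factors such as wild-area access are typically reported from logistic models as odds ratios with confidence intervals. The underlying 2×2 calculation, with a Woolf (log-scale) interval, can be sketched with the standard library alone; the counts below are invented for illustration and are not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% Woolf CI for a 2x2 table:
    a = exposed & positive,   b = exposed & negative,
    c = unexposed & positive, d = unexposed & negative."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical: pigs with access to wild areas (exposed) vs without
or_, lo, hi = odds_ratio_ci(18, 22, 7, 33)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

Small cell counts inflate the standard error term, which is exactly why the study's wide intervals signal imprecise, though still directionally informative, estimates.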