
Theory and experience suggest that the most successful pollution prevention tools are performance-based

In the U.S. and Canada, point source dischargers must obtain permits to release emissions, whereas non-point source dischargers largely remain uninhibited by federal mandates. In water quality trading (WQT) programs, point sources trade with other point sources to avoid costly discharge reductions at their industrial facilities, and only a handful of non-point sources are involved on a voluntary basis. On the limited occasions that the agricultural industry does engage in trading, farm non-point sources almost always assume the role of “sellers” in the program, rather than “buyers.” Under such circumstances, point source dischargers pay non-point sources to comply with water quality standards, creating a profit-making opportunity for agricultural pollution. This lopsided relationship between point and non-point sources highlights another related problem: the absence of a fully capped trading system. Though trading schemes show promise in transitioning the regulatory framework from individual discharge limits to river basin management based on group controls, for the system to realize its full potential, all dischargers—point and non-point—must participate.

A further complication, in both partially- and fully-capped WQT systems, is accounting for differences in emission loads between point and non-point sources. WQT programs use a trading ratio to calculate how many units of estimated non-point source loadings should be traded for a unit of point source loadings. Because of the uncertainty of non-point source loadings, trading ratios are almost always set at 2:1 or greater to create a margin of safety. In this scenario, point sources must purchase two units of estimated non-point reductions for every unit of excess emissions. Interestingly, a study on trading ratios found that political acceptability, rather than scientific information, determined ratio calculations. Despite the challenges, several notable successes have demonstrated that enforced group caps, emission allocations, and water quality standards can be met.
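To make the margin-of-safety arithmetic concrete, here is a minimal sketch of the trading-ratio calculation described above; the function and its inputs are illustrative, not drawn from any specific WQT program.

```python
def credits_required(excess_load: float, trading_ratio: float = 2.0) -> float:
    """Estimated non-point source reduction credits a point source must
    purchase to offset each unit of discharge above its allocation."""
    return excess_load * trading_ratio

# A point source 10 units over its limit, at the common 2:1 ratio,
# must buy 20 units of estimated non-point source reductions.
print(credits_required(10.0))  # 20.0
```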

For example, in 1995, farmers from the San Joaquin Valley, California, implemented a tradable discharge permit system to enforce a regional cap on selenium discharges. The selenium program set a schedule of monthly and annual load limits, and imposed a penalty on violations of those limits. In Canada’s Ontario basin, a phosphorus trading program was established in which point sources purchase agricultural offsets rather than update their facilities. A third party, South Nation Conservation, acts as a facilitator, collecting funds from point sources and financing phosphorus-reducing agricultural projects. It is estimated that the program has prevented 11,843 kg of phosphorus from reaching waterways. Numerous other pilot trading projects show promise, but need a serious overhaul if they are to realize their full potential. One prominent example worth mentioning is the U.S.’s Chesapeake Bay Nutrient Trading program. In response to President Obama’s executive order to clean up the Chesapeake Bay, the largest estuary in North America, the six states contributing pollution to the Bay are in the national spotlight as they figure out how to achieve pollutant allocations. Currently, their plans to meet water quality requirements are falling short. Economic scholars contend that a nutrient trading plan could offer the most cost-effective means for complying with the looming TMDL. But uncertainty about agricultural sources’ willingness to participate, uncertainty about the most appropriate trading ratio, and high transaction costs remain issues.

The most traditional form of command-and-control regulation is the performance standard. Though often presented as an alternative to market-based approaches, performance standards can complement a tax or emissions-trading system, and can also be used alongside positive incentive schemes. In an incentive approach, if pollution exceeds a standard then a financial penalty or charge might be triggered, whereas if a farmer is well within compliance, the farmer might receive a positive payoff for their efforts. Standards can also be used in trading through pollution allowances with enforceable requirements.

And in a mandate scenario, standards are compulsory, and may or may not be accompanied by other motivating devices. Performance standards have successfully reduced point source water pollution (e.g., under the E.U.’s IPPC Directive and the U.S.’s NPDES program) and pollution of other media. Unfortunately, the same suite of challenges—the use of proxies, the costs of monitoring and modeling, and the uncertainty of environmental outcomes—faces performance standards within the context of non-point source abatement. These perceived obstacles have largely precluded the use of performance tools for agricultural NPS control. However, a growing body of literature expounds the benefits of using performance approaches for this industrial sector. Performance measures are used to encourage Best Management Practices (BMPs). Using models to predict the level of BMP performance can provide powerful decision-making data to farmers, helping them make appropriate management decisions. Performance modeling is most effective when conducted at the field scale. For example, the Performance-Based Environmental Policies for Agriculture initiative found that the implementation of BMPs, such as changing row directions or installing buffer strips, reduces the risk of pollution to varying degrees depending on several on-farm factors. Allowing farmers to exercise site-specific knowledge in an individualized context highlights an important, laudable feature of performance-based approaches: flexibility. Some research suggests that practice-based tools, ones that mandate or incentivize the installation of certain BMPs, are not as cost-effective as their performance-based counterparts. This is largely because performance-based instruments provide the flexibility to choose the practices that will achieve water quality improvements at the lowest cost.

In the case of agricultural water pollution, farmers are the predominant actors targeted for compliance. This is logical, since farmers’ management practices influence the amount of pollution that reaches nearby water bodies; however, it is worth noting that other actors involved in the pollution process could be targeted for regulation.

For example, the control of pesticides has been managed by regulating the chemical manufacturer, imposing mandates or taxes on chemicals sold on the market. This type of tool could be highly effective in reducing the amount of pesticides or fertilizers produced, sold, bought, applied, and discharged into water bodies, creating a ripple effect through the whole production stream. Targeting actors further “upstream” is illustrative of what Driesen and Sinden call the “dirty input limit,” or “DIL.” Manufacturing companies are only one of several points along the production stream where the DIL approach could be effective; alternatively, pollutants could be controlled at the point of application. As suggested by the authors, the DIL approach is useful beyond the tool choice framework in that it provokes a new way of thinking about environmental regulation.

Among the least invasive, but most important, instruments for successful NPS management, capacity tools provide information and/or other resources to help farmers make decisions to achieve societal and environmental goals. Capacity tools are typically associated with voluntary initiatives rather than mandates. Because it can be difficult for farmers to detect the water quality impacts of their practices visually, learning and capacity tools become an invaluable means of conveying information to farmers. Farmers’ perceptions of the water quality problem and of their role in contributing to pollution are among the most influential factors in changing farming management practices. In California, the Resource Conservation Districts, University of California Extension, and the University of California’s Division of Agriculture and Natural Resources are examples of local government agencies providing capacity building services that include knowledge, skills, training, and information in order to change on-farm behavior.

In summary, each policy tool possesses strengths and weaknesses, which need to be taken into consideration when developing more effective ways to control agricultural pollution. An integrated approach, one that utilizes a diversity of policy instruments to address water quality issues in agriculture, is required. River basin management plans, or the “watershed approach” as it is often referred to in the U.S., can more appropriately tailor the choice of policy tools to local conditions. Authority has been granted to achieve water quality objectives at the regional jurisdictional level. The success of these programs will largely depend on the wisdom and will of those regional governmental leaders, as discussed below.

What are the major similarities and distinctions between the different approaches to agricultural non-point source pollution regulation available in the U.S. and Europe? And which are most effective? This chapter examined the defining characteristics and application of six policy tools, each of which has been proposed for agricultural pollution abatement. As noted in the introduction, the task of comparing tools is complicated by the multiple facets and dimensions embedded in each tool. While research suggests that a mix of policy tools will outperform any one instrument, clear strengths, weaknesses, and unique traits distinguish tools from one another and should be taken into consideration when regulators choose means to meet environmental goals. Table 2-1 lists several categories by which to compare a select group of policy tools.
As the table illustrates, a number of key relationships are particularly important. Emphasis is placed on the difference between tools tied to emissions and those not tied to emissions. The clear benefit of tools tied to emissions is their ability to track and measure environmental improvements. However, therein lies these tools’ biggest weakness: reliance on proxies to predict the extent of environmental improvements.

The information burdens needed to construct models that adequately predict the impact of a farm’s discharges are so great that many practitioners and scholars have shrugged off the task as impossible. Encouragingly, a growing body of literature and scholarly discussion shows prospects for improved computer simulation efforts. Until more robust models are designed with improved information, policymakers will continue to rely on the second category of tools—those not tied to emissions. Tools untethered to specific pollution targets work by encouraging water quality improvements through incentives, contracts, and/or information. These tools tend to be more politically favorable, but less effective by themselves, save one—the dirty input limit. While capacity tools can provide important information to farmers and best management practices may improve water quality, the DIL can prevent pollutants from ever reaching rivers and lakes, or even farms. With the U.S. pesticide and storm water regulatory programs as models, regulating inputs has the potential to achieve more than regulating emissions. But the DIL is not without obstacles, including heavy reliance on scarce information to set appropriate limitations and the political will needed to restrict chemical or fertilizer production and/or use.

Non-point source pollution, or pollution that comes from many diffuse sources, continues to contaminate California’s waters. Agricultural non-point source pollution is the primary source of pollution in the state: agriculture has impaired approximately 9,493 miles of streams and rivers and 513,130 acres of lakes on the 303(d) list of waterbodies statewide. The 303(d) list refers to the section of the Clean Water Act mandating states and regions to review and report waterbodies and pollutants that exceed protective water quality standards. Agricultural pollution in California’s Central Coast has detrimentally affected aquatic life, including endemic fish populations and sea otters, the health of streams, and human sources of drinking water. Despite the growing evidence of agriculture’s considerable contribution to water pollution, the agricultural industry has, in effect, been exempt from paying for its pollution, and more importantly, has failed to meet water quality standards. How best to manage and regulate non-point source agricultural water pollution remains a primary concern for policymakers and agricultural operators alike. This case study focuses on the Conditional Agricultural Waiver in California’s Central Coast, the primary water pollution control policy in one of the highest-valued agricultural areas in the U.S. The Central Coast Regional Water Quality Control Board is under increasing pressure to improve water quality within its jurisdiction, especially with the added onus from a 2015 Superior Court ruling that directed the Regional Board to implement more stringent control measures for agricultural water pollution. Pressure on the Regional Board is exacerbated by regulatory budget constraints, interest groups, and unanticipated events. Given these pressures, choosing appropriate criteria by which to evaluate the success of California’s primary agricultural water quality policies is complicated, but of critical importance. This policy analysis explores the complex process of negotiations, agendas, and conditions at the heart of policy-making, highlighting areas where the 2004 and 2012 Ag Waivers have succeeded in achieving their goals, as well as where they have fallen short. The analysis is divided into two parts.

A U.S. Marshal escorting the new legal owners attempted to evict the tenants of the Mussel Slough ranch

Moreover, what is interesting is how literary form follows, informs, or accompanies these forms of Social Darwinism. In the U.S., literary naturalism accompanies biological racial theory, and food secures a sense of nature that spans the range from agricultural production to physical consumption. In China, it is popular songs and literary representations of discussion, of the liberal exchange of ideas, that attempt to call the new national community into being. Here artists demystify the commodification of food in order to map unequal trade relations and advocate for independence based on food sovereignty.

Explaining why he wrote The Octopus, Frank Norris said that he believed the settling of the American West had been of such world-historical import that it deserved to be told in a great work of literature. His view of the West was heavily influenced by Frederick Jackson Turner’s famous thesis, in “The Significance of the Frontier in American History,” that the frontier had been the decisive factor in shaping a distinctively American culture, and moreover that this period was now at an end. When the 1890 census found that nearly all “frontier” land had been occupied, this meant that the first chapter of American history was over, while the next chapter remained unclear. Thus Norris wanted to celebrate the frontier, but also to memorialize it, to monumentalize it in a loftier literary form than the popular western genre fiction. Having studied the form of the medieval romance at the University of California, he dreamed of seeing a Song of Roland for modern America, a song of the West. He planned a trilogy of novels, or, following his interest in medieval literature, what we might call a song cycle. The first novel, The Octopus, was based on a historical event, known as the Mussel Slough Incident, a deadly 1880 land dispute between the Southern Pacific Railroad and wheat-growing ranchers in Tulare County, in California’s Central Valley.

Ostensibly weighing the conflicting interests of the ranchers and the railroad, The Octopus is ultimately more interested in placing the Mussel Slough incident within the larger geographical scale of the emergent global wheat trade and the larger temporal scale of the closing of the frontier. Following The Octopus’s description of wheat production on newly-industrialized California farms, the second book, The Pit, traces the wheat’s distribution through commodities markets in Chicago, and the never-completed third book was to cover consumption “in a famine-stricken Europe or Asia,” as he wrote in a synopsis. The song of the West turns out to be the story of the expanding global market for American agricultural commodities. Norris’s epic scope did not prevent him from conducting detailed historical research into the Mussel Slough Incident itself. The dispute centered on the price at which the Southern Pacific would sell the land abutting the railroad, which had been granted them by the federal government. The railroad circulated advertisements soliciting the public to lease the land from them temporarily, apparently with the option to purchase it for between $2.50 and $5 per acre. The ranchers who leased these large plots of land pooled their capital to build an irrigation system that transformed the arid region into productive farmland for wheat and hops. Once the crops were a success, however, the railroad declared that the land would be sold at market value, between $17 and $40 per acre, and that the tenants would have to either pay or move out. In response, the ranchers organized a Settlers’ Land League and armed themselves to defend their claims. On May 11, 1880, a U.S. Marshal escorting the new legal owners attempted to evict the tenants of the Mussel Slough ranch. In the shoot-out that followed, eight men were killed, most of them ranchers shot by one of the new owners. While many readers at the time of the book’s publication praised its attack on the railroad monopoly and support for the common farmer, later generations have emphasized that Norris portrays the ranchers as capitalists who care more about windfall profits than about hard work or the land, the traditional virtues of Jeffersonian agrarianism.

Indeed, the author emphasizes the ploy of the Settlers’ Land League to influence the election of a state commission that would favor their side in the legal case—when this corruption is exposed near the end of the novel, the ranchers lose their popular support. Norris maintains a distance from the ranchers by telling much of the action from the perspective of an outsider, Presley, a San Francisco poet visiting his friend, Buck Annixter, one of the ranchers who will eventually be killed. Presley is hoping, like Jack London and Norris himself, to write the first great literary work expressing the essence of the American West. There is some disagreement among scholars over how the land is portrayed in the novel, and here it is helpful to note that Presley tries out multiple writing styles as his view of the area changes. In the first chapter Presley witnesses the beauty of the natural environment, and goes on to record it in a pastoral celebration of beauty and harmony. In a strange ending to the chapter, Presley repeats word-for-word in his writing long passages that had appeared as narrative description ten pages earlier, and in this way Norris self-referentially emphasizes both the centrality of Presley’s perspective and the fact that the novel itself is a work of descriptive writing. In the next chapter, however, the pastoral landscape is replaced by images of the massive new farm equipment used in planting the wheat, which Norris depicts in graphic terms as the sexual union between the machine and the earth. At this point, Presley is forced to confront the land dispute and the competing economic interests that are driving the industrialization of agricultural commodities, and attempts to incorporate these into an enlarged view of the West. The industrialist Cedarquist assures Presley and the ranchers that the continued expansion of American agriculture depends on reaching the inexhaustible demand of the China market. A famine in India provides the opportunity for him to arrange a humanitarian shipment of grain, which serves as a test run ahead of increasing transpacific exports.

After the victory of the railroad, Presley tries his hand at politically committed poetry, publishing a successful georgic poem titled “The Toilers.” Local attempts at political mobilization fall apart, however, after the Settlers’ League’s conspiratorial plot to influence the commission is exposed. Resigned to the power of industrial progress, Presley decides to accompany Cedarquist’s famine relief voyage. The novel ends with him looking out to sea, as he decides that his friends’ deaths do not mean much in the grand scheme of things. All is for the best in the best of all possible worlds, for toilers may come and go, “But the WHEAT remained.” Because it is ultimately the story of large-scale natural and historical forces that dwarf the characters’ moral choices, The Octopus is classed as a work of literary naturalism. Florian Freitag points out that while all farm novels must feature natural forces to some extent, it is the total failure of the characters’ attempts to influence the social world around them that gives The Octopus a specifically naturalist form as compared to most American farm novels. At the same time, I believe it is also worth keeping in mind Norris’s own preferred formal terms from ancient and medieval poetry rather than modern prose: the epic and the romance. It is an epic because it is intended to tell the heroic story of a whole people. And yet it is a “naturalist epic” in that, however improbably, humans ultimately give way to the wheat as the true hero of the West, uncontainable as both a commodity and a natural force. All previous work on The Octopus addresses political economy in some way, and just as Norris intended to write one novel each on the production, circulation, and consumption of wheat, commentators have tended to focus on one of these moments in the economic sphere as it was organized at the turn of the twentieth century. Environmental critics from Leo Marx to William Conlogue have focused on the rural scene of production and shifting generic conventions for representing it.

Critics primarily interested in naturalist form, such as Walter Benn Michaels and Mark Seltzer, have focused on circulation during the late-nineteenth-century financialization of the economy. Finally, critics focused on race and imperialism, such as John Eperjesi and Colleen Lye, have focused on the export to China and the Chinese cooks on the ranch. What reappears across much of this criticism focused on the new economy, however, is a tendency to downplay the land dispute at the center of the plot, since the ranchers are themselves capitalists engaged in industrial agriculture. The land dispute plot, however, is crucial to Norris’s goal of writing the true history of the West, especially the transition from the frontier period into a new age. By organizing the first book of the “epic of the wheat” trilogy around a real event, Norris’s overall strategy is to record historical reality and celebrate it within a larger, reassuring narrative of enlarged production and circulation. The reason that there is so much focus on writing and recording in the novel, I argue, is that Norris sees writing itself as crucial to the history of the West, and hopes, through his own writing, to participate in it. What we see throughout the book is a consistent reversal of commonsense causality: production depends on consumption, the stability of the continent depends on overseas empire, and physical production depends on writing and information management. This is how we should understand the relationship between writing and the land in The Octopus: writing is practical, supporting the development of industrial farming to the point of export to China in a new food empire. As portrayed in the novel, the Mussel Slough incident is a symptom of the lack of access to sufficient demand for industrializing U.S. agriculture. For as the ranches become connected to a global food market, they are exposed both to greater opportunities and to increasingly volatile risks. Before the production process is even introduced in the novel, Norris highlights the communications technologies that make “the office […] the nerve-centre of the entire ten thousand acres of Los Muertos.” Magnus and his son Harran would sit up half the night watching “the most significant object in the office,” the stock ticker. The occasions for these transcendent feelings of connection are foreign crises that affect the price of their own wheat. Yet because circulation is limited by the railroad—its physical and geographical capacity as well as its monopolistic organization—there is an equally limited amount of profit that the railroad operators and the ranchers must fight over. This is the central contradiction of the novel, as Norris relates the railroad both to a system of veins that facilitates circulation and to an octopus that strangles the full vital force of production. While the ranchers are awaiting the results of their legal case, the character of Cedarquist gives a long speech proposing the China market as the only long-term solution for American production. A former industrialist transitioning into shipbuilding, he addresses the opportunities made possible by the Spanish-American War, speaking as an oracle from the past to the “youngsters” reading the novel at the turn of the century: “Our century is about done. The great word of this nineteenth century has been Production. The great word of the twentieth century will be—listen to me, you youngsters—Markets.”
Cedarquist goes on to explain the fundamental problem of the business cycle, that production must expand to stay competitive, but the saturation of the market leads to bankruptcy for most producers and consolidation of industry into fewer large corporations. Faced with certain degeneracy and death, a staple of the naturalist decline narrative, the booster provides a solution that will save the country: “We must march with the course of empire, not against it. I mean, we must look to China.” Empire—like the wheat or the railroad—is propelled by quasi-natural forces that individuals can neither help nor hinder. This speech takes place at the midpoint of the novel, and the development of the plot ultimately vindicates Cedarquist’s logic, ending with the wheat harvest shipping out for famine relief in India, understood as the transpacific test run for the ships that will export future harvests to China.

The findings in this research are also intended to serve as a quantitative tool to support decision makers

Following a global trend, California has warmed in recent decades, with more rain than snow in total precipitation volume. Increasing temperatures are melting snowpack earlier in the year and pushing the snowline to higher elevations, resulting in less snowpack storage. The current trend is projected to become more frequent and persistent for the region. As a result, surface water supply is projected to erode with time, while rainfall will experience increased variability, possibly leading to more frequent and extensive flooding. Rising sea levels will also increase susceptibility to coastal and estuarine flooding and salt water intrusion into coastal groundwater aquifers. In California, sea level is estimated to rise between 150 and 610 mm by 2050. As the reliability of surface water is reduced by the effects of climate change, if water reclamation is not implemented with higher market penetration, the demand on groundwater pumping is expected to increase, resulting in higher energy usage for crop irrigation. Our calculations show that for every percent increase in groundwater pumping over 2015 values, the state would consume an additional 323 GWh y^-1 of energy, generating a net increase of 8 x 10^4 MTCO2E y^-1. This additional energy usage will amount to approximately 43.7 million USD for every percent increase in groundwater pumping applied to crop irrigation, calculated in 2015 dollars. Further research is warranted to determine the effect of climate change on the carbon footprint associated with the energy requirements for irrigation water, particularly for crops grown exclusively for export, and how this carbon emission compares with other societal compartments of the energy portfolio. A sensitivity analysis was performed to show the effect of variable k on the overall carbon footprint associated with the energy savings of applying reclaimed water in lieu of traditional groundwater pumping. For this analysis, k values ranging between 0.3 and 0.7 kgCO2eq kWh^-1 were used to account for the different k within the spatial domain analyzed in our study.
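As a rough illustration of this arithmetic, the sketch below recomputes the incremental cost and emissions per one-percent increase in pumping. The 323 GWh y^-1 figure and the 0.3-0.7 k range are from the text; the unit electricity price is an assumption back-calculated from the reported 43.7 million USD.

```python
# Incremental energy, cost, and emissions per 1% increase in groundwater
# pumping. EXTRA_ENERGY_GWH is from the text; PRICE_USD_PER_KWH is an
# assumed value chosen to reproduce the reported ~43.7 M USD.
EXTRA_ENERGY_GWH = 323.0
PRICE_USD_PER_KWH = 0.135

extra_kwh = EXTRA_ENERGY_GWH * 1e6
print(f"Cost: {extra_kwh * PRICE_USD_PER_KWH / 1e6:.1f} M USD per percent")

for k in (0.3, 0.5, 0.7):            # kgCO2eq per kWh, sensitivity range
    tonnes = extra_kwh * k / 1e3     # kg -> metric tonnes CO2-eq
    print(f"k = {k}: {tonnes:.2e} MTCO2E per percent")
```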

Furthermore, this sensitivity analysis addresses the global drive to mandate increasing shares of renewables in power generation portfolios. For example, California Senate Bill No. 2 (2011) requires electric service providers to increase procurement from eligible renewable energy resources from 20% to 33% by 2020.

In 1994, in its General Assembly meeting to combat desertification in countries experiencing serious droughts, the United Nations defined arid and semi-arid regions as areas having a ratio of annual precipitation to potential evapotranspiration within the range of 0.05 to 0.65. According to this definition, regions in California and other Mediterranean countries such as Chile, Spain, France, Italy, South Africa, and portions of Australia are classified as arid and semi-arid regions. Other regions of the world, such as Central Asia, South Asia, East and Southern Africa, Central Africa, and West Africa, also meet this definition. The information presented in our research is intended to serve as a baseline for reference in areas sharing similar climate conditions as defined by the UNCCD. The study found that the current use of reclaimed water for the agricultural industry in California is very low, averaging 1% for the period 1998-2010. For every percent increase in reclaimed water use in agriculture, the resulting energy saving is 187 GWh yr^-1, which at current energy costs equates to more than 25 million USD. Aside from the energy saving and economic benefit, the application of reclaimed water for crop irrigation also produces a direct safeguard of 4.2 x 10^8 m^3 in groundwater supply and a reduction in carbon footprint of 4.68 x 10^7 MTCO2E y^-1. We calculated the energy savings, carbon footprint reduction, and economic benefits of increasing reclaimed water use above the current 1% for both the current power generation portfolio and the projected increase of renewable energy. Even in the scenario of a substantial reduction of CO2-equivalent emissions by meeting and exceeding targets for renewable energy, the increase in reclaimed water use would still provide a net carbon footprint reduction. Figure 4-7 shows the results of our model calculations. This research is intended to serve as a baseline reference and as a planning tool for water resources planners. Specific location, availability of reclaimed water supply, conveyance infrastructure, and methods of treatment will influence the calculated results and associated costs presented.
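The UNCCD definition above reduces to a simple aridity index, P/PET. A minimal sketch of that classification follows; the 0.05-0.65 arid/semi-arid band is from the text, while the finer sub-class cut-offs are the conventional UNEP thresholds and are an added assumption.

```python
def aridity_class(precip_mm: float, pet_mm: float) -> str:
    """Classify a region by the aridity index P/PET. The 0.05-0.65
    arid/semi-arid band follows the UNCCD definition cited above;
    sub-class cut-offs are conventional UNEP values (an assumption)."""
    ai = precip_mm / pet_mm
    if ai < 0.05:
        return "hyper-arid"
    if ai < 0.20:
        return "arid"
    if ai < 0.50:
        return "semi-arid"
    if ai < 0.65:
        return "dry sub-humid"
    return "humid"

# e.g., ~300 mm of rain against ~1400 mm of potential evapotranspiration
print(aridity_class(300, 1400))  # AI ~= 0.21 -> "semi-arid"
```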

Nonetheless, the results of this study further our current understanding of the role of reclaimed water in curbing groundwater withdrawal in arid and semi-arid regions like Southern California, by providing the context of its existing usage, estimated energy consumption, carbon footprint reduction, and the potential monetary savings that can be realized. The trends observed in this study may be applicable to other regions of the world where water scarcity, energy costs, and climatic conditions require the use of reclaimed water as a sustainable water source. The research hypothesis tested true: the application of reclaimed water not only preserves groundwater resources but also decreases the energy footprint and carbon emissions associated with crop irrigation. The results show that there are savings in both groundwater supply and energy resources when applying reclaimed water for crop irrigation. For California, the average energy requirement for groundwater pumping was 0.770 kWh m^-3, while reclaimed water production with gravity filtration required 0.324 kWh m^-3. Hence, the energy advantage of applying reclaimed urban wastewater for crop irrigation over groundwater pumping within this spatial domain would be 0.446 kWh m^-3. The calculated energy savings for applying reclaimed water in lieu of groundwater amount to a 57.9% reduction in energy usage. Annually, this amounts to approximately 187 GWh y^-1 of energy savings for California, creating a reduction of 4.68 x 10^7 MTCO2E in carbon emissions. If reclaimed water use were increased from 1% to 5%, 10%, 15%, or 20%, the respective total energy savings, monetary savings, and carbon footprint reduction would increase linearly. Based on the calculations, reclaimed water required the least amount of energy, whereas ocean desalination had an energy intensity approximately 11 times higher. When compared to traditional groundwater pumping, the energy intensity associated with water reclamation was 58% lower, highlighting the importance of reclaimed water as a potentially competitive source.
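The headline percentages follow directly from the two energy intensities; a worked check is below, with the annual reclaimed-water volume back-calculated rather than taken from the text.

```python
# Worked check of the energy figures quoted above (intensities are from
# the text; the implied annual volume is a back-calculation, not a
# reported input).
GROUNDWATER_KWH_M3 = 0.770
RECLAIMED_KWH_M3 = 0.324

advantage = GROUNDWATER_KWH_M3 - RECLAIMED_KWH_M3      # 0.446 kWh m^-3
reduction = 100 * advantage / GROUNDWATER_KWH_M3       # ~57.9%
print(f"Advantage: {advantage:.3f} kWh/m^3 ({reduction:.1f}% reduction)")

# 187 GWh/yr of savings implies ~4.2e8 m^3/yr of reclaimed water applied,
# consistent with the groundwater safeguard quoted above:
volume_m3 = 187e6 / advantage                          # kWh / (kWh per m^3)
print(f"Implied volume: {volume_m3:.2e} m^3 per year")
```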

Quantitative research in the field of exported water is still very much underdeveloped, despite the many virtual water studies conducted over the years. The data presented in this research can serve as an estimate, but further research should address the uncertainty. Enhanced procedures and references for accounting for exported water should be developed and disseminated. These results highlight the need to consider water use efficiency in agricultural irrigation. Our findings suggest that California’s water resources are being exported outside its borders in magnitudes greater than the water consumed by the municipalities within the state. Thus, the state might be vulnerable to water-supply constraints if the trend continues indefinitely into the future. With better water management practices, sound public policies, and increased investment in water infrastructure and efficiency, farmers and other water users can increase the yield of each water unit consumed. The current scenario appears to promote a positive feedback mechanism of resource draining, resulting in environmental consequences for California’s water resources.

Under growing water pressure, California agriculture is beginning to explore innovative uses of reclaimed water. Some growers already use reclaimed wastewater in different ways, depending on the level of treatment the water receives. Most common is the use of secondary-treated wastewater on fodder and fiber crops. Increasingly, however, growers are irrigating fruits and vegetables with tertiary-treated wastewater, producing high-quality crops and high yields. Wong et al. reported that the cities of Visalia and Santa Rosa have developed projects to irrigate more than 6,000 acres of farmland, including a walnut orchard, with secondary-treated wastewater. Though the projects were primarily designed to reduce wastewater discharge, both cities have gained from the water-supply benefits of applying reclaimed water.

The mix of California crops and planting patterns has been changing. These changes are the result of decisions made by large numbers of individuals, rather than intentional actions by state policymakers. California farmers are planting more and more high-value fruit and vegetable crops, which have lower water requirements than the field and grain crops they are replacing. They can also be irrigated with more accurate and efficient precision irrigation technologies. As a result, California is slowly increasing the water productivity of its agricultural sector, increasing the revenue or yield of crops per unit of water consumed. Over time, these changes have the potential to dramatically change the face of California agriculture, making it even more productive and efficient than it is today, while saving vast quantities of water.

In the past two decades, California farmers have made considerable progress converting appropriate cropland and crops to water-efficient drip irrigation. Much of this effort has focused on orchard, vineyard, and berry crops. Recent innovative efforts now suggest that row crops not previously irrigated with drip systems can be successfully and economically converted. This case provides the example of two farmers converting bell pepper row crops to drip irrigation with great success. Subsurface drip irrigation substantially increased pepper yields, decreased water consumption, and greatly improved profits.

Due to the limited availability of public data, our research could only examine 50 of the top exporting commodities in California. According to the California Department of Food and Agriculture, there are 305 known crops produced in the region. Additional research should be extended to assess the exported water of the remaining 255 crops and to evaluate the overall effects of evapotranspiration for all crops commercially produced in California. Since many regions of California are classified under the UNCCD definition as arid and semi-arid areas sharing similar climate conditions with other Mediterranean countries, such as Chile, Spain, France, Italy, South Africa, and portions of Australia, the information presented in our research model can be used as a baseline for calculating the exported water of other crops grown in similar climate conditions. A previous study by Nguyen et al. (2015) reported that groundwater pumping consumes approximately 1.5 x 10^4 GWh yr^-1, making the energy requirement for groundwater irrigation the largest contributor in the food production process. As shown by the results of our calculations, the majority of exported water was in the form of evapotranspiration induced by crop irrigation. Thus, further research is warranted to examine the energy exported as a result of induced evapotranspiration, beyond the energy required to irrigate. This research would shed light on the overall energy consumption in the entire food production process, including energy expended within a spatial domain and the exported quantity induced via evapotranspiration. One area of research which has not been conducted is the effect of the positive feedback mechanism of the overall exported energy of crops as a result of induced evapotranspiration. Future research should be extended to cover all remaining crops commercially produced in California. The outcomes of this model can be extended to compare the overall exported energy from irrigation that is lost via induced evapotranspiration with the energy consumption of other sectors of the California economy. The results of this future study will help close the loop on the life-cycle energy consumption analysis for the California agriculture industry.

Maximizing agricultural crop yield is an important goal for several reasons. First, a growing worldwide population will generate increased demand for agricultural resources. Since expanding the land area devoted to agriculture is often unfeasible, or would involve the destruction of sensitive landscapes such as forests and wetlands, the only way to meet this demand will be to increase the crop yield generated from existing farmland. Second, there are substantial economic incentives for profit-seeking farmers to maximize the yield of their crops, especially given the low profit margins typical of commercial agriculture.

We used Geographic Information System software to geocode the new addresses and obtain coordinates

There are no biomarkers available to assess human exposure to fumigants in epidemiologic studies. Residential proximity to fumigant use is currently the best method to characterize potential exposure to fumigants. Since 1990, California has maintained a Pesticide Use Reporting (PUR) system, which requires commercial growers to report all agricultural pesticide use. A study using PUR data showed that methyl bromide use within an ~8 km radius around monitoring sites explained 95% of the variance in methyl bromide air concentrations, indicating a direct relationship between nearby agricultural use and potential community exposure. In the present study, we investigate associations of residential proximity to agricultural fumigant usage during pregnancy and childhood with respiratory symptoms and pulmonary function in 7-year-old children participating in the Center for the Health Assessment of Mothers and Children of Salinas (CHAMACOS) study, a longitudinal birth cohort study of primarily low-income Latino farm worker families living in the agricultural community of the Salinas Valley, California. We enrolled 601 pregnant women in the CHAMACOS study between October 1999 and October 2000. Women were eligible for the study if they were ≥18 years of age, <20 weeks gestation, planning to deliver at the county hospital, English or Spanish speaking, and eligible for low-income health insurance. We followed the women through delivery of 537 live-born children. Research protocols were approved by The University of California, Berkeley, Committee for the Protection of Human Subjects. We obtained written informed consent from the mothers and children’s oral assent at age 7. Information on respiratory symptoms and use of asthma medication was available for 347 children at age 7.

Spirometry was performed by 279 of these 7-year-olds. We excluded participants from the prenatal analyses if we had residential history information for less than 80% of the pregnancy, and from the postnatal analyses if we had residential history information for less than 80% of the child’s lifetime from birth to the date of the 7-year assessment. Prenatal estimates of proximity to fumigant applications and relevant covariate data were available for 257 children, and postnatal estimates were available for 276 children, for whom we obtained details of prescribed asthma medications and respiratory symptoms. Prenatal estimates of proximity to fumigant applications and relevant covariate data were available for 229, 208, and 208 children with FEV1, FVC, and FEF25–75 measurements, respectively. Postnatal estimates were available for 212, 193, and 193 children with FEV1, FVC, and FEF25–75 measurements, respectively. A total of 294 participants were included in either the prenatal or postnatal analyses. Participants included in this analysis did not differ significantly from the original full cohort on most attributes, including maternal asthma, maternal education, marital status, poverty category, and child’s birth weight. However, mothers of children included in the present study were slightly older and more likely to be Latino than those from the initial cohort. Women were interviewed twice during pregnancy, following delivery, and when their children were 0.5, 1, 2, 3.5, 5, and 7 years old. Information from prenatal and delivery medical records was abstracted by a registered nurse. Home visits were conducted by trained personnel during pregnancy and when the children were 0.5, 1, 2, 3.5, and 5 years old. At the 7-year-old visit, mothers were interviewed about their children’s respiratory symptoms, using questions adapted from the International Study of Asthma and Allergies in Childhood questionnaire. Additionally, mothers were asked whether the child had been prescribed any medication for asthma, wheezing/whistling, or tightness in the chest. We defined respiratory symptoms as a binary outcome based on a positive response at the 7-year-old visit to any of the following during the previous 12 months: wheezing or whistling in the chest; wheezing, whistling, or shortness of breath so severe that the child could not finish saying a sentence; trouble going to sleep or being awakened from sleep because of wheezing, whistling, shortness of breath, or coughing when the child did not have a cold; or having to stop running or playing active games because of wheezing, whistling, shortness of breath, or coughing when the child did not have a cold. In addition, a child was counted as having respiratory symptoms if the mother reported use of asthma medications, even in the absence of the above symptoms.

Latitude and longitude coordinates of participants’ homes were collected during home visits during pregnancy and when the children were 0.5, 1, 2, 3.5, and 5 years old using a handheld Global Positioning System unit. At the 7-year visit, mothers were asked if the family had moved since the 5-year visit, and if so, the new address was recorded. Residential mobility was common in the study population. We estimated the use of agricultural fumigants near each child’s residence using a GIS, based on the location of each child’s residence and the Pesticide Use Report data. Reporting of all agricultural pesticide applications is mandatory in California, including the active ingredient, quantity applied, acres treated, crop treated, and date and location within 1-square-mile sections defined by the Public Land Survey System (PLSS). Before analysis, the PUR data were edited to correct for likely outliers with unusually high application rates using previously described methods. We computed nearby fumigant use (the amount applied within each buffer distance) for combinations of distance from the residence and time periods. The range of distances best captured the spatial scale that most strongly correlated with concentrations of methyl bromide and 1,3-DCP in air. We weighted fumigant use near homes based on the proportion of each square-mile PLSS section that was within each buffer surrounding a residence. To account for the potential downwind transport of fumigants from the application site, we obtained data on wind direction from the closest meteorological station. We calculated wind frequency as the proportion of time that the wind blew from each of eight directions during the week after the fumigant application, to capture the peak time of fumigant emissions from treated fields. We determined the direction of each PLSS section centroid relative to residences and weighted fumigant use in a section according to the percentage of time that the wind blew from that direction during the week after application.
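A minimal sketch of this buffer- and wind-weighting logic is below. The per-section records and field names are hypothetical; in the actual analysis, the fraction of each section inside the buffer would come from a GIS overlay.

```python
# Wind-weighted fumigant use near one residence: sum over nearby PLSS
# sections of (kg applied) x (fraction of section inside the buffer)
# x (frequency of wind blowing from that section's direction).
sections = [
    {"use_kg": 120.0, "frac_in_buffer": 0.40, "wind_freq": 0.25},
    {"use_kg": 300.0, "frac_in_buffer": 0.10, "wind_freq": 0.05},
]

weighted_use = sum(
    s["use_kg"] * s["frac_in_buffer"] * s["wind_freq"] for s in sections
)
print(f"Wind-weighted fumigant use near home: {weighted_use:.1f} kg")
```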

We summed fumigant use over pregnancy, from birth to the 7-year visit, and for the year prior to the 7-year visit, yielding estimates of the wind-weighted amount of each fumigant applied within each buffer distance and time period around the corresponding residences for each child. We log10-transformed continuous fumigant use variables to reduce heteroscedasticity and the influence of outliers, and to improve the fit of the models. We used logistic regression models to estimate odds ratios of respiratory symptoms and/or asthma medication use with residential proximity to fumigant use. Our primary outcome was respiratory symptoms, defined as positive if during the previous 12 months the mother reported for her child any respiratory symptoms or the use of asthma medications, even in the absence of such symptoms. We also examined asthma medication use alone. The continuous lung function measurements were approximately normally distributed; therefore, we used linear regression models to estimate the associations with residential proximity to fumigant use. We estimated the associations using the highest spirometric measures for children who had one, two, or three maneuvers. We fit separate regression models for each combination of outcome, fumigant, time period, and buffer distance. We selected covariates a priori based on our previous studies of respiratory symptoms and respiratory function in this cohort. For logistic regression models of respiratory symptoms and asthma medication use, we included maternal smoking during pregnancy and signs of moderate or extensive mold noted at either home visit. We also included season of birth to control for other potential exposures that might play a causal role in respiratory disease, such as pollen, dryness, and mold. We defined the seasons of birth (pollen, dry, mold) based on measured pollen and mold counts during the years the children were born. In addition, we controlled for allergy using a proxy variable: runny nose without a cold in the previous 12 months, reported at age 7. Because allergy could be on the causal pathway, we also re-ran all models without adjusting for allergy. Results were similar, and therefore we only present models controlling for allergy. Additionally, for spirometry analyses only, we adjusted for the technician performing the test and the child’s age, sex, and height. We included household food insecurity score during the previous 12 months, breastfeeding duration, and whether furry pets were in the home at the 7-year visit to control for other factors related to lung function. We also adjusted for mean daily concentrations of particulate matter with aerodynamic diameter ≤2.5 µm (PM2.5) during the first 3 months of life and whether the home was located ≤150 m from a highway in the first year of life, determined using GIS, to control for air pollution exposures related to lung function. We calculated average PM2.5 concentration in the first 3 months of life using data from the Monterey Unified Air Pollution Control District air monitoring station.
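The grid of models described here is straightforward to express in code. Below is a hedged sketch of a single outcome/fumigant/buffer/period logistic model using statsmodels; the data are synthetic and every column name is hypothetical, standing in for the cohort variables described above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the analysis dataset: one row per child.
rng = np.random.default_rng(0)
n = 250
df = pd.DataFrame({
    "resp_symptoms": rng.integers(0, 2, n),           # binary outcome
    "mebr_prenatal_kg": rng.lognormal(3, 1, n),       # wind-weighted use
    "smoked_pregnancy": rng.integers(0, 2, n),
    "mold_home": rng.integers(0, 2, n),
    "season_birth": rng.choice(["pollen", "dry", "mold"], n),
    "allergy_runny_nose": rng.integers(0, 2, n),
})
# log10 transform as in the text; the +1 guards against zero-use homes
# and is an assumption, not a documented step.
df["log10_use"] = np.log10(df["mebr_prenatal_kg"] + 1)

model = smf.logit(
    "resp_symptoms ~ log10_use + smoked_pregnancy + mold_home"
    " + C(season_birth) + allergy_runny_nose",
    data=df,
).fit(disp=0)
print(np.exp(model.params))  # ORs; log10_use OR is per 10-fold use increase
```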

In all lung function models of postnatal fumigant use, we included prenatal use of that fumigant as a confounder. To test for non-linearity, we used generalized additive models with three-degrees-of-freedom cubic spline functions, including all the covariates in the final lung function models. None of the tests for departure from linearity were significant; therefore, we expressed fumigant use on the continuous log10 scale in multivariable linear regression models. Regression coefficients represent the mean change in lung function for each 10-fold increase in wind-weighted fumigant use. We conducted sensitivity analyses to verify the robustness and consistency of our findings. We included other estimates of pesticide exposure in our models that have been related to respiratory symptoms or lung function in previous analyses of the CHAMACOS cohort. Specifically, we included child urinary concentrations of dialkylphosphate metabolites, a non-specific biomarker of organophosphate pesticide exposure, using the area under the curve calculated from samples collected at 6 months and 1, 2, 3.5, and 5 years of age. We also included agricultural sulfur use within 1 km of residences during the year prior to lung function measurement. We used methods similar to those described above for fumigants to calculate wind-weighted sulfur use, except with a 1-km buffer and the proportion of time that the wind blew from each of eight directions during the previous year. The inclusion of these two pesticide exposures reduced our study population with complete data for respiratory symptoms and lung function. Previous studies have observed an increased risk of respiratory symptoms and asthma with higher levels of p,p’-dichlorodiphenyltrichloroethane (DDT) or p,p’-dichlorodiphenyldichloroethylene (DDE) measured in cord blood. As a sensitivity analysis, we included log10-transformed lipid-adjusted concentrations of DDT and DDE measured in prenatal maternal blood samples. We also used Poisson regression to calculate adjusted risk ratios for respiratory symptoms and asthma medication use for comparison with the ORs estimated using logistic regression, because ORs can overestimate risk in cohort studies. In additional analyses of spirometry outcomes, we excluded those children who reported using any prescribed medication for asthma, wheezing, or tightness in the chest during the last 12 months, to investigate whether medication use may have altered spirometry results. We ran models including only those children with at least two acceptable, reproducible maneuvers. We ran all models excluding outliers identified by studentized residuals greater than three. We assessed whether asthma medication or child allergies modified the relationship between lung function and fumigant use by creating interaction terms and running stratified models. To assess potential selection bias due to loss to follow-up, we ran regression models that included stabilized inverse probability weights. We determined the weights using multiple logistic regression with inclusion as the outcome and independent demographic variables as the predictors. Data were analyzed with Stata and R. We set statistical significance at p<0.05 for all analyses, but since we evaluated many combinations of outcomes, fumigants, distances, and time periods, we assessed adjustment for multiple comparisons using the Benjamini-Hochberg false discovery rate at p<0.05.
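For the multiple-comparisons step, the Benjamini-Hochberg procedure is available off the shelf in statsmodels; a small sketch with made-up p-values follows (the study's actual p-values are not reproduced here).

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

# Hypothetical p-values from the outcome x fumigant x distance x period grid.
pvals = np.array([0.001, 0.004, 0.012, 0.030, 0.210, 0.440, 0.780])
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")

for p, pa, r in zip(pvals, p_adj, reject):
    print(f"p = {p:.3f}  BH-adjusted = {pa:.3f}  significant = {r}")
```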
Most mothers were born in Mexico, were below age 30 at the time of delivery, and were married or living as married at the time of study enrollment. Very few mothers smoked during pregnancy.

We measured changes in total distance moved and photomotor response from behavioral assays

We initiated all acute exposure tests within 24 h of surface water collection. Based on high invertebrate mortality previously observed in water from two of the sites, we made a dilution series of our water samples to capture a wider range of toxic effects, including mortality and swimming behavior. For before-first-flush sampling, we used a dilution series of surface water concentrations—100%, 60%, 35%, 20%, and 12%—in order to evaluate the potential for a wide range of toxicological outcomes. We thoroughly mixed ambient surface water samples by agitation immediately before creating the dilutions in order to homogenize the turbidity levels between dilutions. To create the dilution series, we added control water to ambient surface water to achieve each desired concentration. We repeated this procedure at the 48 h point when performing an 80% water change on all treatment groups. For after-first-flush sampling, we used a broader dilution series—100%, 30%, 20%, 12%, and 6%—in anticipation of higher chemical concentrations based on previous studies. We tested temperature, total alkalinity, hardness, pH, and dissolved oxygen in situ using a YSI EXO1 multi-parameter water quality sonde at both test initiation and 48 h to ensure that the water remained within the acceptable ranges for D. magna. We chose exposure concentrations of CHL and IMI to mimic environmentally relevant concentrations found in monitored agricultural waterways, as well as experimental EC50/LC50 values. For both CHL and IMI, the low and high concentrations were 1.0 µg/L and 5.0 µg/L, respectively. We purchased chemicals from AccuStandard. We dissolved CHL in pesticide-grade acetone to make chemical stock solutions, subsequently diluting with EPA synthetic control water to a final acetone concentration of 0.1 mL/L in exposure water. Due to its solubility, no solvent was needed to make an IMI stock solution. To account for this difference, we compared CHL treatment data to an acetone solvent control, and IMI to the EPA synthetic control water. The California Department of Food and Agriculture Center for Analytical Chemistry analyzed these chemical stock solutions via LC-MS/MS.
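Mixing the dilution series is simple volume arithmetic; a sketch follows, with an arbitrary 1-L batch volume as the only assumed quantity.

```python
# Volumes of ambient surface water and control water needed for each
# before-first-flush treatment level, per 1 L batch (batch size assumed).
BATCH_L = 1.0
for pct in (100, 60, 35, 20, 12):
    ambient = BATCH_L * pct / 100
    control = BATCH_L - ambient
    print(f"{pct:>3}% treatment: {ambient:.2f} L ambient + {control:.2f} L control")
```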

Chemical analysis of field water was conducted at the Center for Analytical Chemistry, California Department of Food and Agriculture, using multi-residue liquid chromatography tandem mass spectrometry and gas chromatography-mass spectrometry methods. Chemicals were analyzed following procedures described in the Monitoring Prioritization Model as described on the CDPR’s website. Chlorantraniliprole and IMI stock solutions were also analyzed to confirm exposure concentrations. The method detection limit and reporting limit for each analyte are listed in Tables S3–S6. Laboratory QA/QC followed CDPR guidelines provided in the Standard Operating Procedure CDPR SOP QAQC012.00. Extractions included laboratory blanks and matrix spikes. We performed behavioral assays at the 96 h time point for both the chemical exposures and the field sampling exposures. We designed behavioral assays using Ethovision XT™ software, and adjusted the video settings to maximize the software’s detection of D. magna. We gently transferred organisms from test vessels into randomized wells of a non-treated 24-round-well cell culture plate containing 1 mL of control water at 20 °C. We then left them to habituate for at least one hour before moving them to our behavioral assay setup for an additional five-minute acclimation period. The DanioVision™ Observation Chamber had a temperature-controlled water flow-through system, allowing us to keep organisms at optimal temperature throughout the assay. Our CCD video camera recorded the entire plate in which the organisms were held throughout the assay, so 24 individuals were assessed at the same time. Using the Ethovision XT™ software, we then analyzed each video frame, identifying the location of the organisms at each time point. Calculations were carried out to produce quantified measurements of the organisms’ behavior, including both total distance moved and velocity. This assessment of horizontal movement over time, measured as total distance moved, is useful when trying to determine changes in the locomotor ability of organisms after exposure to pesticides. This system also allows us to control the dark:light cycle throughout the assay in order to measure endpoints related to a light stimulus, including photomotor response. We measured changes in photomotor response as the change in mean distance traveled between the last minute of a light photoperiod and the first minute of the dark photoperiod, as described in Steele et al.
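A sketch of that photomotor calculation from a 1-min-binned tracking export is below; the tiny data frame and the light/dark minute indices are hypothetical.

```python
# Photomotor response per individual: distance in the first dark minute
# minus distance in the last light minute, from 1-min bins.
import pandas as pd

df = pd.DataFrame({
    "subject": [1, 1, 1, 1, 2, 2, 2, 2],
    "minute":  [9, 10, 11, 12, 9, 10, 11, 12],   # light ends at min 10
    "distance_mm": [55, 60, 18, 15, 48, 50, 40, 35],
})
LAST_LIGHT_MIN, FIRST_DARK_MIN = 10, 11

pivot = df.pivot(index="subject", columns="minute", values="distance_mm")
photomotor = pivot[FIRST_DARK_MIN] - pivot[LAST_LIGHT_MIN]
print(photomotor)  # negative values indicate a freeze response
```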

We checked data sets for normality using a Shapiro–Wilk test and applied log transformations before statistical analysis. We used a repeated measures ANOVA to analyze effects over the light period, with treatment as the between-subject factor and time as the within-subject factor. We applied Dunnett's multiple comparison test for post hoc evaluation. Data are presented as mean ± standard error of the mean. We exported summary statistics from Ethovision XT using 1 min time bins for each treatment and analyzed the data in GraphPad Prism, version 9.0. We determined the significance of mortality data by one-way analysis of variance followed by Dunnett's test for multiple comparisons, using GraphPad Prism, version 8.0. To measure the photomotor response of the organisms, we calculated the difference in distance moved between the last minute of the dark period and the first minute of the subsequent light period for each individual. These data sets were then log transformed and analyzed in GraphPad Prism using a one-way ANOVA with Tukey's post hoc test of multiple comparisons.

Chemicals detected in the water samples collected in September are shown in Table S1 and are described in further detail in Stinson et al. 2021, a parallel study. In brief, of 47 pesticides analyzed, 17 were detected in our surface water samples, and each site contained a minimum of 7 target pesticides. Chlorantraniliprole was detected at all sites at concentrations below the acute lethality benchmarks for invertebrate species exposure. The neonicotinoid IMI was detected above the EPA benchmark for chronic invertebrate exposure, and above the acute invertebrate level at Alisal Creek. Neonicotinoids were detected at all sites. Organophosphates were detected at two of the sites: Quail Creek and Alisal Creek. Several pyrethroids, including permethrin, lambda-cyhalothrin, and bifenthrin, were detected at levels at or above an EPA benchmark. Several other chemical detections exceeded EPA benchmark values. Notably, methomyl was detected at Quail Creek at nearly three times the limit for chronic fish exposure, and above the EPA benchmark for chronic invertebrate exposure at all sites. Overall, Salinas River contained the smallest total number of chemicals at the lowest concentrations of the three sites we examined. Chemicals detected in water samples collected in November are shown in Table S2. Of 47 pesticides analyzed, 27 were detected in our surface water samples, and each site contained a minimum of 21 target pesticides.

Chlorantraniliprole was detected at all sites below the lowest benchmark. The neonicotinoid IMI was detected above the EPA benchmark for chronic invertebrate exposure at Salinas River, Alisal Creek, and Quail Creek. Neonicotinoids and organophosphates were detected at all sites. Several pyrethroids were detected at levels at or above an EPA benchmark, including permethrin, cyfluthrin, lambda-cyhalothrin, bifenthrin, fenpropathrin, and esfenvalerate. Overall, Salinas River contained the smallest total number of pesticides at the lowest concentrations of the three sites we examined. Repeated measures ANOVA showed there were no time-by-treatment interactions, but there were significant effects of treatment on locomotor activity. Daphnia magna exposed to 35% and 20% surface water from Alisal Creek exhibited significant hypoactivity compared to the control group under light conditions. Additionally, D. magna exposed to 20% surface water from Alisal Creek exhibited significant hypoactivity compared to the control group under the dark conditions of the behavioral assay. Daphnia magna exposed to the highest tested concentration of surface water from Alisal Creek were significantly hypoactive during the last 5 min of the exposure period. Organisms exposed to all concentrations of surface water from Salinas River were hyperactive under light conditions, with the two highest concentrations showing the greatest hyperactivity when compared to controls. There was no difference in total distance moved between organisms exposed to the Salinas River dilution series and the control group individuals in the dark period. The photomotor response for organisms exposed to surface water from both Alisal Creek and Salinas River followed a clear log-linear dose-response curve. Both the control and solvent control groups exhibited a reduction in movement consistent with a freeze response. Overall, Alisal Creek exposed organisms showed a greater magnitude of change than Salinas River exposed organisms. There were significant changes in photomotor response across all treatment groups, though responses differed between sampling sites. Daphnia magna exposed to water samples from Quail Creek demonstrated an inverse dose-response pattern, where exposure to the lowest dilution gave the most significant change in photomotor response, and exposure to the highest dilution was not significantly different from control groups. The Alisal Creek treatment groups exhibited a non-monotonic dose response, with organisms exposed to the medium dosage having little to no response to light stimulus. The low dilution had a significantly lessened photomotor response pattern, and the highest dilution was not significantly different from the control group. Daphnia magna exposed to all concentrations of surface water from Salinas River had significantly altered photomotor responses as compared to controls. Organisms exposed to undiluted water samples from Salinas River demonstrated an opposite startle response of equal magnitude to the control's freeze response.

Physicochemical parameters for the exposure period are listed in Table S9. We measured no significant mortality in D. magna following the 96 h acute exposure to CHL or IMI at either the high or low concentration.

Repeated measures ANOVA showed there were no time-by-treatment interactions for any experiment, but there were significant effects of both time and treatment, individually, on locomotor activity in the CHL/IMI data sets. Both the control and solvent control groups exhibited a large photomotor response consistent with freezing. After exposure to the low level of CHL, D. magna showed hypoactivity under dark conditions. For D. magna exposed to both low and high treatments of IMI, we saw significant hypoactivity during the entire behavior assay period, under both light and dark conditions. Exposure to mixtures of CHL and IMI resulted in divergent total distance moved measurements under both light and dark conditions. Individuals from the low CHL/low IMI treatment group were hypoactive in dark conditions. In contrast with the single chemical exposures, individuals from the high CHL/low IMI treatment group were hyperactive under light conditions. We measured significant changes in photomotor responses between the last 1 min of a light photoperiod and the first minute of the dark photoperiod. The change in total distance moved during the dark:light transition is shown in Figure 3D–F. For both CHL treatments, organisms exhibited no response to light stimulus, representing a nearly 60-fold difference in response from the control group. Organisms exposed to low IMI had an inverse response to light stimulus when compared to the control group, increasing their total distance moved in response to light stimulus. Organisms exposed to high IMI exhibited a reduction in their average total distance moved, but this response was fivefold smaller than that of controls. Mixtures of CHL and IMI resulted in the most divergent photomotor responses when compared with controls. Daphnia magna in all binary treatment groups, with the exception of the low CHL/low IMI group, showed an inverse photomotor response from controls.

Surface water at all sites contained CHL and IMI as components of complex mixtures, both before and after a first flush event. Several chemicals detected at these sites are known to have sublethal effects on D. magna, including IMI, CHL, bifenthrin, clothianidin, malathion, methomyl, and lambda-cyhalothrin. The changes in pesticide composition and concentration between the sampling dates concurred with results from previous chemical analyses in this region. Pesticides of concern, including CHL and IMI, were detected at higher concentrations after the first flush event. A study examining first flush toxicity in California found that the concentration of pollutants was between 1.2 and 20 times higher at the start of the rain season versus the end. Interestingly, the sampling site with the highest increase in concentration after first flush, for several pesticides of concern, was the Salinas River site.

Discharges from agricultural non-point sources are inherently difficult to monitor because they are diffuse in nature

Agencies that supported the survey included the Monterey County Farm Bureau, the University of California Extension, the Agriculture and Land Based Training Association, and the Agricultural Water Quality Agency. Each agency requested results from the survey, as well as a presentation to their organization. Additionally, I plan to distribute a two-page summary of results to all growers who participated in the survey. Another part of this doctoral research that helped forge partnerships was my work on Chapter 5. Data analysis in this chapter included spatial analysis of regional pesticide use over the past 13 years. In designing this chapter, I met with third-party monitoring agencies, G.I.S. technicians, and faculty members to ensure that the highest quality data were used and that the research results would be of use to growers and policymakers. The spatial analysis of several pesticides known to be sources of water column and sediment toxicity in the region shows the impacts, both negative and positive, of the primary regional agricultural water quality mandate, which specifically targets two organophosphate pesticides. Results have already been distributed to Regional Water Quality Control Board staff members, who have passed them along to other networks and agencies. Research results from this dissertation have been and will continue to be shared with academic audiences, agricultural operators, policymakers, water quality agencies, and the general public in peer-reviewed publications, conference proceedings, reports, magazine articles, poster presentations, and oral presentations. Links to all published research are posted on my graduate student website. Throughout the data collection process, I maintained thorough records in both my notebooks and on electronic devices, and all stored electronic data have been backed up and preserved. Records of all interviews, survey questions and responses, datasets, and methodologies were retained to ensure reproducibility. I received exemption from IRB review for both the interviews and the survey conducted in this research.

Agricultural non-point source pollution—runoff and leaching of nutrients, pesticides and soil sediments into nearby water bodies—is the chief impediment to achieving water quality objectives throughout the U.S. and Europe. Because these discharges are diffuse, policymakers cannot employ the old standbys used to regulate point sources of pollution, which are emitted from an identifiable pipe or outfall. Instead, regional, state, and federal agencies have typically relied on voluntary, incentive-based approaches to manage non-point source pollution. Such approaches have largely failed to control agricultural NPS pollution. In the U.S., agricultural pollution is the leading cause of pollution to rivers and lakes. And in Europe, agriculture contributes 50–80% of the total nitrogen and phosphorus loading to the region's fresh waters and sea waters. The inadequacies of current approaches have triggered academic and regulatory discussions about how to proceed with abating non-point sources. These issues pose particularly challenging questions about appropriate regulatory tools, jurisdictional boundaries, funding needs, monitoring requirements, pollution permit allocations and stakeholder collaboration. Drawing from the environmental policy and environmental economics literature as well as case studies from the U.S. and Europe, the aim of this chapter is to assess agricultural NPS pollution management approaches and the factors that drive or impede their implementation and enforcement. The E.U.'s recent Water Framework Directive presents an opportunity to build on lessons of the earlier-promulgated 1972 U.S. Clean Water Act, while the U.S. can benefit from the implementation and enforcement of effective European water pollution controls. This research presents several policy tool frameworks to help characterize the widespread non-point source pollution problem in the U.S. and Europe, distinguishing its unique set of hurdles from other environmental policy problems.

Findings suggest that controlling numerous diffuse sources of agricultural pollution requires an integrated approach that utilizes river basin management and a mix of policy instruments. Additionally, this chapter finds that transitioning from voluntary mechanisms to more effective instruments based on measurable water quality performance relies predominantly on three factors: more robust water quality monitoring data and models; local participation; and political will.

Since the passage of revolutionary water quality policies in the 1970s, the U.S. and Europe have seen significant water quality improvements in point source discharges—defined as any discernible, confined and discrete conveyance. Over the past 40 years, industrial pollution and discharges of organic wastes from urban areas and publicly owned treatment facilities have dropped substantially, and dissolved oxygen levels have increased downstream from point source pollution. This success can largely be attributed to the use of a transformative technology-based command-and-control approach, which employs standards to control pollutants at the point of discharge, setting uniform limitations based on the "Best Available Technology" for a given industry. Technology-based effluent limits have been enshrined in both the 1972 U.S. Clean Water Act and various European environmental policies. The technology-based regulatory framework skillfully transformed water quality regulation for point sources into a remarkably more streamlined and simplified system with successful results; it unfortunately neglected the different and more difficult task of controlling non-point source pollution. Instead, individual states in the U.S. and Member States/river basins in Europe have been entrusted with the monumental task of NPS pollution control. The 1972 Clean Water Act and subsequent amendments largely shape present-day water quality policies. During the drafting of the CWA, non-point source pollution was not perceived to be as serious a problem as point source pollution, and was considered only as an afterthought. Prior to 1972, the nation's general approach to water pollution was disjointed and highly variable—analogous to non-point source pollution regulation today. Control mechanisms were decentralized, which resulted in each state developing its own method of protecting water quality.

While several states attempted to implement innovative water quality standards and discharge permits, the vast majority failed to improve water quality conditions. A fundamental weakness of relying on ambient standards was that states needed to prove which polluters impaired water quality and to what extent. This endeavor was extremely difficult given that the regulatory agencies possessed very little data about the location, volume, or composition of industrial discharges. Even if data were available, water agencies were often understaffed, under-budgeted and had inadequate statutory authority. By the 1960s, many of the country's rivers and streams had reached such abominable conditions that a growing population of frustrated U.S. citizens turned to the federal government for help. After years of delay and struggle, the U.S. was ready to formulate a comprehensive, unified regulatory structure, resulting in the 1972 Clean Water Act. The Act employed a command-and-control approach to implement technology-based standards, enforced by National Pollutant Discharge Elimination System permits. This approach, aimed at controlling pollutants at the point of discharge, set uniform limitations based on the best available technology pertaining to a particular industrial category. To implement and monitor performance, every point source was required to obtain a permit to discharge. Under this innovative system, enforcement officials need only compare the permitted numerical limits with the permittee's discharge. Technology-based effluent limits have transformed U.S. water quality regulation into a remarkably more streamlined and simplified system with successful results. In addition to the technology standards, the drafters of the Clean Water Act held on to the historic water quality-based approach, despite its observed inadequacies. In an attempt to bridge the gap between discharges and clean water, dischargers were expected to comply with more stringent, individually-crafted effluent limitations based on water quality standards. This additional control tool is implemented only when technology-based controls are insufficient to meet beneficial uses. The process entails a few ostensibly straightforward steps: first, the state lists each impaired waterbody within its jurisdiction; second, the state designates a "beneficial use" for each waterbody; third, a Total Maximum Daily Load or "TMDL" for each waterbody is calculated based on the designated beneficial use; and finally, a portion of the load is allocated to each point or non-point source. However, the fundamental problem of TMDLs is that they must be translated into specific numerical discharge limitations for each source of pollution. This endeavor is often prohibitively expensive and extremely difficult given that every step of the regulatory process—from identifying and prioritizing impaired waterbodies to allocating emissions loads to measuring the program's success—suffers from insufficient and poor quality information. Monitoring data are needed to assess, enforce, evaluate and use as a baseline for modeling efforts. The task of collecting these emissions data—identifying polluters that are difficult to pinpoint, monitoring discharges that are stochastic and virtually impossible to track, and connecting diffuse effluents back to their sources—is so problematic that these discharges have been stamped "unobservable".
The paucity of information is often the result of another, more tangible limitation when implementing non-point source pollution abatement mechanisms: budgetary and administrative constraints. Funding the monitoring efforts as well as the staff time to adequately oversee water pollution control efforts is an obligatory, but often missing component in water management programs. Also, a lack of enforcement in areas where management practices are not protecting water quality remains a widespread problem throughout agricultural NPS programs .

While individual river basins and states have varying water quality issues and employ slightly different approaches to abate non-point source pollution, each bears the burden of these similar hindrances. Clearly, the challenges and complexities of non-point source water pollution are not amenable to the technology- and emission-based policy tools historically used. Current discussions on how to proceed with non-point source pollution abatement strangely and sadly mirror those occurring over forty years ago. In describing the difficulty of implementing water quality standards in the 1960s, Andreen presents several questions still debated today: How should regulators allocate the capacity of a stream to a multitude of diffuse dischargers? Should the allocations be recalculated every time there is a new or expanded discharge? What should be the boundaries of a receiving waterbody—an entire river system, or should each tributary be considered separately? Likewise, Houck describes the current state of U.S. non-point source pollution policy as "slid[ing] back into the maw of a program that Congress all but rejected in 1972, among other things, its uncertain science and elaborate indirection." Similar to the U.S., the first surge of European water legislation began in the 1970s. This "first wave" was characterized by seven different Directives, which were initiated by individual Member States with little coordination with the larger E.U. community. During the late 1990s, mounting criticism of the fragmented state of water policy drove the European Commission to draft a single framework to manage water issues. The resulting legislation, the Water Framework Directive (WFD), has been championed as "the most far-reaching piece of European environmental legislation to date". Adopted in December 2000, the WFD replaced the seven prior "first wave" directives. Just as the Clean Water Act passes down authority to states in the U.S., the WFD gives each Member State and its river basins the same responsibility. Under this "second wave," the WFD requires that River Basin Management Plans be established and updated every six years. The RBMPs specify how environmental and water quality standards will be met, allowing local authorities the flexibility to comply as they best see fit. The WFD mandates that all river basins achieve "good" overall quality, and that more stringent standards be applied to a specific subset of water bodies used for drinking, bathing and protected areas. Two additional requirements of the WFD are economic analyses of water use and public participation in the policy implementation process. The E.U. chose management at the river basin level, a hydrological and geographical unit, rather than by political boundaries, to encourage a more integrated approach to solving water quality problems. Another distinguishing aspect of the WFD is its "combined approach," which guides Member States' choice of policy tools. Similar to the U.S. CWA approach, technology controls based on Emission Limit Values, such as those embedded in the previous E.U. Integrated Pollution Prevention and Control Directive, are implemented first. The IPPC works similarly to the U.S. NPDES permit system, requiring all major industrial dischargers to obtain a permit and comply with specific discharge requirements. If these emissions- and technology-based instruments are insufficient to meet water standards, then Environmental Quality Standards are employed.
The Water Framework Directive provides opportunities and challenges for all actors involved—Member States, European Commission, and candidate countries .

Correlated measurements of the same task may be modeled using a Bayesian interpretation as well

Cell growth / viability assays are chemical indicators that correlate with viable cell number, such as metabolism or DNA / nuclei count, and can also be used to quantify the effect of media on cells. In chapter 5 we conducted many experiments with different assays and show the inter-assay correlations in Figure 1.3. Notice that no assay is perfectly correlated with any other assay, because they are collected with different methodologies and fundamentally measure different physical phenomena. For example, AlamarBlue measures the activity of the metabolism in the population of cells, so optimizing a medium based on this metric might end up simply increasing the metabolic activity of the cells rather than their overall number. As some of these measurements can be destructive / toxic to the cells, continuous measurements to collect data on the change in growth can be tedious. Collecting high-quality growth curves over time may be accomplished using image segmentation and automatic counting techniques. Using fluorescently stained cells and images, segmentation can be done using algorithms like those discussed. Cells may even be classified based on their morphology dynamically if enough training data is collected to create a generalizable machine learning model. Successfully quantifying the ability of media to grow cells forms the backbone of the novelty of this dissertation. The primary means by which this dissertation will improve cell culture media is through the application of various experimental optimization methods, often called design-of-experiments (DOE). The purpose of DOEs is to determine the best set of conditions to optimize some output by sampling a process for sets of conditions in an optimal manner. If an experiment is time / resource inefficient, then optimizing the conditions of a system may prove tedious. For example, doing experiments at just the lower and upper bounds of a 30-dimensional medium like DMEM requires 2^30 ≈ 10^9 experiments. This militates for methods that can optimize experimental conditions and explore the design space in as few experiments as possible. DOEs where samples are located throughout the design space to maximize their spread and diversity according to some distribution are called space-filling designs.
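
The space-filling designs surveyed next (Latin hypercubes, Sobol sequences) are straightforward to generate with modern numerical libraries. Below is a minimal sketch using SciPy's quasi-Monte Carlo module, with hypothetical component bounds standing in for a real media design space.

```python
import numpy as np
from scipy.stats import qmc

d = 30                                      # media components (dimensions)
sampler = qmc.LatinHypercube(d=d, seed=0)
unit_designs = sampler.random(n=50)         # 50 points in the unit hypercube

l_bounds = np.zeros(d)                      # assumed lower bounds (e.g., 0 g/L)
u_bounds = np.full(d, 2.0)                  # assumed upper bounds (e.g., 2 g/L)
designs = qmc.scale(unit_designs, l_bounds, u_bounds)

# A Sobol sequence is a drop-in alternative with different uniformity properties
sobol_designs = qmc.Sobol(d=d, seed=0).random_base2(m=6)   # 2^6 = 64 points
```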

The most popular method is the Latin hypercube, which is particularly useful for initializing training data for models and for sensitivity analysis. Maximin designs, where some minimum distance metric is maximized for a set of experiments, can also allow for diversity in samples, with the disadvantage that in high-dimensional systems the designs tend to be pushed to the upper and lower bounds. Thus, we may prefer a Latin hypercube design for culture media optimization because media design spaces may be >30 factors large. Uniform random samples, Sobol sequences, and maximum entropy filling designs, all with varying degrees of ease of implementation and space-filling properties, may also be used. It cannot be known a priori how many sampling points are needed to successfully model and optimize a design space, because the answer depends on the number of components in the media system, the degree of non-linearity, and the amount of noise expected in the response. Because of these limitations, DOE methods that sequentially sample the design space have gained traction; these are discussed in the next section. A more data-efficient DOE is to split up individual designs into sequences and use old experiments to inform the new experiments in a campaign. One sequential approach is to use derivative-free optimizers (DFOs), where only function evaluations y are used to sample new designs x. DFOs are popular because they are easy to implement and understand, as they do not require gradients. They are also useful for global optimization problems because they usually have mechanisms to explore the design space and avoid getting stuck in local optima. The genetic algorithm is a common DFO in which selection and mutation operators are used to find more fit combinations of genes. In Figure 1.7, notice that the GA was able to locate the optimal region of both problems regardless of the degree of multi-modality. [9] used a GA to optimize media for rifamycin B fermentation in bacteria, where the HPLC titer at the end of 9 days was used to select high-performing media combinations from nine metabolites for the next batch of experiments. They allowed for a 1% chance of mutation of each experiment and component to allow for global search.

They also discovered that the response space was multi-modal and had interactions between components, which confirmed the need for global optimization of fermentation and bio-processing problems. Another review discusses 17 cases in which GAs have improved media for different organisms for chemical fermentation, often by >50% yields, for problems of >10 media components. Particle swarm optimization is a population-based method that optimizes systems sequentially by varying x according to a velocity vector v. At the t-th iteration of the algorithm, a particle x will have the velocity update rule v_{t+1} = w v_t + c1 r1 (p_t − x_t) + c2 r2 (g_t − x_t), for random numbers r1, r2 and coefficients w, c1, c2, where p_t is the particle's best position found so far and g_t is the swarm's best position found so far. c1 and c2 parameterize the exploration-exploitation trade-off, similar to the mutation rate in the GA, and w represents the fraction of velocity saved for the next iteration t + 1. To implement this, one merely computes x_{t+1} = x_t + v_{t+1} for a large population of particles over time as the population gradually gravitates to the optimal designs. The Nelder-Mead simplex method, wherein a group of points is moved closer to better values via expansion and contraction steps, is also a popular DFO method. Nelder-Mead is a local optimizer and may be hybridized with other global DFO methods to improve convergence. While DFOs don't require gradient calculations and can usually optimize complex multi-modal optimization problems, they require hundreds, if not thousands, of experiments, so they are limited to fast-growing culture systems or computer experiments where evaluations are nearly costless. The most powerful experimental optimization technique is arguably the model-based sequential DOE, in which a response-surface model (RSM) of the relationship between the input x and output y data is trained, and new samples are constructed based on the predictions of the trained model. The newly collected data are then fed back into the model and used to generate another sequence of samples. Prior work discusses using combinations of screening DOEs and polynomial RSMs to optimize conditions for the fermentation of metabolites such as chitinase, γ-glutamic acid, polysaccharides, chlortetracycline and tetracycline, among 20 other metabolites from various organisms. This demonstrates the usefulness of RSMs for fermentation and culture optimization.
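
To make the velocity update above concrete, here is a minimal PSO minimizer over box bounds; it is a generic sketch of the algorithm described in the text, not code from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(f, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimize f over the box [lo, hi] with a basic particle swarm."""
    d = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, d))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_f.argmin()].copy()                 # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, d))
        # inertia plus pulls toward each particle's best and the swarm's best
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# usage on a multi-modal toy objective
objective = lambda z: np.sum(z**2) + np.sin(5 * z).sum()
best_x, best_f = pso(objective, np.full(5, -2.0), np.full(5, 2.0))
```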
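
Similarly, one round of the model-based sequential DOE just described can be sketched with a Gaussian process as the response-surface model (one of several model choices discussed below) and expected improvement as the score for new candidates. The data, kernel, and candidate pool here are illustrative assumptions, not the dissertation's actual pipeline.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(20, 5))          # past designs (toy stand-in)
y = -np.sum((X - 0.5) ** 2, axis=1) + 0.01 * rng.normal(size=20)  # toy response

# WhiteKernel lets the GP account for assay noise explicitly
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5) + WhiteKernel(),
                              normalize_y=True).fit(X, y)

def expected_improvement(candidates, best):
    mu, sd = gp.predict(candidates, return_std=True)
    z = (mu - best) / np.maximum(sd, 1e-9)
    return (mu - best) * norm.cdf(z) + sd * norm.pdf(z)

pool = rng.uniform(0, 1, size=(5000, 5))     # random candidate designs
next_design = pool[expected_improvement(pool, y.max()).argmax()]
```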

The primary limitation of polynomial RSMs is their inability to accurately model many factors at a time or systems with significant nonlinearity. Due to their generalizability to modeling different response surfaces, neural networks have been used to optimize bioreactor cultures and multi-objective protein storage conditions. Radial basis function models have been used to optimize yeast and C2C12 mammalian muscle cell culture growth media. Decision trees and neighborhood analysis have been used to optimize media for antibiotics and bacteria fermentation. An example of an RSM can be seen in Figure 1.8, where a radial basis function maps the input / output relationship in a nonlinear system, then a GA finds new optimal experiments. Over time the predicted contour comes to resemble the true function. While these RSMs tend to be more generalizable than polynomial and linear models, the low-data experimental campaigns common in fermentation and cell culture often obscure the differences between modeling techniques. Additionally, many of these RSM approaches do not take into account prior information about the system to speed up optimization. Due to the noisiness of fermentation data, it may be useful to consider noise explicitly in our process models. Gaussian process (GP) models suit both needs: they treat observation noise directly and can encode prior assumptions through their covariance functions. Known or unknown constraints can be incorporated into GPs as well. For example, a known constraint might be that growth must exceed some minimum value; an unknown constraint might be the existence of excessive foaming in bioreactors, which may be learned from data but is generally not known ahead of time. Multiple objectives, some of which may compete against one another, can be modeled and optimized using GPs, and correlations between tasks may be considered. By correlating measurements, fewer total experiments are often needed. Multi-objective versions of acquisition functions α, such as max-value entropy search and hypervolume improvement, exist to turn these GP predictions into a score for a variety of objectives. Fermentation and cell culture systems are often subject to growth-versus-cost trade-offs, so multi-objective Bayesian methods are useful here. Because most bio-processing experiments can be done using multiple bioreactors or cell culture plates, designing multiple optimal experiments at a time is often necessary; prior work shows how, using Monte Carlo samples of the GP model, arbitrary numbers of experiments can be designed simultaneously. Knowledge that systems may exhibit separate but interacting local and global responses may militate for additive GPs. Experimenters with access to separate computer simulations or algebraic process models may pose their GPs as composites of deterministic or other modeled functions and speed up optimization. Bayesian models may even fuse historical data sets together to estimate optimal model parameters with constrained uncertainty, and could perhaps be used for optimization as well. More closely related to cell culture media optimization, GPs have been used in a Bayesian optimization scheme to optimize C2C12 growth media for proliferation maximization and cost minimization in chapter 5 of this dissertation.

This dissertation is divided into roughly two equal parts. The first part comprises the development of a radial basis function / genetic algorithm sequential DOE scheme. It drew heavily on earlier work in which a sequential DOE technique was developed on the principle of local random search in areas of high-performing media.

The algorithm was also dynamic: it converged on high-performing results and selectively searched the design space when good results were not forthcoming. Additionally, previous work in our lab provided the framework for a sequential DOE based on a truncated GA. This modified GA incorporates uncertainty in the optimal samples found by halting algorithm convergence in proportion to the amount of clustering around an optimum the GA finds. By hybridizing these two methods, a DOE algorithm called NNGA-DYCORS was developed that solved various computational optimization problems better than either method alone. It was used to optimize a 30-dimensional medium for serum-containing C2C12 cell culture, with the metric of growth being AlamarBlue reduction after 48 hrs of growth in 96-well plates. Cells were seeded at the same time, at the same concentration, and from the same frozen inoculum so that all experiments were roughly the same. While it was successful at finding media that maximized this metric, the optimal medium did not grow as many cells over additional passages. To fix this underlying problem, multiple passages needed to be incorporated into the DOE process. This is a very time-consuming process: each passage takes multiple days and requires many more physical manipulations than simple chemical assays, which introduces opportunities for contamination and makes manual experimentation difficult. To solve this, chemical assays were supplemented with small amounts of manual multi-passage cell counts in a multi-information source Bayesian GP model, which was used to successfully optimize a 14-dimensional serum-containing medium for C2C12 cells. Due to the presence of multi-passage data, the final optimal medium grew cells robustly over four passages, provided nearly twice the number of cells at the end of each passage relative to the DMEM + 10% FBS control and the traditional DOE method, and did so at nearly the same cost in terms of media components. In the final chapter, the multi-information source GP model was extended to optimize a 26-dimensional serum-free medium based on the Essential 8 media, using a multi-objective metric that improves cell growth while minimizing medium cost. Using this Bayesian metric, a broad set of media samples along the trade-off curve of media quality and cost was found, showing that a designer can be given options in media optimization. In particular, one medium resulted in higher growth over five passages while the control and Essential 8 lagged. We identify two important future considerations for this work. First, the data collection process, which is the major innovation of this dissertation, needs to be made more robust by actually capturing the long-term growth dynamics of the cells. Fluorescent and bright-field imaging, used to quantify the temporal and spatial changes of the cells, may improve over whole-well AlamarBlue and LIVE/DEAD stains by counting individual cells and collecting more fine-grained growth curves.
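
As a sketch of the image-based counting proposed here, a simple threshold-and-label pipeline (assuming scikit-image and a fluorescent nuclei channel; the filename and size threshold are hypothetical) illustrates how per-well counts could be automated into growth curves:

```python
import numpy as np
from skimage import io, filters, measure, morphology

img = io.imread("well_A1_nuclei.tif").astype(float)   # hypothetical image

thresh = filters.threshold_otsu(img)                  # global threshold
mask = morphology.remove_small_objects(img > thresh, min_size=30)
labels = measure.label(mask)                          # connected components

count = labels.max()                                  # one label per nucleus
areas = [r.area for r in measure.regionprops(labels)]
print(f"counted {count} nuclei; median area {np.median(areas):.0f} px")
```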

The United States and the EU differ in their philosophy and practice for the regulation of PMP products

The statute-conformance review procedures practiced by the regulatory agencies require considerable time because the laws were established to focus on patient safety, product quality, verification of efficacy, and truth in labeling. The median times required by the FDA, EMA, and Health Canada for full review of NDA applications were reported to be 322, 366, and 352 days, respectively. Collectively, typical interactions with regulatory agencies will add more than 1 year to a drug development program. Although these regulatory timelines are the status quo during normal times, they are clearly incongruous with the need for rapid review, approval, and deployment of new products in emergency use scenarios, such as emerging pandemics.

Plant-made intermediates, including reagents for diagnostics, antigens for vaccines, and bio-active proteins for prophylactic and therapeutic medical interventions, as well as the final products containing them, are subject to the same regulatory oversight and marketing approval pathways as other pharmaceutical products. However, the manufacturing environment as well as the peculiarities of the plant-made active pharmaceutical ingredient can affect the nature and extent of requirements for compliance with various statutes, which in turn will influence the speed of development and approval. In general, the more contained the manufacturing process and the higher the quality and safety of the API, the easier it has been to move products along the development pipeline. Guidance documents on quality requirements for plant-made biomedical products exist and have provided a framework for development and marketing approval. Upstream processes that use whole plants grown indoors under controlled conditions, including plant cell culture methods, followed by controlled and contained downstream purification, have fared best under regulatory scrutiny. This is especially true for processes that use non-food plants such as Nicotiana species as expression hosts.

The backlash over the ProdiGene incident of 2002 in the United States has refocused subsequent development efforts on contained environments. In the United States, field-based production is possible and even practiced, but such processes require additional permits and scrutiny by the United States Department of Agriculture. In May 2020, to encourage innovation and reduce the regulatory burden on the industry, the USDA's Animal and Plant Health Inspection Service revised legislation covering the interstate movement or release of genetically modified organisms into the environment in an effort to regulate such practices with higher precision [SECURE Rule revision of 7 Code of Federal Regulations 340]. The revision will be implemented in steps and could facilitate the field-based production of PMPs. In contrast, the production of PMPs using GMOs or transient expression in the field comes under heavy regulatory scrutiny in the EU, and several statutes have been developed to minimize environmental, food, and public risk. Many of these regulations focus on the use of food species as hosts. The major perceived risks of open-field cultivation are the contamination of the food/feed chain, and gene transfer between GM and non-GM plants. This is true today even though containment and mitigation technologies have evolved substantially since those statutes were first conceived, with the advent and implementation of transient and selective expression methods; new plant breeding technologies; use of non-food species; and physical, spatial, and temporal confinement. In the United States, regulatory scrutiny is at the product level, with less focus on how the product is manufactured. In the EU, much more focus is placed on assessing how well a manufacturing process conforms to existing statutes. Therefore, in the United States, PMP products and reagents are regulated under pre-existing sections of the United States CFR, principally under various parts of Title 21, which also apply to conventionally sourced products. These include current good manufacturing practice covered by 21 CFR Parts 210 and 211, good laboratory practice toxicology, and a collection of good clinical practice requirements specified by the ICH and accepted by the FDA.

In the United States, upstream plant cultivation in containment can be practiced using qualified methods to ensure consistency of vector, raw materials, and cultivation procedures and/or, depending on the product, under good agricultural and collection practices. For PMP products, cGMP requirements do not come into play until the biomass is disrupted in a fluid vehicle to create a process stream. All process operations from that point forward, from crude hydrolysate to bulk drug substance and final drug product, are guided by 21 CFR 210/211. In Europe, bio-pharmaceuticals, regardless of manufacturing platform, are regulated by the EMA and, in the United Kingdom, by the Medicines and Healthcare products Regulatory Agency. Pharmaceuticals from GM plants must adhere to the same regulations as all other biotechnology-derived drugs. These guidelines are largely specified by the European Commission in Directive 2001/83/EC and Regulation No 726/2004. However, upstream production in plants must also comply with additional statutes. Cultivation of GM plants in the field constitutes an environmental release and has been regulated by the EC under Directive 2001/18/EC and, if the crop can be used as food/feed, Regulation (EC) No 1829/2003. The production of PMPs using whole plants in greenhouses or cell cultures in bioreactors is regulated by the "Contained Use" Directive 2009/41/EC, whose requirements are far less stringent than those for an environmental release and do not necessitate a fully-fledged environmental risk assessment. Essentially, the manufacturing site is licensed for contained use and production proceeds in a similar manner as a conventional facility using microbial or mammalian cells as the production platform. With respect to GMP compliance, the major differentiator between the regulation of PMP products and the same or similar products manufactured using other platforms is the upstream production process. This is because many of the DSP techniques are product-dependent and, therefore, similar regardless of the platform, including most of the DSP equipment, with which regulatory agencies are already familiar. Of course, the APIs themselves must be fully characterized and shown to meet designated criteria in their specification, but this applies to all products regardless of source.

During a health emergency, such as the COVID-19 pandemic, regulatory agencies worldwide have re-assessed guidelines and restructured their requirements to enable the accelerated review of clinical study proposals, to facilitate clinical studies of safety and efficacy, and to expedite the manufacturing and deployment of re-purposed approved drugs as well as novel products.

These revised regulatory procedures could be implemented again in future emergency situations. It is also possible that some of the streamlined procedures that can expedite product development and regulatory review and approval will remain in place even in the absence of a health emergency, permanently eliminating certain redundancies and bureaucratic requirements. Changes in the United States and European regulatory processes are highlighted here, with a cautionary note that these modified procedures are subject to constant review and revision to reflect an evolving public health situation.

In the spring of 2020, the FDA established a special emergency program for candidate diagnostics, vaccines, and therapies for SARS-CoV-2 and COVID-19. The Coronavirus Treatment Acceleration Program (CTAP) aims to utilize every available method to move new treatments to patients in need as quickly as possible, while simultaneously assessing the safety and efficacy of new modes of intervention. As of September 2020, CTAP was overseeing more than 300 active clinical trials for new treatments and was reviewing nearly 600 preclinical-stage programs for new medical interventions. Responding to pressure for procedural streamlining and rapid response, the FDA refocused staff priorities, modified its guidelines to fit emergency situations, and achieved a remarkable set of benchmarks. In comparison to the review and response timelines described in the previous section, the FDA's emergency response structure within CTAP is exemplary and, as noted, these changes have successfully enabled the rapid evaluation of hundreds of new diagnostics and candidate vaccine and therapeutic products. The European Medicines Agency has established initiatives for the provision of accelerated development support and evaluation procedures for COVID-19 treatments and vaccines. These initiatives generally follow the EMA Emergent Health Threats Plan published at the end of 2018. Similar to the FDA's CTAP, the EMA's COVID-19 Pandemic Emergency Task Force aims to coordinate and enable fast regulatory action during the development, authorization, and safety monitoring of products or procedures intended for the treatment and prevention of COVID-19. Collectively, this task force and its accessory committees are empowered to rapidly address emergency use requests. Although perhaps not as dramatic as the aspirational time reductions established by the FDA's CTAP, the EMA's refocusing of resources and shorter response times to accelerate the development and approval of emergency use products are nevertheless laudable. In the United Kingdom, the MHRA has also revised customary regulatory procedures to conform with COVID-19 emergency requirements by publishing "MHRA regulatory flexibilities resulting from coronavirus".

During a public health emergency, one can envision the preferential utilization of existing indoor manufacturing capacity, at least in the near term. Processes making use of indoor cultivation and conventional purification can be scrutinized more quickly by regulatory agencies due to their familiarity, resulting in shorter time-to-clinic and time-to-deployment periods. Although many, perhaps most, process operations will be familiar to regulators, there are some peculiarities of plant-based systems that differentiate them from conventional processes and, hence, require the satisfaction of additional criteria.
Meeting these criteria is in no way insurmountable, as evidenced by the rapid planning and implementation of PMP programs for SARS-CoV-2/COVID-19 by PMP companies such as Medicago, iBio, and Kentucky Bio-processing.

During emergency situations when speed is critical, transient expression systems are more likely to be used than stable transgenic hosts, unless GM lines were developed in advance and can be activated on the basis of demand. The vectors used for transient expression in plants are non-pathogenic in mammalian hosts and environmentally containable if applied indoors, and by now they are well known to the regulatory agencies. Accordingly, transient expression systems have been deployed rapidly for the development of COVID-19 interventions. The vaccine space has shown great innovation, and the World Health Organization has maintained a database of COVID-19 vaccines in development, including current efforts involving PMPs. For example, Medicago announced the development of its VLP-based vaccine against COVID-19 in March 2020, within 20 days of receiving the virus genome sequence, and initiated a Phase I safety and immunogenicity study in July. If successful, the company expects to commence Phase II/III pivotal trials by late 2020. Medicago is also developing therapeutic antibodies for patients infected with SARS-CoV-2, and this program is currently in preclinical development. Furthermore, iBio has announced the preclinical development of two SARS-CoV-2 vaccine candidates, one VLP and one subunit vaccine. Kentucky Bio-processing has announced the production and preclinical evaluation of a conjugate TMV-based vaccine and has requested regulatory authorization for a first-in-human clinical study. These efforts required only a few months to reach these stages of development and are a testament to the rapid expression, prototyping, and production advantages offered by transient expression.

The PMP vaccine candidates described above are all being developed by companies in North America. The rapid translation of PMPs from bench to clinic reflects the conformance of chemistry, manufacturing, and control procedures on one hand, and environmental safety and containment practices on the other, with existing regulatory statutes. This legislative system has distinct advantages over the European model, offering a more flexible platform for discovery, optimization, and manufacturing. New products are not evaluated for compliance with GM legislation as they are in the EU and the United States but are judged on their own merits. In contrast, development programs in the EU face additional hurdles even when using well-known techniques, and even additional scrutiny if new plant breeding technologies are used, such as the CRISPR/Cas9 system or zinc finger nucleases.

Process validation in manufacturing is a necessary but resource-intensive measure required for marketing authorization. Following the publication of the Guidance for Industry "Process Validation: General Principles and Practices," and the EU's revision of Annex 15 to Directive 2003/94/EC for medicinal products for human use and Directive 91/412/EEC for veterinary use, validation became a life-cycle process with three principal stages: process design, process qualification, and continuous process verification. During emergency situations, the regulatory agencies have authorized the concurrent validation of manufacturing processes, including design qualification, installation qualification, operational qualification, and performance qualification.

Size of household landholding is included in the model to explore the effects of scale on fertilizer use

To provide a more accurate assessment of the household and environmental factors associated with household use of inorganic fertilizer, we undertake econometric analysis to explore determinants of fertilizer adoption and use intensity. Limited dependent variable models are often used to evaluate farmers' decision-making processes concerning the adoption of agricultural technologies. Those models are based on the assumption that farmers are faced with a choice between two alternatives and that the choice depends upon identifiable characteristics. In adopting new agricultural technologies, the decision maker is also assumed to maximise expected utility from using a new technology subject to some constraints. In many cases a Probit or Logit model is specified to explain whether or not farmers adopt a given technology, without considering the intensity of use of the technology. The Probit or Logit models cannot handle the case of adoption choices that have a continuous value range. This is the typical case for fertilizer adoption decisions, where some farmers apply positive levels of fertilizer while others have zero application. Intensity of use is a very important aspect of technology adoption because it is not only the choice to use but also how much to apply that is often more important. The Tobit model of Tobin can be used to handle such a situation. However, the Tobit model attributes the censoring to a standard corner solution, thereby imposing the assumption that non-adoption is attributable to economic factors alone. A generalization of the Tobit model overcomes this restrictive assumption by accounting for the possibility that non-adoption is due to non-economic factors as well. Originally formulated by Cragg, the double-hurdle model assumes that households make two sequential decisions with regard to adoption and intensity of use of a technology. Each hurdle is conditioned by the household's socio-economic characteristics. In the double-hurdle model, a different latent variable is used to model each decision process.
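
Concretely, the two-equation structure just described yields the likelihood of Cragg's independent double-hurdle model. The sketch below, with simulated data and illustrative covariates (not the survey variables used in this chapter), estimates both hurdles jointly by maximum likelihood:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def cragg_negloglik(params, Z, X, y):
    """Negative log-likelihood of Cragg's independent double-hurdle model."""
    kz, kx = Z.shape[1], X.shape[1]
    gamma, beta = params[:kz], params[kz:kz + kx]
    sigma = np.exp(params[-1])            # log-parameterized to keep sigma > 0
    zg, xb = Z @ gamma, X @ beta
    pos = y > 0
    # zero observations: fail the participation hurdle OR latent demand <= 0
    p_zero = 1.0 - norm.cdf(zg[~pos]) * norm.cdf(xb[~pos] / sigma)
    ll_zero = np.log(np.clip(p_zero, 1e-12, None))
    # positive observations: pass hurdle 1, truncated-normal density for hurdle 2
    ll_pos = (np.log(np.clip(norm.cdf(zg[pos]), 1e-12, None))
              + norm.logpdf(y[pos], loc=xb[pos], scale=sigma)
              - np.log(np.clip(norm.cdf(xb[pos] / sigma), 1e-12, None)))
    return -(ll_zero.sum() + ll_pos.sum())

# toy usage with simulated data (covariates are illustrative only)
rng = np.random.default_rng(0)
n = 500
Z = np.column_stack([np.ones(n), rng.normal(size=n)])   # participation covariates
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intensity covariates
d = (Z @ np.array([0.3, 0.8]) + rng.normal(size=n)) > 0 # hurdle 1
ystar = X @ np.array([1.0, 0.5]) + rng.normal(size=n)   # hurdle 2 latent demand
y = np.where(d & (ystar > 0), ystar, 0.0)

res = minimize(cragg_negloglik, x0=np.zeros(Z.shape[1] + X.shape[1] + 1),
               args=(Z, X, y), method="BFGS")
print(res.x)  # [gamma, beta, log(sigma)]
```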

The first hurdle is a sample selection equation estimated with a Probit model. It is important to first define what is meant by fertilizer adoption. For Probit estimation, a household is regarded as an adopter of fertilizer if it was found to be using any inorganic fertilizer. The dependent variable in this model is a binary choice variable, which is 1 if a household used inorganic fertilizer and 0 otherwise. For the second hurdle, fertilizer adoption becomes continuous and the dependent variable is the amount of fertilizer applied per acre of cultivated land by a household. There is no firm economic theory that dictates the choice of explanatory variables to include in the double-hurdle model to explain the technology adoption behaviour of farmers. Nevertheless, adoption of agricultural technologies is influenced by a number of interrelated components within the decision environment in which farmers operate. For instance, Feder et al. identified lack of credit, limited access to information, aversion to risk, inadequate farm size, insufficient human capital, tenure arrangements, absence of adequate farm equipment, chaotic supply of complementary inputs and inappropriate transportation infrastructure as key constraints to the rapid adoption of innovations in less developed countries. However, not all factors are equally important in different areas and for farmers with different socio-economic situations. In this section, we discuss the appropriateness of the different variables considered in our model. The household characteristics deemed to influence fertilizer adoption in this study include the household head's characteristics, household size and dependency ratio. The conventional approach to adoption studies considers age to be negatively related to adoption, based on the assumption that with age farmers become more conservative and less amenable to change. On the other hand, it is also argued that with age farmers gain more experience and acquaintance with new technologies and hence are expected to have a higher ability to use new technologies more efficiently. Education enhances the allocative ability of decision makers by enabling them to think critically and use information sources efficiently. However, since fertilizer is not a new technology, education is not expected to have strong effects on its adoption.

The effect of household size on fertilizer adoption can be ambiguous. It can hinder adoption in areas where farmers are very poor and financial resources are used for other family commitments, with little left for the purchase of farm inputs. On the other hand, it can also be an incentive for fertilizer adoption, as more agricultural output is required to meet the family's food consumption needs. Institutional and infrastructural factors considered important for fertilizer adoption in this study include access to credit, farm size, presence of a cash crop, distance to the fertilizer market, distance to the extension service provider and distance to a motorable road. The size of landholding is expected to be positively correlated with fertilizer adoption, as farmers with bigger landholdings are assumed to have the ability to purchase improved technologies and the capacity to bear risk if the technology fails. However, the well-documented tendency for management intensity to decline with scale in tropical Africa suggests that land size will be negatively correlated with the intensity of fertilizer use. Lack of access to cash or credit does significantly limit the adoption of fertilizer, but the choice of an appropriate variable to measure access to credit remains problematic. In a discussion of the limitations, challenges and opportunities for improving technology adoption micro-studies, Doss outlines the different measures often used but cautions about the inherent problems of these methods, especially their endogeneity.

Doss suggests that whether a farmer had ever received cash credit is a better measure of credit access than whether there is a source of credit available to the farmer. This study measures credit access by looking at whether a household received any credit during a cropping year. The presence of a major cash crop in the household is included in the model to capture the influence of commodity-based input delivery systems on fertilizer adoption. In Kenya, commodities such as tea, coffee and sugar cane have input credit schemes for farmers. Because input markets are widely distributed, farmers face travel costs when they buy inputs. Since the volumes of fertilizer purchased by smallholder farmers are not high and the location of the fertilizer market can be inconvenient, the cost of travelling to purchase fertilizer is probably fixed over the quantities purchased. The distance to the fertilizer market is thus expected to affect the decision of whether or not to use fertilizer, but not the intensity of use. Exposure to information reduces subjective uncertainty and, therefore, increases the likelihood of adoption of new technologies. Various approaches have been used to capture information, including: determining whether or not the farmer was visited by an extension agent in a given time; whether or not the farmer attended demonstration tests for new technologies by extension agents; and the number of times the farmer has participated in on-farm tests. Due to the absence of such data for this study, we use distance to the extension service provider to capture the influence of information on adoption. To explore the impact of infrastructure, which influences market access for both inputs and outputs, on fertilizer use, we include the distance to a motorable road as a variable in the model. To measure the influence of agro-ecological factors on fertilizer adoption, we include dummies for agro-ecological zones. The high potential maize zone is used as the base. The Coastal, Eastern and Western lowlands and Marginal rain shadow receive less rainfall and are prone to prolonged and frequent dry spells compared to the Central and Western highlands, Western transitional and High potential maize zone. Agro-ecology variables pick up variation in rainfall, soil quality, and production potential. These variables may also pick up variation unrelated to agricultural potential, such as infrastructure and availability of markets for inputs and outputs. A summary description of the explanatory variables used in the model is presented in Table 1.

Generally, the proportion of sampled households using fertilizer rose from 64% in 1997 to 76% in 2007. However, these proportions vary considerably across agro-ecological zones. The High Potential Maize Zone, Western Highlands and Central Highlands had the highest proportions of households applying fertilizer. On the other hand, the proportion of households using fertilizer has remained relatively low in the drier regions of the Coastal Lowlands, Western Lowlands, Marginal Rain Shadow and Eastern Lowlands. A notable increase in the proportion of households using fertilizer in Western Transitional was observed: from 58% in 1997 to 88% in 2007. Trends in fertilizer use by cultivated land size are presented in Table 3. Landholding size is considered one of the indicators of wealth in Kenya. Two observations are made on the trends.
First, across all the panel years the proportion of households adopting fertilizer increased with cultivated land size. This may indicate that households with larger landholdings have greater ability to acquire and use fertilizer. Second, the proportion of households using fertilizer increased between 1997 and 2007 across all categories of cultivated land size.

A more detailed analysis of fertilizer use on selected crops across the panel period is presented in Table 4. The number of households producing maize has remained high and roughly constant over the panel period, pointing to the importance smallholder farmers attach to maize.

The proportion of these households using fertilizer on maize increased consistently over the panel period, from 57% in 1997 to 71% in 2007. In contrast, the intensity of fertilizer application on maize has fluctuated between 55 kg and 60 kg per acre. It is important to note that the application rates reported here are far below those recommended for maize by the Kenya Agricultural Research Institute (KARI): 50 kg of DAP and 60 kg of CAN per acre, a total of 110 kg. The proportion of households applying fertilizer on coffee declined by 16% between 1997 and 2007. Similarly, the fertilizer application rate on coffee plummeted by 20% over the same period. A closer look reveals that the application rate declined steadily from 364 kg/acre in 2000 to 147 kg/acre in 2007, a fall of roughly 60% in a span of seven years. The gloomy picture in fertilizer use on coffee can be attributed to two main factors: alleged mismanagement of the coffee cooperatives, which are the main channels through which members receive their fertilizer, and poor international coffee prices. Mismanagement in the cooperatives has led some farmers to abandon coffee production, while others have opted to buy fertilizer directly from private traders. The latter are disadvantaged in that they no longer have access to the input credit facilities the cooperatives offered when the cooperative movement was active and efficiently managed.

With respect to tea, the fertilizer application rate declined from 385 kg/acre in 1997 to 371 kg/acre in 2007, though this decline is marginal. The proportion of tea-growing households using fertilizer on tea has, on the other hand, increased from 84% in 1997 to 98% in 2007. The fertilizer distribution system in the tea sector is the reason for this impressive performance: the Kenya Tea Development Agency (KTDA) supplies fertilizer on credit to smallholder tea farmers and then deducts the cost plus interest from their tea deliveries, which KTDA sells on the farmers' behalf.

Fertilizer adoption on sugarcane has shown an impressive increase over the panel period: the proportion of households using fertilizer grew from 29% in 1997 to 69% in 2007. However, the application rate has fluctuated over the study period. Increased fertilizer adoption in smallholder sugarcane farming can be attributed to the provision of fertilizer and other inputs on credit to smallholder cane farmers by the cooperatives to which they belong. The dwindling application rate, on the other hand, can be attributed to inadequate supply of fertilizer by the cooperatives relative to farmers' demand, or to farmers diverting fertilizer acquired from the cooperatives away from sugarcane to other crops. Ariga et al. observed that some of the fertilizer acquired under cooperative schemes for cash crops such as coffee and sugarcane is also appropriate for maize and most horticultural crops, making some diversion to food crops likely.
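As a quick check on the figures above, simple arithmetic on the rates reported in the text gives:

\[
\frac{364 - 147}{364} \approx 0.60, \qquad \frac{364 - 147}{7\ \text{years}} \approx 31\ \text{kg/acre per year},
\]
\[
\frac{55\text{–}60\ \text{kg/acre observed on maize}}{110\ \text{kg/acre recommended}} \approx 50\text{–}55\%.
\]

That is, coffee application rates fell by about 60% between 2000 and 2007, an average loss of roughly 31 kg/acre per year, while observed maize application rates run at only about half the KARI recommendation.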

The minimum number of years of coverage required to receive a full pension was also increased

The parallels between the ways that farmers defend their policies and thwart unwanted policy changes at the domestic and EU levels can be made clear by looking at a case in which a national government attempted to impose new costs on its agricultural community without offering compensation. In 2013, Socialist French President François Hollande attempted to implement the so-called “eco tax” first put forward by his conservative predecessor, Nicolas Sarkozy. The eco tax was intended to promote greener commercial transportation by taxing heavy vehicles. Under the plan, any vehicle over 3.5 tons would be charged a flat rate of €0.13 per kilometer traveled on the 15,000 kilometers of roads included in the scheme. The government expected the tax to generate over €1 billion in revenue annually, and it was slated to come into effect on 1 January 2014. The proposal was immediately met with criticism from the main French farmers’ organization, the FNSEA. The organization described the tax as an “usine à gaz”, literally a “gas factory”, a French idiom for a contraption with pipes running everywhere; through this turn of phrase, the FNSEA meant to convey that the eco tax was an overly complicated scheme with little actual value or payoff. The FNSEA argued that the tax would place a significant burden on the agricultural community, particularly farmers in Brittany, who had suffered significantly in the financial crisis, and demanded that it be suspended immediately. Other critics raised concerns that Breton farmers might be driven out of business by higher transportation costs. The FNSEA also warned that French goods would pass through the tax gates more often than trucks carrying foreign goods, putting French farmers at a disadvantage relative to goods arriving from abroad. Xavier Beulin, the leader of the FNSEA, promised immediate action against the proposal, directing members to target the “portiques”, the overhead gantries that were intended to scan trucks as they passed underneath.
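A rough plausibility check on the government's revenue projection, a back-of-envelope calculation using only the figures above rather than any official estimate, runs as follows:

\[
\frac{€1\times 10^{9}\ \text{per year}}{€0.13\ \text{per km}} \approx 7.7\times 10^{9}\ \text{taxed vehicle-km per year},
\]
\[
\frac{7.7\times 10^{9}}{15{,}000\ \text{km} \times 365\ \text{days}} \approx 1{,}400\ \text{heavy-vehicle passages per kilometer of network per day}.
\]

In other words, the €1 billion projection implicitly assumed on the order of 1,400 taxable heavy-vehicle passages per kilometer of the tolled network each day.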

Beulin called on farmers from other parts of France, even from areas without the tax scanners, to join the protests. The call was successful, and a wave of angry protests erupted in Brittany and across France. In Brittany, the heart of the demonstrations, protesters gathered in main town squares, many wearing red caps, or bonnets rouges, in reference to a 17th-century revolt against a stamp tax proposed by Louis XIV. Some protesters threw stones, iron bars and potted chrysanthemums at riot police, while others destroyed the electronic scanners intended to collect the fee from passing trucks. The protesters included not just farmers but also members of the broader public, some rallying against taxes generally and some supporting the farmers specifically. In addition to the violent actions in Brittany, farmers elsewhere blocked roads with their tractors, including around Paris. Despite the disruption these protests caused to the daily life of the average French citizen, the farmers faced no public backlash, a further indication of the deep support for and connections between farmers and urban France. Indeed, public polling on the image of farmers revealed a strong, positive view: according to a 2014 survey conducted shortly after the mass protests, just 26% of respondents were willing to describe farmers as selfish, only 16% agreed that farmers were violent, and a resounding 80% agreed that farmers were trustworthy. After Prime Minister Jean-Marc Ayrault met with local officials from Brittany, the government proposed to “suspend” the tax until January. This concession, though expected to cost the government €800 million in revenue, was seen as insufficient, and tens of thousands of protesters continued to gather in the epicenter of resistance, the town square of Quimper in Brittany. The tax was finally suspended indefinitely, pending a new proposal from the government.

France’s eco tax, then, like efforts to change CAP income support systems or greening policies, demonstrates that it is nearly impossible to impose new costs on farmers without some degree of compensation or widespread exemptions. For example, new CAP greening standards that are costly for farmers to meet are typically coupled with subsidies for compliance. When some form of compensation is not offered, the reform is almost certain to be defeated. The eco tax thus had little chance of success, given that farmers were offered no compensation in exchange for the new cost being imposed on them. In June 2014, the Hollande government unveiled the final version of the eco tax plan, now called “truck tolls”. The new plan applied only to trucks weighing 3.5 tons or more and covered just 4,000 kilometers of road, as against 15,000 kilometers in the original plan. In addition, all proposed roads in Brittany, the epicenter of the protests, were exempted from the tolls, as were trucks carrying agricultural goods, milk collection vehicles and circus-related traffic. As a result of the transportation exemptions and the significantly smaller area of coverage, the toll is expected to generate only a third of the revenue of the original plan.

The French eco tax example shares much in common with CAP reform, particularly in the area of environmental policy. Proposed environmental policies in the CAP often mean new costs for farmers, who are forced to conform to stricter standards and modify their farming methods in some way. These attempted reforms are virtually always modified by farmers in one of two ways: by extracting a new or additional form of compensation for meeting the rules, or by compelling reformers to adopt exemptions, often so extensive that barely any farmers are subject to the new rules. In the case of the French eco tax, farmers followed the latter course: faced with a tax that would have imposed new financial burdens on producers, they successfully compelled the government to exempt agriculture entirely. The victory is all the more significant because these exemptions cost the government badly needed tax revenue at a time of austerity. The successful campaign against the eco tax highlights some of the new sources of power that farmers have developed. Organizations were one important source of power.

The FNSEA demonstrated the ability to coordinate its membership and to rely on regional branches to place pressure on both national and local politicians. In the fight against the tax, the FNSEA deployed multiple tactics to influence the policy-making process, mobilizing members for public demonstrations while simultaneously lobbying local and national officials. The protesting French farmers also benefited from a sympathetic public that did not begrudge the massive disruptions caused by demonstrations and blockades. While French farmers were able to use their powerful organizations to avoid a new, uncompensated tax, the same cannot be said of other groups. At virtually the same time that farmers were thwarting a new tax, a series of austerity-driven pension reforms went ahead. Unlike the case of the eco tax, protests did nothing to stop the reforms, and the policy changes were adopted despite widespread civil unrest. In 2010, then-President Nicolas Sarkozy proposed a series of reforms to the French pension system, raising the retirement age from 60 to 62 and increasing the age at which one qualifies for a full pension from 65 to 67. In addition, the number of years of required social security contributions increased from 40.5 to 41.5. In response, nearly 3 million people took to the streets; plane and train travel was severely disrupted, and other sectors of the economy were virtually shut down as the major unions called strikes. Fuel shortages were a perpetual problem during the protests, as dock workers went on strike, leaving petrol stranded at ports, and schools, ports and airports were blockaded by demonstrators. In this case, however, coordinated protest could not compel the government to roll back the reforms. Just a few years later, in 2014, Sarkozy’s successor, François Hollande, enacted further reforms to the French pension system: contribution rates for both employers and employees were raised, a previously tax-exempt supplement for retirees who had raised three or more children was made subject to taxation, and the number of years of required social security contributions was increased from 41.5 to 43. While France is generally viewed as farmer-friendly, the French case is not an outlier; looking at other Western European countries, a similar pattern emerges. Pension cuts were imposed, while national discretionary agricultural spending remained virtually untouched. Indeed, across Europe, pensions were significantly reformed in the wake of the 2008 financial crisis, placing new financial burdens on the average worker. The contrast between pension policy and agricultural expenditure is all the more glaring when the broader context is taken into account: less than two percent of the population benefits from agricultural support policies, while all citizens are current or future pensioners.

Current spending levels are not a good indicator of reform, since much pension spending is locked in by decisions made decades ago. In the case of pensions, cuts are best identified by increases in the minimum retirement age or downward cost-of-living adjustments. Such reforms occurred in each of the four country cases, as summarized in Table 7.1. Germany reformed its pensions in 2007, just before the onset of the financial crisis, raising the retirement age from 65 to 67. In the UK, reforms raised the retirement age from 66 to 67, and new reforms also increased the minimum number of years of contributions required to qualify for a full pension from 30 to 35. A 2013 Dutch pension reform raised the minimum retirement age to 65 for workers currently under the age of 55.

While pensions were being cut across Europe, farmers were spared. At the EU level, in the first CAP reform after the financial crisis, spending on the CAP was not cut; instead, money was taken out of other areas in order to channel more support to farmers. This reallocation of funds back into farming happened despite a stated objective of directing money away from agriculture and toward other objectives, such as improving the provision of high-speed internet. Spending on farmers was also preserved at the domestic level. European national governments spend some money on agriculture outside the CAP, and this national financing comes via three main avenues: top-ups of Pillar 1 direct income payments; co-financing of Pillar 2 programs; and additional state aid payments to farmers by their national governments. Figure 7.1 tracks national agricultural expenditure as reported by the European Union in its annual statistical yearbook.

The second mini case in this conclusion extends my claims about the politics of agricultural policy reform and the influence of the farming community beyond Europe, to Japan. Like Europe, Japan has long been committed to providing generous economic support to farmers in the form of subsidies, direct income payments and protectionist trade policy. As in Europe, this support has persisted despite near-simultaneous declines in the sector’s size and contribution to GDP. Figure 7.2 illustrates the decline in agriculture’s share of GDP in Japan, France and the Netherlands, the latter two being the European Union’s top agricultural exporters.

Like its European counterparts, Japan has seen agriculture’s contribution to GDP drop rapidly over the past fifty-plus years. The economic decline of Japan’s agricultural sector has been quite similar to, if not more rapid than, the post-war decline of agriculture in Europe’s leading exporters. The decline in agricultural employment over roughly the same period was also dramatic, and even more so in Japan: in half a century the sector went from employing nearly 40% of the population to under 5%, as Figure 7.3 illustrates. As in Europe, Japan’s agricultural sector has shrunk in size and economic importance since the end of World War II: in both of Europe’s top exporting countries and in Japan, agriculture’s share of GDP is under 2%, and the share of the population employed in the sector has long been below 5%. Yet despite this decline, agricultural support has remained robust in both Europe and Japan. Figure 7.4 reports the Producer Support Estimate from 1986 to 2015 for Japan, the European Union and the United States, in millions of dollars.