
We used Geographic Information System software to geocode the new addresses and obtain coordinates

There are no biomarkers available to assess human exposure to fumigants in epidemiologic studies. Residential proximity to fumigant use is currently the best method to characterize potential exposure to fumigants. California has maintained a Pesticide Use Reporting (PUR) system since 1990, which requires commercial growers to report all agricultural pesticide use. A study using PUR data showed that methyl bromide use within an ~8 km radius around monitoring sites explained 95% of the variance in methyl bromide air concentrations, indicating a direct relationship between nearby agricultural use and potential community exposure. In the present study, we investigate associations of residential proximity to agricultural fumigant use during pregnancy and childhood with respiratory symptoms and pulmonary function in 7-year-old children participating in the Center for the Health Assessment of Mothers and Children of Salinas (CHAMACOS), a longitudinal birth cohort study of primarily low-income Latino farmworker families living in the agricultural Salinas Valley, California. We enrolled 601 pregnant women in the CHAMACOS study between October 1999 and October 2000. Women were eligible for the study if they were ≥18 years of age, <20 weeks gestation, planning to deliver at the county hospital, English or Spanish speaking, and eligible for low-income health insurance. We followed the women through delivery of 537 live-born children. Research protocols were approved by the University of California, Berkeley, Committee for the Protection of Human Subjects. We obtained written informed consent from the mothers and the children's oral assent at age 7. Information on respiratory symptoms and use of asthma medication was available for 347 children at age 7.

Spirometry was performed by 279 of these 7-year-olds. We excluded participants from the prenatal analyses for whom we had residential history information for less than 80% of the pregnancy, and from the postnatal analyses for whom we had residential history information for less than 80% of the child's lifetime from birth to the date of the 7-year assessment. Prenatal estimates of proximity to fumigant applications and relevant covariate data were available for 257 children, and postnatal estimates were available for 276 children, for whom we obtained details of prescribed asthma medications and respiratory symptoms. Prenatal estimates of proximity to fumigant applications and relevant covariate data were available for 229, 208, and 208 children for whom we had FEV1, FVC, and FEF25–75 measurements, respectively. Postnatal estimates of proximity to fumigant applications and relevant covariate data were available for 212, 193, and 193 children with FEV1, FVC, and FEF25–75 measurements, respectively. A total of 294 participants were included in either the prenatal or postnatal analyses. Participants included in this analysis did not differ significantly from the original full cohort on most attributes, including maternal asthma, maternal education, marital status, poverty category, and child's birth weight. However, mothers of children included in the present study were slightly older and more likely to be Latino than those from the initial cohort. Women were interviewed twice during pregnancy, following delivery, and when their children were 0.5, 1, 2, 3.5, 5, and 7 years old. Information from prenatal and delivery medical records was abstracted by a registered nurse. Home visits were conducted by trained personnel during pregnancy and when the children were 0.5, 1, 2, 3.5, and 5 years old. At the 7-year-old visit, mothers were interviewed about their children's respiratory symptoms, using questions adapted from the International Study of Asthma and Allergies in Childhood questionnaire. Additionally, mothers were asked whether the child had been prescribed any medication for asthma, wheezing/whistling, or tightness in the chest. We defined respiratory symptoms as a binary outcome based on a positive response at the 7-year-old visit to any of the following during the previous 12 months: wheezing or whistling in the chest; wheezing, whistling, or shortness of breath so severe that the child could not finish saying a sentence; trouble going to sleep or being awakened from sleep because of wheezing, whistling, shortness of breath, or coughing when the child did not have a cold; or having to stop running or playing active games because of wheezing, whistling, shortness of breath, or coughing when the child did not have a cold. In addition, a child was included as having respiratory symptoms if the mother reported use of asthma medications, even in the absence of the above symptoms.

Latitude and longitude coordinates of participants' homes were collected with a handheld Global Positioning System unit during home visits conducted during pregnancy and when the children were 0.5, 1, 2, 3.5, and 5 years old. At the 7-year visit, mothers were asked if the family had moved since the 5-year visit, and if so, the new address was recorded. Residential mobility was common in the study population. We estimated the use of agricultural fumigants near each child's residence using a GIS, based on the location of each child's residence and the Pesticide Use Report data. Mandatory reporting of all agricultural pesticide applications is required in California, including the active ingredient, quantity applied, acres treated, crop treated, and date and location within 1-square-mile sections defined by the Public Land Survey System (PLSS). Before analysis, the PUR data were edited to correct likely outliers with unusually high application rates using previously described methods. We computed nearby fumigant use (the amount applied within each buffer distance) for combinations of distance from the residence and time period. The range of distances was chosen to capture the spatial scale that most strongly correlated with concentrations of methyl bromide and 1,3-DCP in air. We weighted fumigant use near homes by the proportion of each square-mile PLSS section that fell within each buffer surrounding a residence. To account for potential downwind transport of fumigants from the application site, we obtained data on wind direction from the closest meteorological station. We calculated wind frequency as the proportion of time that the wind blew from each of eight directions during the week after the fumigant application, to capture the peak time of fumigant emissions from treated fields. We determined the direction of each PLSS section centroid relative to residences and weighted fumigant use in a section according to the percentage of time that the wind blew from that direction during the week after application.
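To make the weighting scheme above concrete, the following is a minimal sketch of the buffer- and wind-weighting arithmetic. The table layouts and column names (such as `kg_applied`, `area_fraction`, and `wind_fraction`) are illustrative assumptions, not the actual CHAMACOS data structures.

```python
# Minimal sketch of the buffer- and wind-weighting step; input layouts are assumed.
import pandas as pd

def wind_weighted_use(pur: pd.DataFrame, overlaps: pd.DataFrame, wind: pd.DataFrame) -> pd.Series:
    """Estimate wind-weighted fumigant use near one residence.

    pur      : PUR records with columns ['section_id', 'fumigant', 'kg_applied']
    overlaps : section-buffer intersections with ['section_id', 'area_fraction']
               (fraction of the square-mile PLSS section falling inside the buffer)
    wind     : ['section_id', 'wind_fraction'] -- fraction of the week after each
               application that the wind blew from that section toward the home
    """
    df = (pur.merge(overlaps, on="section_id")
             .merge(wind, on="section_id"))
    # Weight each application by buffer overlap and by downwind frequency,
    # then sum within fumigant to get one exposure estimate per residence.
    df["weighted_kg"] = df["kg_applied"] * df["area_fraction"] * df["wind_fraction"]
    return df.groupby("fumigant")["weighted_kg"].sum()
```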

We summed fumigant use over pregnancy, from birth to the 7-year visit, and for the year prior to the 7-year visit, yielding estimates of the wind-weighted amount of each fumigant applied within each buffer distance and time period around the corresponding residences for each child. We log10-transformed continuous fumigant use variables to reduce heteroscedasticity and the influence of outliers, and to improve the fit of the models. We used logistic regression models to estimate odds ratios of respiratory symptoms and/or asthma medication use with residential proximity to fumigant use. Our primary outcome was respiratory symptoms, defined as positive if during the previous 12 months the mother reported for her child any respiratory symptoms or the use of asthma medications, even in the absence of such symptoms. We also examined asthma medication use alone. The continuous lung function measurements were approximately normally distributed; therefore, we used linear regression models to estimate the associations with residential proximity to fumigant use. For children who had one, two, or three maneuvers, we used the highest spirometric measures. We fit separate regression models for each combination of outcome, fumigant, time period, and buffer distance. We selected covariates a priori based on our previous studies of respiratory symptoms and respiratory function in this cohort. For logistic regression models of respiratory symptoms and asthma medication use, we included maternal smoking during pregnancy and signs of moderate or extensive mold noted at either home visit. We also included season of birth to control for other potential exposures that might play a causal role in respiratory disease, including pollen, dryness, and mold. We defined the seasons of birth (pollen, dry, and mold) based on measured pollen and mold counts during the years the children were born. In addition, we controlled for allergy using a proxy variable: runny nose without a cold in the previous 12 months, reported at age 7. Because allergy could be on the causal pathway, we also re-ran all models without adjusting for allergy. Results were similar, and therefore we present only models controlling for allergy. Additionally, for spirometry analyses only, we adjusted for the technician performing the test and the child's age, sex, and height. We included household food insecurity score during the previous 12 months, breastfeeding duration, and whether furry pets were in the home at the 7-year visit to control for other factors related to lung function. We also adjusted for mean daily concentrations of particulate matter with aerodynamic diameter ≤2.5 µm (PM2.5) during the first 3 months of life and whether the home was located ≤150 m from a highway in the first year of life, determined using GIS, to control for air pollution exposures related to lung function. We calculated average PM2.5 concentration in the first 3 months of life using data from the Monterey Unified Air Pollution Control District air monitoring station.
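As an illustrative sketch only (not the exact CHAMACOS models), the regression step described above could be specified as follows. Column names are hypothetical, the covariate list is a subset of those described in the text, the data are synthetic, and the +1 offset for zero-use homes is an assumption.

```python
# Illustrative model specification; data and column names are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 250
d = pd.DataFrame({
    "resp_symptoms": rng.integers(0, 2, n),          # 1 = symptoms or medication use
    "mebr_kg_3km": rng.gamma(2.0, 50.0, n),          # wind-weighted methyl bromide use
    "fev1": rng.normal(1.5, 0.2, n),                 # litres
    "maternal_smoking": rng.integers(0, 2, n),
    "mold": rng.integers(0, 2, n),
    "allergy": rng.integers(0, 2, n),
    "season_of_birth": rng.choice(["pollen", "dry", "mold"], n),
    "age": rng.normal(7.0, 0.2, n),
    "sex": rng.choice(["F", "M"], n),
    "height": rng.normal(122, 6, n),
})
d["log_fum"] = np.log10(d["mebr_kg_3km"] + 1)        # +1 offset assumed for zero-use homes

# Logistic model: odds of respiratory symptoms per 10-fold increase in nearby use
logit = smf.logit("resp_symptoms ~ log_fum + maternal_smoking + mold"
                  " + C(season_of_birth) + allergy", data=d).fit(disp=0)
print("OR per 10-fold increase:", np.exp(logit.params["log_fum"]))

# Linear model for lung function (FEV1), adding spirometry covariates
ols = smf.ols("fev1 ~ log_fum + maternal_smoking + mold + age + C(sex) + height",
              data=d).fit()
print("Change in FEV1 (L) per 10-fold increase:", ols.params["log_fum"])
```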

In all lung function models of postnatal fumigant use, we included prenatal use of that fumigant as a confounder. To test for non-linearity, we used generalized additive models with three-degree-of-freedom cubic spline functions, including all the covariates in the final lung function models. None of the tests for departure from linearity was significant; therefore, we expressed fumigant use on the continuous log10 scale in multivariable linear regression models. Regression coefficients represent the mean change in lung function for each 10-fold increase in wind-weighted fumigant use. We conducted sensitivity analyses to verify the robustness and consistency of our findings. We included other estimates of pesticide exposure in our models that have been related to respiratory symptoms or lung function in previous analyses of the CHAMACOS cohort. Specifically, we included child urinary concentrations of dialkylphosphate metabolites, a non-specific biomarker of organophosphate pesticide exposure, using the area under the curve calculated from samples collected at 6 months and 1, 2, 3.5, and 5 years of age. We also included agricultural sulfur use within 1 km of residences during the year prior to lung function measurement. We used methods similar to those described above for fumigants to calculate wind-weighted sulfur use, except with a 1-km buffer and the proportion of time that the wind blew from each of eight directions during the previous year. The inclusion of these two pesticide exposures reduced our study population with complete data for respiratory symptoms and lung function. Previous studies have observed an increased risk of respiratory symptoms and asthma with higher levels of p,p′-dichlorodiphenyltrichloroethane (DDT) or p,p′-dichlorodiphenyldichloroethylene (DDE) measured in cord blood. As a sensitivity analysis, we included log10-transformed lipid-adjusted concentrations of DDT and DDE measured in prenatal maternal blood samples. We also used Poisson regression to calculate adjusted risk ratios for respiratory symptoms and asthma medication use for comparison with the ORs estimated using logistic regression, because ORs can overestimate risk in cohort studies. In additional analyses of spirometry outcomes, we excluded children who reported using any prescribed medication for asthma, wheezing, or tightness in the chest during the last 12 months, to investigate whether medication use may have altered spirometry results. We ran models including only those children with at least two acceptable reproducible maneuvers. We ran all models excluding outliers identified by studentized residuals greater than three. We assessed whether asthma medication or child allergies modified the relationship between lung function and fumigant use by creating interaction terms and running stratified models. To assess potential selection bias due to loss to follow-up, we ran regression models that included stabilized inverse probability weights. We determined the weights using multiple logistic regression with inclusion as the outcome and demographic variables as the predictors. Data were analyzed with Stata and R. We set statistical significance at p<0.05 for all analyses, but because we evaluated many combinations of outcomes, fumigants, distances, and time periods, we also assessed adjustment for multiple comparisons using the Benjamini-Hochberg false discovery rate at p<0.05.
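The Benjamini-Hochberg adjustment across the many outcome, fumigant, buffer-distance, and time-period combinations can be applied as in this minimal sketch; the p-values below are placeholders, one per hypothetical fitted model.

```python
# Sketch of Benjamini-Hochberg false discovery rate control with statsmodels.
from statsmodels.stats.multitest import multipletests

pvals = [0.003, 0.021, 0.048, 0.190, 0.310, 0.760]   # placeholder: one p-value per model
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for p, q, r in zip(pvals, p_adj, reject):
    print(f"p = {p:.3f}  FDR-adjusted = {q:.3f}  significant: {r}")
```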
Most mothers were born in Mexico, were below age 30 at the time of delivery, and were married or living as married at the time of study enrollment. Nearly all mothers did not smoke during pregnancy.

We measured changes in total distance moved and photomotor response from behavioral assays

We initiated all acute exposure tests within 24 h of surface water collection. Based on high invertebrate mortality previously observed in water from two of the sites, we made a dilution series of our water samples to capture a wider range of toxic effects, including mortality and swimming behavior. For before-first-flush sampling, we used a dilution series of surface water concentrations—100%, 60%, 35%, 20%, and 12%—in order to evaluate the potential for a wide range of toxicological outcomes. We thoroughly mixed ambient surface water samples by agitation immediately before creating the dilutions in order to homogenize the turbidity levels between dilutions. To create the dilution series, we added control water to ambient surface water to achieve each desired concentration. We repeated this procedure at 48 h when performing an 80% water change on all treatment groups. For after-first-flush sampling, we used a broader dilution series—100%, 30%, 20%, 12%, and 6%—in anticipation of higher chemical concentrations based on previous studies. We tested temperature, total alkalinity, hardness, pH, and dissolved oxygen in situ using a YSI EXO1 multi-parameter water quality sonde at both test initiation and 48 h to ensure that the water remained within the acceptable ranges for D. magna. We chose exposure concentrations of CHL and IMI to mimic environmentally relevant concentrations found in monitored agricultural waterways, as well as experimental EC50/LC50 values. For both CHL and IMI, the low and high concentrations were 1.0 µg/L and 5.0 µg/L, respectively. We purchased chemicals from AccuStandard. We dissolved CHL in pesticide-grade acetone to make chemical stock solutions, subsequently diluting it with EPA synthetic control water to a final concentration of 0.1 mL/L in exposure water. Due to its solubility, no solvent was needed to make an IMI stock solution. To account for this difference, we compared CHL treatment data to an acetone solvent control, and IMI to the EPA synthetic control water. The California Department of Food and Agriculture Center for Analytical Chemistry analyzed these chemical stock solutions via LC-MS/MS.
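The dilution arithmetic described above reduces to topping up a fixed volume of ambient surface water with control water. A small sketch follows; the 50 mL test volume is an assumption, not a value reported in the text.

```python
# Sketch of the dilution-series volumes; total test volume is assumed.
TEST_VOLUME_ML = 50.0
before_flush = (100, 60, 35, 20, 12)   # % ambient surface water
after_flush = (100, 30, 20, 12, 6)

for series_name, series in (("before", before_flush), ("after", after_flush)):
    for pct in series:
        ambient = TEST_VOLUME_ML * pct / 100.0
        control = TEST_VOLUME_ML - ambient
        print(f"{series_name:>6} first flush {pct:>3}%: "
              f"{ambient:4.1f} mL surface water + {control:4.1f} mL control water")
```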

Chemical analysis of field water was conducted at the Center for Analytical Chemistry, California Department of Food and Agriculture, using multi-residue liquid chromatography tandem mass spectrometry and gas chromatography–mass spectrometry methods. Chemicals were analyzed following procedures described in the Monitoring Prioritization Model as described on the CDPR's website. Chlorantraniliprole and IMI stock solutions were also analyzed to confirm exposure concentrations. The method detection limit and reporting limit for each analyte are listed in Tables S3–S6. Laboratory QA/QC followed CDPR guidelines provided in Standard Operating Procedure CDPR SOP QAQC012.00. Extractions included laboratory blanks and matrix spikes. We performed behavioral assays at the 96 h time points for both the chemical exposures and the field sampling exposures. We designed behavioral assays using EthoVision XT™ software and adjusted the video settings to maximize the software's detection of D. magna. We gently transferred organisms from test vessels into randomized wells of a non-treated 24-well (round-well) cell culture plate containing 1 mL of control water at 20 °C. We then left them to habituate for at least one hour before moving them to our behavioral assay setup for an additional five-minute acclimation period. The DanioVision™ Observation Chamber had a temperature-controlled water flow-through system, allowing us to keep organisms at optimal temperature throughout the assay. Our CCD video camera recorded the entire plate in which the organisms were held throughout the assay, so 24 individuals were assessed at the same time. Using the EthoVision XT™ software, we then analyzed each video frame, identifying the location of the organisms at each time point. Calculations were carried out to produce quantified measurements of the organisms' behavior, including both total distance moved and velocity. This assessment of horizontal movement over time, measured as total distance moved, is useful when trying to determine changes in the locomotor ability of organisms after exposure to pesticides. This system also allows us to control the dark:light cycle throughout the assay in order to measure endpoints related to a light stimulus, including photomotor response. We measured changes in photomotor response as the change in mean distance traveled between the last 1 min of a light photoperiod and the first minute of the dark photoperiod, as described in Steele et al.
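A minimal sketch of the photomotor-response calculation from an EthoVision-style export is shown below; the column names (`subject`, `time_bin_min`, `phase`, `distance_mm`) are hypothetical, not the actual export schema.

```python
# Sketch: change in distance moved between the last light minute and the first
# dark minute, per individual, from a long-format tracking table.
import pandas as pd

def photomotor_response(track: pd.DataFrame) -> pd.Series:
    """Distance moved in the first dark minute minus the last light minute,
    computed separately for each subject."""
    last_light = (track[track["phase"] == "light"]
                  .sort_values("time_bin_min")
                  .groupby("subject").last()["distance_mm"])
    first_dark = (track[track["phase"] == "dark"]
                  .sort_values("time_bin_min")
                  .groupby("subject").first()["distance_mm"])
    return first_dark - last_light
```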

We checked data sets for normality using a Shapiro–Wilk test and applied log transformations before statistical analysis. We used a repeated measures ANOVA to analyze effects over the light period, with treatment as the between-subject factor and time as the within-subject factor. We applied Dunnett's multiple comparison test for post hoc evaluation. Data are represented as mean ± standard error of the mean. We exported summary statistics from EthoVision XT using 1 min time bins for each treatment and analyzed the data in GraphPad Prism, version 9.0. We determined the significance of mortality data by one-way analysis of variance followed by Dunnett's test for multiple comparisons using GraphPad Prism, version 8.0. To measure the photomotor response of the organisms, we calculated the difference in distance moved between the last minute of the dark period and the first minute of the subsequent light period for each individual. These data sets were then log transformed and analyzed in GraphPad Prism using a one-way ANOVA with Tukey's post hoc test of multiple comparisons. Chemicals detected in the water samples collected in September are shown in Table S1 and are described in further detail in Stinson et al. 2021, a parallel study. In brief, of 47 pesticides analyzed, 17 were detected in our surface water samples, and each site contained a minimum of 7 target pesticides. Chlorantraniliprole was detected at all sites at concentrations below the acute lethality benchmarks for invertebrate species exposure. The neonicotinoid IMI was detected above the EPA benchmark for chronic invertebrate exposure, and above the acute invertebrate level at Alisal Creek. Neonicotinoids were detected at all sites. Organophosphates were detected at two of the sites: Quail Creek and Alisal Creek. Several pyrethroids were detected at levels at or above an EPA benchmark, including permethrin, lambda-cyhalothrin, and bifenthrin. Several other chemical detections exceeded EPA benchmark values. Notably, methomyl was detected at Quail Creek at nearly three times the limit for chronic fish exposure, and above the EPA benchmark for chronic invertebrate exposure at all sites. Overall, Salinas River contained the smallest total number of chemicals at the lowest concentrations of the three sites we examined. Chemicals detected in water samples collected in November are shown in Table S2. Of 47 pesticides analyzed, 27 were detected in our surface water samples, and each site contained a minimum of 21 target pesticides.
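As a concrete sketch of the statistical workflow described at the start of this passage (log transformation, one-way ANOVA, then Tukey's post hoc test of the photomotor responses), the following uses synthetic per-individual values for three illustrative treatment groups; it is not the GraphPad Prism analysis itself.

```python
# Sketch: log-transform, one-way ANOVA, Tukey HSD on synthetic photomotor data.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
groups = {
    "control": rng.gamma(9.0, 5.0, 24),   # per-individual change in distance moved (mm)
    "low":     rng.gamma(6.0, 5.0, 24),
    "high":    rng.gamma(3.0, 5.0, 24),
}
logged = {k: np.log10(v) for k, v in groups.items()}

f_stat, p = stats.f_oneway(*logged.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p:.4f}")
if p < 0.05:
    values = np.concatenate(list(logged.values()))
    labels = np.repeat(list(logged.keys()), [len(v) for v in logged.values()])
    print(pairwise_tukeyhsd(values, labels))
```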

Chlorantraniliprole was detected at all sites below the lowest benchmark. The neonicotinoid IMI was detected above the EPA benchmark for chronic invertebrate exposure at Salinas River, Alisal Creek, and Quail Creek. Neonicotinoids and organophosphates were detected at all sites. Several pyrethroids were detected at levels at or above an EPA benchmark, including permethrin, cyfluthrin, lambda-cyhalothrin, bifenthrin, fenpropathrin, and esfenvalerate. Overall, Salinas River contained the smallest total number of pesticides at the lowest concentrations of the three sites we examined. Repeated measures ANOVA showed there were no time-by-treatment interactions, but there were significant effects of treatment on locomotor activity. Daphnia magna exposed to 35% and 20% surface water from Alisal Creek exhibited significant hypoactivity compared to the control group under light conditions. Additionally, D. magna exposed to 20% surface water from Alisal Creek exhibited significant hypoactivity compared to the control group under the dark conditions of the behavioral assay. Daphnia magna exposed to the highest concentration of Alisal Creek surface water tested were significantly hypoactive during the last 5 min of the exposure period. Organisms exposed to all concentrations of surface water from Salinas River were hyperactive under light conditions, with the two highest concentrations showing the greatest hyperactivity when compared to controls. There was no difference in total distance moved between organisms exposed to the Salinas River dilution series and the control group individuals in the dark period. The photomotor response for organisms exposed to surface water from both Alisal Creek and Salinas River followed a clear log-linear dose-response curve. Both the control and solvent control groups exhibited a reduction in movement consistent with a freeze response. Overall, Alisal Creek-exposed organisms showed a greater magnitude of change than Salinas River-exposed organisms. There were significant changes in photomotor response across all treatment groups, though responses differed between sampling sites. Daphnia magna exposed to water samples from Quail Creek demonstrated an inverse dose-response pattern, in which exposure to the lowest dilution produced the largest change in photomotor response and exposure to the highest dilution was not significantly different from control groups. The Alisal Creek treatment groups exhibited a non-monotonic dose response, with organisms exposed to the medium dosage having little to no response to the light stimulus. The low dilution had a significantly lessened photomotor response pattern, and the highest dilution was not significantly different from the control group. Daphnia magna exposed to all concentrations of surface water from Salinas River had significantly altered photomotor responses as compared to controls. Organisms exposed to undiluted water samples from Salinas River demonstrated an opposite startle response of equal magnitude to the control's freeze response. Physicochemical parameters for the exposure period are listed in Table S9. We measured no significant mortality in D. magna after exposure to CHL or IMI, at either the high or low concentration, following the 96 h acute exposure period.

Repeated measures ANOVA showed there were no time-by-treatment interactions for any experiment, but there were significant effects of both time and treatment, individually, on locomotor activity in the CHL/IMI data sets. Both the control and solvent control groups exhibited a large photomotor response consistent with freezing. After exposure to the low level of CHL, D. magna showed hypoactivity under dark conditions. For D. magna exposed to both low and high treatments of IMI, we saw significant hypoactivity during the entire behavior assay period, under both light and dark conditions. Exposure to mixtures of CHL and IMI resulted in divergent total distance moved measurements under both light and dark conditions. Individuals from the low CHL/low IMI treatment group were hypoactive in dark conditions. In contrast with the single chemical exposures, individuals from the high CHL/low IMI treatment group were hyperactive under light conditions. We measured significant changes in photomotor responses between the last 1 min of a light photoperiod and the first minute of the dark photoperiod. The change in total distance moved during the dark:light transition is shown in Figure 3D–F. For both CHL treatments, organisms exhibited no response to the light stimulus, representing a nearly 60-fold difference in response from the control group. Organisms exposed to low IMI had an inverse response to the light stimulus when compared to the control group, increasing their total distance moved in response to the stimulus. Organisms exposed to high IMI exhibited a reduction in their average total distance moved, but this response was fivefold smaller than controls. Mixtures of CHL and IMI resulted in the most divergent photomotor responses when compared with controls. Daphnia magna in all binary treatment groups, with the exception of the low CHL/low IMI group, showed an inverse photomotor response from controls. Surface water from all sites contained CHL and IMI as components of complex mixtures, both before and after a first flush event. Several chemicals detected at these sites are known to have sublethal effects on D. magna, including IMI, CHL, bifenthrin, clothianidin, malathion, methomyl, and lambda-cyhalothrin. The changes in pesticide composition and concentration between the sampling dates concurred with results from previous chemical analyses in this region. Pesticides of concern, including CHL and IMI, were detected at higher concentrations after the first flush event. A study examining first flush toxicity in California found that the concentration of pollutants was between 1.2 and 20 times higher at the start of the rainy season than at the end. Interestingly, the sampling site with the highest increase in concentration after first flush, for several pesticides of concern, was the Salinas River site.

Discharges from agricultural non-point sources are inherently difficult to monitor because they are diffuse in nature

Agencies that supported the survey included the Monterey County Farm Bureau, the University of California Extension, the Agriculture and Land-Based Training Association, and the Agricultural Water Quality Agency. Each agency requested results from the survey, as well as a presentation to their organization. Additionally, I plan on distributing a two-page summary of results to all growers who participated in the survey. Another part of this doctoral research that helped forge partnerships is my work on Chapter 5. Data analysis in this chapter included spatial analysis of regional pesticide use over the past 13 years. In designing this chapter, I met with third-party monitoring agencies, GIS technicians, and faculty members to ensure that the highest quality data were used and that the research results would be of use to growers and policymakers. The spatial analysis of several pesticides known to be sources of water column and sediment toxicity in the region shows the impacts, both negative and positive, of the primary regional agricultural water quality mandate, which specifically targets two organophosphate pesticides. Results have already been distributed to Regional Water Quality Control Board staff members, who have passed them along to other networks and agencies. Research results from this dissertation have been and will continue to be shared with academic audiences, agricultural operators, policymakers, water quality agencies, and the general public in peer-reviewed publications, conference proceedings, reports, magazine articles, poster presentations, and oral presentations. Links to all published research are posted on my graduate student website. Throughout the data collection process, I maintained thorough records in both my notebooks and on electronic devices, and all stored electronic data have been backed up and preserved. Records of all interviews, survey questions and responses, datasets, and methodologies were retained to ensure reproducibility. I received exemption from IRB review for both the interviews and the survey conducted in this research.

Agricultural non-point source pollution—runoff and leaching of nutrients, pesticides, and soil sediments into nearby water bodies—is the chief impediment to achieving water quality objectives throughout the U.S. and Europe. Because these sources are diffuse, policymakers cannot employ the familiar tools used to regulate point sources of pollution, which are emitted from an identifiable pipe or outfall. Instead, regional, state, and federal agencies have typically relied on voluntary, incentive-based approaches to manage non-point source pollution. Such approaches have largely failed to control agricultural NPS pollution. In the U.S., agricultural pollution is the leading cause of pollution to rivers and lakes. And in Europe, agriculture contributes 50–80% of the total nitrogen and phosphorus loading to the region's fresh waters and sea waters. The inadequacies of current approaches have triggered academic and regulatory discussions about how to proceed with abating non-point sources. These issues pose particularly challenging questions about appropriate regulatory tools, jurisdictional boundaries, funding needs, monitoring requirements, pollution permit allocations, and stakeholder collaboration. Drawing from the environmental policy and environmental economics literature as well as case studies from the U.S. and Europe, the aim of this chapter is to assess agricultural NPS pollution management approaches and the factors that drive or impede their implementation and enforcement. The E.U.'s recent Water Framework Directive presents an opportunity to build on lessons of the earlier-promulgated 1972 U.S. Clean Water Act, while the U.S. can benefit from the implementation and enforcement of effective European water pollution controls. This research presents several policy tool frameworks to help characterize the widespread non-point source pollution problem in the U.S. and Europe, distinguishing its unique set of hurdles from other environmental policy problems.

Findings suggest that controlling numerous diffuse sources of agricultural pollution requires an integrated approach that utilizes river basin management and a mix of policy instruments. Additionally, this chapter finds that transitioning from voluntary mechanisms to more effective instruments based on measurable water quality performance relies predominantly on three factors: more robust water quality monitoring data and models, local participation, and political will. Since the passage of revolutionary water quality policies in the 1970s, the U.S. and Europe have seen significant water quality improvements in point source discharges—defined as any discernible, confined, and discrete conveyance. Over the past 40 years, industrial pollution and discharges of organic wastes from urban areas and publicly owned treatment facilities have dropped substantially, and dissolved oxygen levels have increased downstream from point source pollution. This success can largely be attributed to the use of a transformative technology-based command-and-control approach, which employs standards to control pollutants at the point of discharge, setting uniform limitations based on the "Best Available Technology" for a given industry. Technology-based effluent limits have been enshrined in both the 1972 U.S. Clean Water Act and various European environmental policies. The technology-based regulatory framework skillfully transformed water quality regulation for point sources into a remarkably more streamlined and simplified system with successful results; it unfortunately neglected the different and more difficult task of controlling non-point source pollution. Instead, individual states in the U.S. and Member States/river basins in Europe have been entrusted with the monumental task of NPS pollution control. The 1972 Clean Water Act and subsequent amendments largely shape present-day water quality policies. During the drafting of the CWA, non-point source pollution was not perceived as serious a problem as point source pollution and was considered only as an afterthought. Prior to 1972, the nation's general approach to water pollution was disjointed and highly variable—analogous to non-point source pollution regulation today. Control mechanisms were decentralized, which resulted in each state developing its own method of protecting water quality.

While several states attempted to implement innovative water quality standards and discharge permits, the vast majority failed to improve water quality conditions. A fundamental weakness of relying on ambient standards was that states needed to prove which polluters impaired water quality and to what extent. This endeavor was extremely difficult given that the regulatory agencies possessed very little data about the location, volume, or composition of industrial discharges. Even where data were available, water agencies were often understaffed, underfunded, and lacking in statutory authority. By the 1960s, many of the country's rivers and streams had reached such abominable conditions that a growing population of frustrated U.S. citizens turned to the federal government for help. After years of delay and struggle, the U.S. was ready to formulate a comprehensive, unified regulatory structure, resulting in the 1972 Clean Water Act. The Act employed a command-and-control approach to implement technology-based standards, enforced by National Pollutant Discharge Elimination System (NPDES) permits. This approach, aimed at controlling pollutants at the point of discharge, set uniform limitations based on the best available technology for a particular industrial category. To implement and monitor performance, every point source was required to obtain a permit to discharge. Under this innovative system, enforcement officials need only compare the permitted numerical limits with the permittee's discharge. Technology-based effluent limits have transformed U.S. water quality regulation into a remarkably more streamlined and simplified system with successful results. In addition to the technology standards, the drafters of the Clean Water Act held on to the historic water quality-based approach, despite its observed inadequacies. In an attempt to bridge the gap between discharges and clean water, dischargers were expected to comply with more stringent, individually crafted effluent limitations based on water quality standards. This additional control tool is implemented only when technology-based controls are not sufficient to meet beneficial uses. The process entails a few ostensibly straightforward steps: first, the state lists each impaired waterbody within its jurisdiction; second, the state designates a "beneficial use" for each waterbody; third, a Total Maximum Daily Load or "TMDL" for each waterbody is calculated based on the designated beneficial use; and finally, a portion of the load is allocated to each point or non-point source. However, the fundamental problem with TMDLs is that they must be translated into specific numerical discharge limitations for each source of pollution. This endeavor is often prohibitively expensive and extremely difficult given that every step of the regulatory process—from identifying and prioritizing impaired waterbodies to allocating emissions loads to measuring the program's success—suffers from insufficient and poor-quality information. Monitoring data are needed to assess, enforce, evaluate, and use as a baseline for modeling efforts. The task of collecting these emissions data—identifying polluters that are difficult to pinpoint, monitoring discharges that are stochastic and virtually impossible to track, and connecting diffuse effluents back to their sources—is so problematic that such sources have been stamped "unobservable."
The paucity of information is often the result of another, more tangible limitation when implementing non-point source pollution abatement mechanisms: budgetary and administrative constraints. Funding the monitoring efforts as well as the staff time to adequately oversee water pollution control efforts is an obligatory, but often missing component in water management programs. Also, a lack of enforcement in areas where management practices are not protecting water quality remains a widespread problem throughout agricultural NPS programs .

While individual river basins and states have varying water quality issues and employ slightly different approaches to abate non-point source pollution, each bears the burden of these similar hindrances. Clearly, the challenges and complexities of non-point source water pollution are not amenable to the technology- and emission-based policy tools historically used. Current discussions on how to proceed with non-point source pollution abatement strangely and sadly mirror those occurring over forty years ago. In describing the difficulty of implementing water quality standards in the 1960s, Andreen presents several questions still debated today: How should regulators allocate the capacity of a stream to a multitude of diffuse dischargers? Should the allocations be recalculated every time there is a new or expanded discharge? What should be the boundaries of a receiving waterbody—an entire river system, or should each tributary be considered separately? Likewise, Houck describes the current state of U.S. non-point source pollution policy as “slid[ing] back into the maw of a program that Congress all but rejected in 1972, among other things, its uncertain science and elaborate indirection.” Similar to the U.S., the first surge of European water legislation began in the 1970s. This “first wave” was characterized by seven different Directives, which were initiated by individual Member States with little coordination with the larger E.U. community. During the late 1990s, mounting criticism of the fragmented state of water policy drove the European Commission to draft a single framework to manage water issues. The resulting legislation, the Water Framework Directive (WFD), has been championed as “the most far-reaching piece of European environmental legislation to date.” Adopted in December 2000, the WFD replaced the seven prior “first wave” directives. Just as the Clean Water Act passes down authority to states in the U.S., the WFD gives each Member State and its river basins the same responsibility. Under this “second wave,” the WFD requires that River Basin Management Plans (RBMPs) be established and updated every six years. The RBMPs specify how environmental and water quality standards will be met, allowing local authorities the flexibility to comply as they best see fit. The WFD mandates that all river basins must achieve “good” overall quality, and that more stringent standards be applied to a specific subset of water bodies used for drinking, bathing, and protected areas. Two additional requirements of the WFD are economic analyses of water use and public participation in the policy implementation process. The E.U. chose management at the river basin level, a hydrological and geographical unit, rather than by political boundaries, to encourage a more integrated approach to solving water quality problems. Another distinguishing aspect of the WFD is its “combined approach,” which guides Member States’ choice of policy tools. Similar to the U.S. CWA approach, technology controls based on Emissions Limit Values, such as those embedded in the previous E.U. Integrated Pollution Prevention and Control (IPPC) Directive, are implemented first. The IPPC works similarly to the U.S. NPDES permit system, requiring all major industrial dischargers to obtain a permit and comply with specific discharge requirements. If these emissions- and technology-based instruments are not sufficient to meet water standards, then Environmental Quality Standards are employed.
The Water Framework Directive provides opportunities and challenges for all actors involved—Member States, the European Commission, and candidate countries.

Correlated measurements of the same task can also be handled using a Bayesian interpretation

Cell growth/viability assays are chemical indicators that correlate with viable cell number, such as metabolic activity or DNA/nuclei counts, and can also be used to quantify the effect of media on cells. In chapter 5 we conducted many experiments with different assays and show the inter-assay correlations in Figure 1.3. Notice that no assay is perfectly correlated with any other assay, because they are collected with different methodologies and fundamentally measure different physical phenomena. For example, AlamarBlue measures the metabolic activity of the population of cells, so optimizing a medium based on this metric might end up simply increasing the metabolic activity of the cells rather than their overall number. As some of these measurements can be destructive or toxic to the cells, continuous measurements to collect data on the change in growth can be tedious. Collecting high-quality growth curves over time may be accomplished using image segmentation and automatic counting techniques. Using fluorescently stained cells and images, segmentation can be done using algorithms like those discussed previously. Cells may even be classified based on their morphology dynamically if enough training data is collected to create a generalizable machine learning model. Successfully quantifying the ability of media to grow cells forms the backbone of the novelty of this dissertation. The primary means by which this dissertation will improve cell culture media is through the application of various experimental optimization methods, often called design-of-experiments (DOE). The purpose of DOE is to determine the best set of conditions to optimize some output by sampling a process for sets of conditions in an optimal manner. If an experiment is time- or resource-inefficient, then optimizing the conditions of a system may prove tedious. For example, doing experiments at just the lower and upper bounds of a 30-dimensional medium like DMEM requires 2^30 ≈ 10^9 experiments. This militates for methods that can optimize experimental conditions and explore the design space in as few experiments as possible. DOEs where samples are located throughout the design space to maximize their spread and diversity according to some distribution are called space-filling designs.

The most popular method is the Latin hypercube, which is particularly useful for initializing training data for models and for sensitivity analysis. Maximin designs, where some minimum distance metric is maximized for a set of experiments, can also provide diversity in samples, with the disadvantage that in high-dimensional systems the designs tend to be pushed to the upper and lower bounds. Thus, we may prefer a Latin hypercube design for culture media optimization because media design spaces may involve more than 30 factors (a minimal sampling sketch appears after this passage). Uniform random samples, Sobol sequences, and maximum entropy filling designs, all with varying degrees of ease of implementation and space-filling properties, may also be used. It cannot be known a priori how many sampling points are needed to successfully model and optimize a design space, because this depends on the number of components in the media system, the degree of non-linearity, and the amount of noise expected in the response. Because of these limitations, DOE methods that sequentially sample the design space have gained traction; these are discussed in the next section. A more data-efficient DOE approach is to split individual designs into sequences and use earlier experiments to inform the new experiments in a campaign. One sequential approach is to use derivative-free optimizers (DFOs), where only function evaluations y are used to sample new designs x. DFOs are popular because they are easy to implement and understand, as they do not require gradients. They are also useful for global optimization problems because they usually have mechanisms to explore the design space and avoid getting stuck in local optima. The genetic algorithm (GA) is a common DFO in which selection and mutation operators are used to find more fit combinations of genes. In Figure 1.7, notice the GA was able to locate the optimal region of both problems regardless of the degree of multi-modality. One study [9] used a GA to optimize media for rifamycin B fermentation in bacteria, where the HPLC titer at the end of 9 days was used to select high-performing media combinations of nine metabolites for the next batch of experiments. They allowed a 1% chance of mutation for each experiment and component to allow for global search.
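The following is a minimal sampling sketch for the space-filling design discussed above, using SciPy's Latin hypercube sampler; the 64-run size and the 0–2x concentration bounds are placeholders, not a real media design.

```python
# Sketch: Latin hypercube initial design for a 30-component medium.
import numpy as np
from scipy.stats import qmc

n_components = 30
sampler = qmc.LatinHypercube(d=n_components, seed=0)
unit_design = sampler.random(n=64)                 # 64 candidate media, rows = recipes

lower = np.zeros(n_components)                     # assumed concentration bounds
upper = np.full(n_components, 2.0)                 # e.g., 0-2x a reference level
design = qmc.scale(unit_design, lower, upper)      # scale from [0, 1] to the bounds
```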

The rifamycin study also found that the response space was multi-modal and had interactions between components, confirming the need for global optimization of fermentation and bio-processing problems. A related review describes 17 cases in which GAs have improved media for different organisms in chemical fermentation, often by >50% yield gains for problems of >10 media components. Particle swarm optimization (PSO) is a population-based method that optimizes systems sequentially by varying x according to a velocity vector v. At the t-th iteration of the algorithm, a particle at position x_t receives the velocity update v_{t+1} = w v_t + c1 r1 (p_best − x_t) + c2 r2 (g_best − x_t), for random numbers r1, r2 and coefficients w, c1, c2, where p_best and g_best are the particle's own best and the swarm's best positions found so far. c1 and c2 parameterize the exploration-exploitation trade-off, similar to the mutation rate in the GA, and w represents the fraction of velocity saved for the next iteration t + 1. To implement this, one merely computes x_{t+1} = x_t + v_{t+1} for a large population of particles over time as the population gradually gravitates toward the optimal designs. The Nelder-Mead simplex method, wherein a group of points is moved closer to better values via expansion and contraction steps, is also a popular DFO method. Nelder-Mead is a local optimizer and may be hybridized with other global DFO methods to improve convergence. While DFOs do not require gradient calculations and can usually optimize complex multi-modal problems, they require hundreds, if not thousands, of experiments, so they are limited to fast-growing culture systems or computer experiments where evaluations are nearly costless. The most powerful experimental optimization technique is arguably the model-based sequential DOE, in which a response-surface model (RSM) of the relationship between the input x and output y data is trained, and new samples are constructed based on the predictions of the trained model. The newly collected data are then fed back into the model and used to generate another sequence of samples. Prior work discusses using combinations of screening DOEs and polynomial RSMs to optimize conditions for the fermentation of metabolites such as chitinase, γ-glutamic acid, polysaccharides, chlortetracycline, and tetracycline, among some 20 other metabolites from various organisms. This demonstrates the usefulness of RSMs for fermentation and culture optimization.
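A compact sketch of the particle swarm update described above follows; the toy objective stands in for a real growth measurement, and the parameter values are illustrative defaults, not tuned settings.

```python
# Sketch: particle swarm optimization with the velocity update
# v_{t+1} = w*v_t + c1*r1*(pbest - x_t) + c2*r2*(gbest - x_t).
import numpy as np

def pso(objective, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, (n_particles, dim))   # positions = candidate designs
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[np.argmax(pbest_f)]
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, 0.0, 1.0)                # keep designs inside the unit box
        f = np.array([objective(p) for p in x])
        better = f > pbest_f                        # maximization
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmax(pbest_f)]
    return gbest, pbest_f.max()

best_x, best_f = pso(lambda p: -np.sum((p - 0.3) ** 2), dim=5)   # toy objective
```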

The primary limitation of polynomial RSMs is their inability to accurately model many factors at a time or systems with significant nonlinearity. Due to their generalizability in modeling different response surfaces, neural networks have been used to optimize bioreactor cultures and multi-objective protein storage conditions. Radial basis functions have been used to optimize yeast and C2C12 mammalian muscle cell culture growth media. Decision trees and neighborhood analysis have been used to optimize media for antibiotic and bacteria fermentation. An example of an RSM can be seen in Figure 1.8, where a radial basis function maps the input/output relationship in a nonlinear system and a GA then finds new optimal experiments. Over time the predicted contour comes to resemble the true function. While these RSMs tend to be more generalizable than polynomial and linear models, the low-data experimental campaigns common in fermentation and cell culture often obscure the differences between modeling techniques. Additionally, many of these RSM approaches do not take into account prior information about the system to speed up optimization. Due to the noisiness of fermentation data, it may be useful to consider noise explicitly in our process models; Gaussian process (GP) regression models do this naturally. Known or unknown constraints can be incorporated into GPs as well. For example, a known constraint might be that growth must exceed some minimum value. An unknown constraint might be the existence of excessive foaming in bioreactors, which may be learned from data but is generally not known ahead of time. Multiple objectives, some of which may compete against one another, can be modeled and optimized using GPs, and correlations between tasks may be considered. By correlating measurements, fewer total experiments are often needed. Multi-objective versions of acquisition functions α, such as max-value entropy search and hypervolume improvement, exist to turn these GP predictions into a score for a variety of objectives. Fermentation and cell culture systems are often subject to growth-versus-cost trade-offs, so multi-objective Bayesian methods are useful here. Because most bio-processing experiments can be done using multiple bioreactors or cell culture plates, designing multiple optimal experiments at a time is often necessary; prior work shows how, using Monte Carlo samples of the GP model, arbitrary numbers of experiments can be designed simultaneously. Knowledge that systems may exhibit separate but interacting local and global responses may militate for additive GPs. Experimenters with access to separate computer simulations or algebraic process models may pose their GPs as composites of deterministic or other modeled functions and speed up optimization. Bayesian models may even fuse historical datasets together to estimate optimal model parameters with constrained uncertainty, and could perhaps be used for optimization as well. More closely related to cell culture media optimization, GPs have been used in a Bayesian optimization scheme to optimize C2C12 growth media for proliferation maximization and cost minimization in chapter 5 of this dissertation. This dissertation is divided into roughly two equal parts. The first part comprises the development of a radial basis function–genetic algorithm sequential DOE scheme. It drew heavily on earlier work in which a sequential DOE technique was developed on the principle of local random search in areas of high-performing media.

This algorithm was also dynamic, converging on high-performing results and selectively searching the design space when good results were not forthcoming. Additionally, previous work in our lab provided the framework for a sequential DOE based on a truncated GA. This modified GA incorporates uncertainty in the optimal samples found by halting algorithm convergence in proportion to the amount of clustering around an optimum that the GA finds. By hybridizing these two methods, a DOE algorithm called NNGA-DYCORS was developed that solved various computational optimization problems better than either method alone. It was used to optimize a 30-dimensional medium for serum-containing C2C12 cell culture, with the metric of growth being AlamarBlue reduction after 48 h of growth in 96-well plates. Cells were seeded at the same time, at the same concentration, and from the same frozen inoculum so that all experiments were roughly comparable. While the algorithm was successful at finding media that maximized this metric, the optimal medium did not grow as many cells over additional passages. To fix this underlying problem, multiple passages needed to be incorporated into the DOE process. This is a very time-consuming process: each passage takes multiple days and requires many more physical manipulations than simple chemical assays, which introduces opportunities for contamination and makes manual experimentation difficult. To solve this, chemical assays were supplemented with small amounts of manual multi-passage cell counts in a multi-information source Bayesian GP model, which was used to successfully optimize a 14-dimensional serum-containing medium for C2C12 cells. Due to the presence of multi-passage data, the final optimal medium grew cells robustly over four passages, provided nearly twice the number of cells at the end of each passage relative to the DMEM + 10% FBS control and the traditional DOE method, and did so at nearly the same cost in terms of media components. In the final chapter, the multi-information source GP model was extended to optimize a 26-dimensional serum-free medium based on the Essential 8 medium, using a multi-objective metric that improves cell growth while minimizing medium cost. Using this Bayesian metric, a broad set of media samples along the trade-off curve of media quality and cost was found, showing that a designer can be given options in media optimization. In particular, one medium resulted in higher growth over five passages while the control and Essential 8 lagged. We identify two important future considerations for this work. First, the data collection process, which is the major innovation of this dissertation, needs to be made more robust by actually capturing the long-term growth dynamics of the cells. Fluorescent and bright-field imaging, used to quantify the temporal and spatial changes of the cells, may improve over whole-well AlamarBlue and LIVE/DEAD stains by counting individual cells and collecting more fine-grained growth curves.
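To make the model-based sequential DOE idea from the preceding passages concrete, the following is a generic single-iteration sketch, not the NNGA-DYCORS or multi-information source GP implementations used in this dissertation: fit a GP to the media tested so far, score random candidate recipes with expected improvement, and select the next batch. The data, dimensions, and kernel choice are placeholders.

```python
# Sketch: one iteration of GP-based sequential design with expected improvement.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

rng = np.random.default_rng(0)
X = rng.random((40, 14))                                 # 40 media tested so far, 14 components
y = X @ rng.random(14) + 0.1 * rng.standard_normal(40)  # placeholder growth readout

kernel = Matern(nu=2.5) + WhiteKernel()                  # WhiteKernel captures assay noise
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

candidates = rng.random((5000, 14))                      # random candidate recipes
mu, sd = gp.predict(candidates, return_std=True)
best = y.max()
z = (mu - best) / np.maximum(sd, 1e-9)
ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)        # expected improvement
next_batch = candidates[np.argsort(ei)[-8:]]             # next 8 experiments to run
```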

The United States and the EU differ in their philosophy and practice for the regulation of PMP products

The statute-conformance review procedures practiced by the regulatory agencies require considerable time because the laws were established to focus on patient safety, product quality, verification of efficacy, and truth in labeling. The median times required by the FDA, EMA, and Health Canada for full review of NDA applications were reported to be 322, 366, and 352 days, respectively. Collectively, typical interactions with regulatory agencies will add more than 1 year to a drug development program. Although these regulatory timelines are the status quo during normal times, they are clearly incongruous with the need for rapid review, approval, and deployment of new products in emergency use scenarios, such as emerging pandemics. Plant-made intermediates, including reagents for diagnostics, antigens for vaccines, and bioactive proteins for prophylactic and therapeutic medical interventions, as well as the final products containing them, are subject to the same regulatory oversight and marketing approval pathways as other pharmaceutical products. However, the manufacturing environment as well as the peculiarities of the plant-made active pharmaceutical ingredient (API) can affect the nature and extent of requirements for compliance with various statutes, which in turn will influence the speed of development and approval. In general, the more contained the manufacturing process and the higher the quality and safety of the API, the easier it has been to move products along the development pipeline. Guidance documents on quality requirements for plant-made biomedical products exist and have provided a framework for development and marketing approval. Upstream processes that use whole plants grown indoors under controlled conditions, including plant cell culture methods, followed by controlled and contained downstream purification, have fared best under regulatory scrutiny. This is especially true for processes that use non-food plants such as Nicotiana species as expression hosts.

The backlash over the Prodigene incident of 2002 in the United States has refocused subsequent development efforts on contained environments . In the United States, field-based production is possible and even practiced, but such processes require additional permits and scrutiny by the United States Department of Agriculture . In May 2020, to encourage innovation and reduce the regulatory burden on the industry, the USDA's Animal and Plant Health Inspection Service revised its regulations covering the interstate movement or release of genetically modified organisms into the environment in an effort to regulate such practices with higher precision [SECURE Rule revision of 7 Code of Federal Regulations 340]. The revision will be implemented in steps and could facilitate the field-based production of PMPs. In contrast, the production of PMPs using GMOs or transient expression in the field comes under heavy regulatory scrutiny in the EU, and several statutes have been developed to minimize environmental, food, and public risk. Many of these regulations focus on the use of food species as hosts. The major perceived risks of open-field cultivation are the contamination of the food/feed chain, and gene transfer between GM and non-GM plants. This is true today even though containment and mitigation technologies have evolved substantially since those statutes were first conceived, with the advent and implementation of transient and selective expression methods; new plant breeding technologies; use of non-food species; and physical, spatial, and temporal confinement . In the United States, regulatory scrutiny is at the product level, with less focus on how the product is manufactured. In the EU, much more focus is placed on assessing how well a manufacturing process conforms to existing statutes. Therefore, in the United States, PMP products and reagents are regulated under pre-existing sections of the United States CFR, principally under various parts of Title 21 , which also apply to conventionally sourced products. These include current good manufacturing practice covered by 21 CFR Parts 210 and 211, good laboratory practice toxicology , and a collection of good clinical practice requirements specified by the ICH and accepted by the FDA .

In the United States, upstream plant cultivation in containment can be practiced using qualified methods to ensure consistency of vector, raw materials, and cultivation procedures and/or, depending on the product, under good agricultural and collection practices . For PMP products, cGMP requirements do not come into play until the biomass is disrupted in a fluid vehicle to create a process stream. All process operations from that point forward, from crude hydrolysate to bulk drug substance and final drug product, are guided by 21 CFR 210/211 . In Europe, bio-pharmaceuticals, regardless of manufacturing platform, are regulated by the EMA, and by the Medicines and Healthcare products Regulatory Agency in the United Kingdom. Pharmaceuticals from GM plants must adhere to the same regulations as all other biotechnology-derived drugs. These guidelines are largely specified by the European Commission in Directive 2001/83/EC and Regulation No 726/2004. However, upstream production in plants must also comply with additional statutes. Cultivation of GM plants in the field constitutes an environmental release and has been regulated by the EC under Directive 2001/18/EC and Regulation (EC) No 1829/2003 if the crop can be used as food/feed . The production of PMPs using whole plants in greenhouses or cell cultures in bioreactors is regulated by the "Contained Use" Directive 2009/41/EC, which is far less stringent than an environmental release and does not necessitate a fully-fledged environmental risk assessment. Essentially, the manufacturing site is licensed for contained use and production proceeds in a similar manner as in a conventional facility using microbial or mammalian cells as the production platform. With respect to GMP compliance, the major differentiator between the regulation of PMP products and the same or similar products manufactured using other platforms is the upstream production process. This is because many of the DSP techniques are product-dependent and, therefore, similar regardless of the platform, including most of the DSP equipment, with which regulatory agencies are already familiar. Of course, the APIs themselves must be fully characterized and shown to meet designated criteria in their specification, but this applies to all products regardless of source. During a health emergency, such as the COVID-19 pandemic, regulatory agencies worldwide have re-assessed guidelines and restructured their requirements to enable the accelerated review of clinical study proposals, to facilitate clinical studies of safety and efficacy, and to expedite the manufacturing and deployment of re-purposed approved drugs as well as novel products .

These revised regulatory procedures could be implemented again in future emergency situations. It is also possible that some of the streamlined procedures that can expedite product development and regulatory review and approval will remain in place even in the absence of a health emergency, permanently eliminating certain redundancies and bureaucratic requirements. Changes in the United States and European regulatory processes are highlighted, with a cautionary note that these modified procedures are subject to constant review and revision to reflect an evolving public health situation. In the spring of 2020, the FDA established a special emergency program for candidate diagnostics, vaccines, and therapies for SARS-CoV-2 and COVID-19. The Coronavirus Treatment Acceleration Program (CTAP) aims to utilize every available method to move new treatments to patients in need as quickly as possible, while simultaneously assessing the safety and efficacy of new modes of intervention. As of September 2020, CTAP was overseeing more than 300 active clinical trials for new treatments and was reviewing nearly 600 preclinical-stage programs for new medical interventions. Responding to pressure for procedural streamlining and rapid response, the FDA refocused staff priorities, modified its guidelines to fit emergency situations, and achieved a remarkable set of benchmarks . In comparison to the review and response timelines described in the previous section, the FDA's emergency response structure within CTAP is exemplary and, as noted, these changes have successfully enabled the rapid evaluation of hundreds of new diagnostics and candidate vaccine and therapeutic products. The European Medicines Agency has established initiatives for the provision of accelerated development support and evaluation procedures for COVID-19 treatments and vaccines. These initiatives generally follow the EMA Emergent Health Threats Plan published at the end of 2018 . Similar to FDA's CTAP, EMA's COVID-19 Pandemic Emergency Task Force aims to coordinate and enable fast regulatory action during the development, authorization, and safety monitoring of products or procedures intended for the treatment and prevention of COVID-19 . Collectively, this task force and its accessory committees are empowered to rapidly address emergency use requests . Although perhaps not as dramatic as the aspirational time reductions established by the FDA's CTAP, the EMA's refocusing of resources and shorter response times to accelerate the development and approval of emergency use products are nevertheless laudable. In the United Kingdom, the MHRA has also revised customary regulatory procedures to conform with COVID-19 emergency requirements, publishing a set of MHRA regulatory flexibilities resulting from coronavirus . During a public health emergency, one can envision the preferential utilization of existing indoor manufacturing capacity, at least in the near term. Processes making use of indoor cultivation and conventional purification can be scrutinized more quickly by regulatory agencies due to their familiarity, resulting in shorter time-to-clinic and time-to-deployment periods. Although many, perhaps most, process operations will be familiar to regulators, there are some peculiarities of plant-based systems that differentiate them from conventional processes and, hence, require the satisfaction of additional criteria.
Meeting these criteria is in no way insurmountable, as evidenced by the rapid planning and implementation of PMP programs for SARS-CoV-2/COVID-19 by PMP companies such as Medicago, iBio, and Kentucky Bio-processing.

During emergency situations when speed is critical, transient expression systems are more likely to be used than stable transgenic hosts, unless GM lines were developed in advance and can be activated on the basis of demand . The vectors used for transient expression in plants are non-pathogenic in mammalian hosts and environmentally containable if applied indoors, and by now they are well known to the regulatory agencies. Accordingly, transient expression systems have been deployed rapidly for the development of COVID-19 interventions. The vaccine space has shown great innovation and the World Health Organization has maintained a database of COVID-19 vaccines in development, including current efforts involving PMPs. For example, Medicago announced the development of its VLP-based vaccine against COVID-19 in March 2020, within 20 days of receiving the virus genome sequence, and initiated a Phase I safety and immunogenicity study in July. If successful, the company expects to commence Phase II/III pivotal trials by late 2020. Medicago is also developing therapeutic antibodies for patients infected with SARS-CoV-2, and this program is currently in preclinical development. Furthermore, iBio has announced the preclinical development of two SARS-CoV-2 vaccine candidates, one VLP and one subunit vaccine. Kentucky Bio-processing has announced the production and preclinical evaluation of a conjugate TMV-based vaccine and has requested regulatory authorization for a first-in-human clinical study. These efforts required only a few months to reach these stages of development and are a testament to the rapid expression, prototyping, and production advantages offered by transient expression. The PMP vaccine candidates described above are all being developed by companies in North America. The rapid translation of PMPs from bench to clinic reflects the conformance of chemistry, manufacturing, and control procedures on one hand, and environmental safety and containment practices on the other, with existing regulatory statutes. This legislative system has distinct advantages over the European model, by offering a more flexible platform for discovery, optimization, and manufacturing. New products are not evaluated for compliance with GM legislation as they are in the EU; in the United States they are judged on their own merits. In contrast, development programs in the EU face additional hurdles even when using well-known techniques and even additional scrutiny if new plant breeding technologies are used, such as the CRISPR/Cas9 system or zinc finger nucleases . Process validation in manufacturing is a necessary but resource-intensive measure required for marketing authorization. Following the publication of the Guidance for Industry "Process Validation: General Principles and Practices," and the EU's revision of Annex 15 to Directive 2003/94/EC for medicinal products for human use and Directive 91/412/EEC for veterinary use, validation became a life-cycle process with three principal stages: process design, process qualification, and continuous process verification . During emergency situations, the regulatory agencies have authorized the concurrent validation of manufacturing processes, including design qualification , installation qualification , operational qualification , and performance qualification .

Size of household landholding is included in the model to explore the effects of scale on fertilizer use

To provide a more accurate assessment of the household and environmental factors associated with household use of inorganic fertilizer, we undertake econometric analysis to explore determinants of fertilizer adoption and use intensity. Limited dependent variable models are often used to evaluate farmers' decision-making process concerning adoption of agricultural technologies. Those models are based on the assumption that farmers are faced with a choice between two alternatives and the choice depends upon identifiable characteristics . In adopting new agricultural technologies, the decision maker is also assumed to maximise expected utility from using a new technology subject to some constraints . In many cases a Probit or Logit model is specified to explain whether or not farmers adopt a given technology without considering the intensity of use of the technology. The Probit or Logit models cannot handle the case of adoption choices that have a continuous value range. This is the typical case for fertilizer adoption decisions, where some farmers apply positive levels of fertilizer while others have zero application . Intensity of use is a very important aspect of technology adoption because it is often not only whether farmers choose to use a technology but also how much of it they apply that matters. The Tobit model of Tobin can be used to handle such a situation. However, the Tobit model attributes the censoring to a standard corner solution, thereby imposing the assumption that non-adoption is attributable to economic factors alone . A generalization of the Tobit model overcomes this restrictive assumption by accounting for the possibility that non-adoption is due to non-economic factors as well. Originally formulated by Cragg , the double-hurdle model assumes that households make two sequential decisions with regard to adopting and intensity of use of a technology. Each hurdle is conditioned by the household's socio-economic characteristics. In the double-hurdle model, a different latent variable is used to model each decision process.

The first hurdle is a sample selection equation estimated with a Probit model. It is important to first define what is meant by fertilizer adoption. For Probit estimation, a household is regarded as an adopter of fertilizer if it was found to be using any inorganic fertilizer. The dependent variable in this model is a binary choice variable which is 1 if a household used inorganic fertilizer and 0 if otherwise. For the second hurdle , fertilizer adoption becomes continuous and the dependent variable is the amount of fertilizer applied per acre of cultivated land by a household. There is no firm economic theory that dictates the choice of which explanatory variables to include in the double-hurdle model to explain technology adoption behaviour of farmers. Nevertheless, adoption of agricultural technologies is influenced by a number of interrelated components within the decision environment in which farmers operate. For instance, Feder et al. identified lack of credit, limited access to information, aversion to risk, inadequate farm size, insufficient human capital, tenure arrangements, absence of adequate farm equipment, chaotic supply of complementary inputs and inappropriate transportation infrastructure as key constraints to rapid adoption of innovations in less developed countries. However, not all factors are equally important in different areas and for farmers with different socio-economic situations. In this section, we discuss the appropriateness of different variables considered in our model. The household characteristics deemed to influence fertilizer adoption in this study include household head's characteristics , household size and dependency ratio. The conventional approach to adoption studies considers age to be negatively related to adoption based on the assumption that with age farmers become more conservative and less amenable to change. On the other hand, it is also argued that with age farmers gain more experience and acquaintance with new technologies and hence are expected to have higher ability to use new technologies more efficiently. Education enhances the allocative ability of decision makers by enabling them to think critically and use information sources efficiently. However, since fertilizer is not a new technology, education is not expected to have strong effects on its adoption.
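For readers who want to see the two-equation structure in practice, the following is a minimal sketch of Cragg's independent double-hurdle likelihood: a Probit-type participation hurdle (adopt fertilizer or not) and a normal intensity equation (kg of fertilizer per acre) that is observed only for adopters, with separate coefficient vectors for each hurdle. It is not the estimation code used in this study; the DataFrame, its column names, and the toy values are hypothetical placeholders.

```python
# Minimal sketch of Cragg's independent double-hurdle model.
# Hurdle 1 (participation): Probit with coefficients alpha.
# Hurdle 2 (intensity):     normal intensity equation with coefficients beta and scale sigma.
import numpy as np
import pandas as pd
from scipy.optimize import minimize
from scipy.stats import norm

def double_hurdle_negll(params, X, y):
    k = X.shape[1]
    alpha = params[:k]            # participation (first-hurdle) coefficients
    beta = params[k:2 * k]        # intensity (second-hurdle) coefficients
    sigma = np.exp(params[-1])    # log-parameterized to keep sigma > 0
    xa, xb = X @ alpha, X @ beta
    pos = y > 0
    ll = np.empty(y.shape)
    # Zero observations: fail the participation hurdle and/or the intensity hurdle
    ll[~pos] = np.log(1.0 - norm.cdf(xa[~pos]) * norm.cdf(xb[~pos] / sigma) + 1e-300)
    # Positive observations: clear the participation hurdle, normal density for the amount
    ll[pos] = np.log(norm.cdf(xa[pos]) + 1e-300) + norm.logpdf(y[pos], xb[pos], sigma)
    return -ll.sum()

# Hypothetical data layout: fertilizer kg/acre plus a few household covariates
df = pd.DataFrame({
    "fert_kg_acre": [0, 25, 0, 60, 40],
    "age":          [55, 40, 62, 35, 45],
    "educ_years":   [2, 8, 0, 10, 6],
    "credit":       [0, 1, 0, 1, 1],
    "dist_market":  [12, 3, 20, 2, 5],
})
X = np.column_stack([np.ones(len(df)), df[["age", "educ_years", "credit", "dist_market"]]])
y = df["fert_kg_acre"].to_numpy(float)

start = np.zeros(2 * X.shape[1] + 1)
start[-1] = np.log(y[y > 0].std() + 1.0)
fit = minimize(double_hurdle_negll, start, args=(X, y), method="BFGS")
print("converged:", fit.success)
```

Because alpha and beta are estimated separately, a covariate such as distance to the fertilizer market can affect the decision to adopt without affecting how much is applied, which is exactly the flexibility the Tobit model rules out.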

The effect of household size on fertilizer adoption can be ambiguous. It can hinder adoption in areas where farmers are very poor and financial resources are used for other family commitments, with little left for the purchase of farm inputs. On the other hand, it can also be an incentive for fertilizer adoption as more agricultural output is required to meet the family food consumption needs . Institutional and infrastructural factors considered important in fertilizer adoption in this study include access to credit, farm size, presence of a cash crop, distance to fertilizer market, distance to extension service provider and distance to motorable road. The size of landholding is expected to be positively correlated with fertilizer adoption, as farmers with bigger landholding size are assumed to have the ability to purchase improved technologies and the capacity to bear risk if the technology fails . However, the well-documented tendency for management intensity to decline with scale in tropical Africa suggests that land size will be negatively correlated with the intensity of fertilizer use. Lack of access to cash or credit significantly limits the adoption of fertilizer, but the choice of an appropriate variable to measure access to credit remains problematic. In a discussion of the limitations, challenges and opportunities for improving technology adoption research using micro-studies, Doss outlines the different measures often used but cautions against the inherent problems of these methods, especially their endogeneity.

Doss suggests that whether a farmer had ever received cash credit is a better measure of credit access than whether there is a source of credit available to the farmer. This study measures credit access by looking at whether a household received or did not receive any credit during a cropping year. The presence of a major cash crop in the household is included in the model to capture the influence of commodity-based input delivery systems in fertilizer adoption. In Kenya, commodities such as tea, coffee and sugar cane have input credit schemes for farmers. Because input markets are widely distributed, farmers face travel costs when they buy inputs. Since the volumes of fertilizer purchases by smallholder farmers are not high and the location of the fertilizer market can be inconvenient, the cost of travelling to purchase fertilizer is probably fixed over the quantities purchased. The distance to fertilizer market is thus expected to affect the decision on whether or not to use fertilizer, but not the intensity of use. Exposure to information reduces subjective uncertainty and, therefore, increases the likelihood of adoption of new technologies . Various approaches have been used to capture information, including: determining whether or not the farmer was visited by an extension agent in a given time; whether or not the farmer attended demonstration tests for new technologies by extension agents; and the number of times the farmer has participated in on-farm tests. Due to the absence of such data for this study, we use distance to extension service provider to capture the influence of information on adoption. To explore the impact of infrastructure, which influences market access for both inputs and outputs, on fertilizer use, we include the distance to motorable road as a variable in the model. To measure the influence of agro-ecological factors on fertilizer adoption, we include dummies for agro-ecological zones. The high potential maize zone is used as the base. The Coastal, Eastern and Western lowlands and Marginal rain shadow receive less rainfall and are prone to prolonged and frequent dry spells compared to the Central and Western highlands, Western transitional and High potential maize zone. Agro-ecology variables pick up variation in rainfall, soil quality, and production potential. These variables may also pick up variation unrelated to agricultural potential, such as infrastructure and availability of markets for inputs and outputs. A summary description of the explanatory variables used in the model is presented in Table 1. Generally, the proportion of sampled households using fertilizer rose from 64% in 1997 to 76% in 2007. However, these proportions vary considerably across agro-ecological zones. The High Potential Maize Zone, Western Highlands and Central Highlands had the highest proportion of households applying fertilizer. On the other hand, the proportion of households using fertilizer has remained relatively lower in the drier regions of Coastal Lowlands , Western Lowlands , Marginal Rain Shadow and Eastern Lowlands . A notable increase in the proportion of households using fertilizer in Western Transitional was observed; from 58% in 1997 to 88% in 2007. Trends in fertilizer use by cultivated land size are presented in Table 3. Landholding size is considered one of the indicators of wealth in Kenya. Two observations are made on the trends.
First, across all the panel years the proportion of households adopting fertilizer increased with increasing cultivated land size. This may indicate that households with larger landholdings have greater ability to acquire and use fertilizer. Second, the proportion of households using fertilizer increased between 1997 and 2007 across all categories of cultivated land sizes. A more detailed analysis of fertilizer use on selected crops across the panel period is presented in Table 4. The number of households producing maize has remained high and about the same over the panel period, pointing to the importance attached to maize by the smallholder farmers.

The proportion of these households using fertilizer on maize consistently increased during the panel period from 57% in 1997 to 71% in 2007. In contrast, the intensity of fertilizer application on maize fluctuated between 55 kg and 60 kg per acre over the panel period. It is important to note that the application rates reported here are far below those recommended per acre for maize by the Kenya Agricultural Research Institute ; 50 kg of DAP and 60 kg of CAN, resulting in a total of 110 kg. The proportion of households applying fertilizer on coffee declined between 1997 and 2007 by 16%. Similarly, the fertilizer application rate on coffee plummeted by 20% over the same period. A closer look reveals that the application rate consistently declined from 364 kg/acre in 2000 to 147 kg/acre in 2007, a decline of roughly 60% in a span of seven years. The gloomy picture in fertilizer use patterns on coffee can be attributed to two main factors: alleged mismanagement of coffee cooperatives, which are the main channels through which members receive their fertilizer; and the poor international coffee prices. Mismanagement in the cooperatives has made some farmers abandon coffee production while other farmers have opted to directly access fertilizers from private traders. This has made them disadvantaged in that they no longer access input credit facilities offered by the cooperatives as was the custom during the days when the cooperative movements were active and efficiently managed. With respect to tea, the fertilizer application rate has declined from 385 kg/acre in 1997 to 371 kg/acre in 2007. This decline is, however, marginal. The proportion of tea-growing households using fertilizer on tea has, on the other hand, increased from 84% in 1997 to 98% in 2007. The fertilizer distribution system in the tea sector is the reason behind the impressive performance in fertilizer adoption on tea. The Kenya Tea Development Agency supplies fertilizer on credit to smallholder tea farmers and then deducts the cost plus interest from their deliveries of tea, which is sold by KTDA on behalf of the farmers. Fertilizer adoption on sugarcane over the panel period has shown an impressive increase. The proportion of households using fertilizer grew from 29% in 1997 to 69% in 2007. However, the application rate has fluctuated over the study period. Increased fertilizer adoption in smallholder sugarcane farming can be attributed to the provision on credit of fertilizer and other inputs to smallholder cane farmers by the cooperatives to which the farmers belong. On the other hand, the dwindling fertilizer application rate can be attributed to inadequate supply of fertilizer by the cooperatives relative to farmers' demand, or it may be as a result of farmers' diversion of fertilizer acquired from the cooperatives from use on sugarcane to use on other crops. Ariga et al. observed that some of the fertilizer acquired for intended use on cash crops such as coffee and sugarcane under cooperative schemes is appropriate for use on maize and most horticultural crops as well, and there is likely to be some diversion of fertilizer targeted for use on sugarcane and coffee to food crops.
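As a quick check on the application-rate figures quoted above, the following uses only the values stated in the text (50 kg DAP plus 60 kg CAN recommended for maize, and the coffee rates of 364 and 147 kg/acre) to confirm the recommended total and the approximate size of the coffee decline.

```python
# Arithmetic check of the figures quoted above (all values taken from the text).
dap, can = 50, 60                        # recommended kg/acre of DAP and CAN for maize
print("recommended total:", dap + can)   # 110 kg/acre

coffee_2000, coffee_2007 = 364, 147      # kg/acre applied on coffee
decline = (coffee_2000 - coffee_2007) / coffee_2000
print(f"decline 2000-2007: {decline:.0%}")  # roughly 60% relative to the 2000 rate
```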

The minimum number of years of coverage required to receive a full pension was also increased

The parallels between the ways that farmers defend their policies and thwart unwanted policy changes at the domestic and EU levels can be made clear by looking at a case in which a national government attempted to impose new costs on its agricultural community without offering compensation. In 2013, Socialist French President François Hollande attempted to implement the so-called "eco tax" first put forward by his conservative predecessor, Nicolas Sarkozy. The eco tax was intended to promote greener commercial transportation by imposing a tax on heavy vehicles. Under the plan, any vehicle over 3.5 tons would be taxed a flat rate of €0.13 per kilometer traveled on 15,000 kilometers of roads included in the scheme. The government expected the tax to generate over €1 billion in revenue annually. The eco tax was slated to come into effect beginning 1 January 2014. The government's proposal was immediately met with criticism from the main French farmers' organization, the FNSEA. The organization described the tax as an "usine à gaz" (literally a "gasworks"), an image of pipes going everywhere in an overly complex system. Through this turn of phrase, the FNSEA meant to convey that the eco tax was a complicated procedure with little actual value or payoff. The FNSEA argued that the tax would place a significant burden on the agricultural community, particularly farmers in Brittany, who had suffered significantly from the financial crisis, and demanded that it be suspended immediately. Other critics raised concerns that Breton farmers might be driven out of business as a result of higher transportation costs. In addition to the concerns about its effects on Breton farmers, the FNSEA warned that French goods would pass through the tax gates more often than trucks carrying foreign goods, putting French farmers at a disadvantage compared to farmers' goods arriving from abroad. Xavier Beulin, the leader of the FNSEA, promised immediate action against the proposal, directing members to target the "portiques", the gantries intended to scan the trucks as they passed underneath.

Beulin called on farmers from other parts of France, even from those areas without the tax scanners, to join the protests. The call for action was successful, as a wave of angry protests erupted in Brittany and across France. In Brittany, the heart of the demonstrations, protesters gathered in main town squares, many wearing red caps, or bonnets rouges, in a reference to a 17th-century protest against a stamp tax proposed by Louis XIV. Some protesters threw stones, iron bars, and potted chrysanthemums at riot police, while others destroyed the electronic scanners intended to collect the fee from passing trucks. The protesters included not just farmers, but also the broader public, who were rallying to oppose taxes, with some also supporting the farmers specifically. In addition to the violent actions in Brittany, farmers elsewhere blocked roads with their tractors, including around Paris. Despite the disruptions these protests caused to the daily life of the average French citizen, the farmers did not face any negative public backlash, a further indication of the deep support and connections between farmers and urban France. Indeed, public polling concerning the image of farmers revealed that the public has a strong, positive image of farmers. According to a 2014 survey, shortly after the mass protests by farmers, just 26% of respondents were willing to describe farmers as selfish and only 16% of respondents agreed that farmers were violent. A resounding 80% agreed with the statement that farmers were trustworthy . After Prime Minister Jean-Marc Ayrault met with local officials from Brittany, the government proposed to "suspend" the tax until January. This concession, though it was expected to cost the government €800 million in revenue, was seen as insufficient, and tens of thousands of protesters continued to gather in the epicenter of resistance to the proposal, the town square of Quimper in Brittany. The tax was finally suspended indefinitely, pending a new proposal from the government.

France's eco tax, then, like efforts to change CAP income support systems or greening policies, demonstrates that it is nearly impossible to impose new costs on farmers without some degree of compensation or widespread exemptions. For example, new CAP greening standards that are costly for farmers to adhere to are typically coupled with subsidies for compliance. When some form of compensation is not offered, the reform is almost certain to be defeated. Thus, the eco tax had little chance of success, given that farmers were not offered any compensation in exchange for this new cost being imposed on them. In June 2014, the Hollande government unveiled the final version of the eco tax plan, now called "truck tolls". The new plan applied only to trucks weighing 3.5 tons or more and included just 4,000 kilometers of road, as against 15,000 kilometers in the original plan. In addition, all proposed roads in Brittany, the epicenter of the protests, were exempted from the tolls. Trucks carrying agricultural goods, milk collection vehicles, and circus-related traffic were also exempted. As a result of the transportation exemptions and significantly smaller area of coverage, the toll is expected to generate only a third of the revenue of the original plan. The French eco tax example shares much in common with CAP reform, particularly in the area of environmental policy. Proposed environmental policies in the CAP often mean that new costs will be imposed on farmers who are forced to conform to stricter standards and modify their farming methods in some way. These attempted reforms are virtually always modified by farmers in one of two ways: by extracting a new or additional form of compensation for meeting these rules or by compelling reformers to adopt exemptions, often so extensive that barely any farmers are subjected to new rules. In the case of the French eco tax, farmers followed the latter course: when faced with a tax that would have imposed new financial burdens on producers, they successfully compelled the government to completely exempt agriculture. The victory is all the more significant since these exemptions cost the government badly needed tax revenue at a time of austerity. The successful campaign against the eco tax highlights some of the new sources of power that farmers have developed. Organizations were one important source of power.

The FNSEA demonstrated the ability to coordinate its membership and to rely on regional branches to place pressure on both national and local politicians. In the fight against this tax, the FNSEA deployed multiple tactics to exert influence on the policy-making process, mobilizing members for public demonstrations while simultaneously lobbying local and national political officials. The protesting French farmers also benefited from a sympathetic public that did not begrudge the massive disruptions and disturbances caused by demonstrations and blockades. While French farmers were able to use their powerful organizations to avoid a new, uncompensated tax, the same cannot be said of other groups. At virtually the same time farmers were thwarting a new tax, a series of austerity-driven pension reforms went ahead. Unlike the case of the eco tax, protests did nothing to stop the reforms, and the policy changes were adopted despite widespread civil unrest. In 2010, then-president Nicolas Sarkozy proposed a series of reforms to the French pension system. The reforms included raising the retirement age from 60 to 62 along with increasing the age at which one qualifies for a full pension from 65 to 67. In addition, the number of years of required social security contributions increased from 40.5 to 41.5 years. In response to the proposed reforms, nearly 3 million people took to the streets, with plane and train travel severely disrupted and other sectors of the economy virtually shut down as the major unions called for strikes. Fuel shortages were a perpetual problem during the protests, as dock workers went on strike, leaving petrol stranded at ports. In addition, schools, ports, and airports were blockaded by demonstrators. In this case, however, coordinated protest was not able to compel the government to roll back reforms. Just a few years later, in 2014, Sarkozy's successor, François Hollande, enacted further reform to the French pension system. Contribution rates for both employers and employees were raised, a previously tax-exempt supplement for retirees who raised three or more children was made subject to taxation, and the number of years of required social security contributions was increased from 41.5 to 43 years. While France is generally viewed as farmer-friendly, the French case is not an outlier. Looking at other Western European countries, a similar pattern emerges. Pension cuts were imposed, while national discretionary agricultural spending remained virtually untouched. Indeed, across Europe, pensions were significantly reformed in the wake of the 2008 financial crisis, placing new financial burdens on the average worker. This contrast between pension policies and agricultural expenditure is all the more glaring when the broader context is taken into account: less than two percent of the population benefits from agricultural support policies while all citizens are current or future pensioners.

Current spending levels are not a good indicator of reform, since much pension spending is locked in by decisions made decades ago. In the case of pensions, cuts are best identified by increases in the minimum retirement age or downward cost-of-living adjustments. Such reforms occurred in each of the four country cases, as summarized in Table 7.1. Germany reformed its pensions in 2007, just before the onset of the financial crisis, raising the retirement age from 65 to 67. In the UK, reforms raised the retirement age from 66 to 67. New reforms also increased the minimum number of years of contributions to qualify for a full pension from 30 to 35 years. A 2013 Dutch pension reform raised the minimum retirement age to 65 for workers currently under the age of 55. While pensions were being cut across Europe, farmers were spared. At the EU level, in the first CAP reform after the financial crisis, spending on the CAP was not cut, and instead money was taken out of other areas in order to channel more support to farmers. Indeed, this reallocation of funds back into farming happened despite a stated objective of directing more money away from agriculture and into other objectives, like improving the provision of high-speed internet. Spending on farmers was also preserved at the domestic level. European national governments spend some money on agriculture outside the CAP. National financing of agriculture comes via three main avenues: top-ups of Pillar 1 direct income payments; cofinancing of Pillar 2 programs ; and additional state aid payments to farmers by their national governments. Figure 7.1 tracks national agricultural expenditure as reported by the European Union in its annual statistical yearbook. The second mini case in this conclusion extends my claims about the politics of agricultural policy reform and the influence of the farming community beyond Europe to Japan. Like Europe, Japan has long committed to providing generous economic support to farmers in the form of subsidies, direct income payments, and protectionist trade policy. As in Europe, this support has persisted despite near simultaneous declines in the sector's size and contribution to GDP. Figure 7.2 illustrates the decline in agriculture's share of GDP in Japan, France, and the Netherlands. The latter two countries are the European Union's top agricultural exporters. Like its European counterparts, agriculture's contribution to GDP in Japan has dropped rapidly over the past 50-plus years. The economic decline of Japan's agricultural sector has been quite similar to, if not more rapid than, the post-war economic decline of agriculture for Europe's leading exporters. The decline in employment in agriculture over roughly the same period was also dramatic, and even more so in Japan. In a half century the sector went from employing nearly 40% of the population to under 5%, as Figure 7.3 illustrates. As in Europe, Japan's agricultural sector has shrunk in size and economic importance since the end of World War II. In both of Europe's top exporting countries and Japan, agriculture's share of GDP is under 2% and the percent of the population employed in the sector has long been below 5%. Yet despite this decline, agricultural support has remained robust in both Europe and Japan. Figure 7.4 reports the Producer Support Estimate from 1986 to 2015 for Japan, the European Union and the United States, in millions of dollars.

Eastern Europe already lagged behind the West in terms of existing environmental practices

Farms that were labor-intensive, thus providing jobs in the local community, could have up to €8,000 exempted from dynamic modulation, at the member state's discretion. Though this program seemed to be cutting overall levels of spending, the money garnished from farmer income payments was not leaving the CAP but rather being redirected into other CAP programs. Member states would keep a portion of the money for rural development and environmental programs, while the rest would be re-distributed among member states "on the basis of agricultural area, agricultural employment, and prosperity criteria to target specific rural needs" . Through this system of redistribution, and by garnishing the payments of the farmers who earned the most, dynamic modulation would contribute to achieving the twin goals of reducing the disparity in payments between large and smaller farmers and improving the distribution across member states. Dynamic modulation is an example of using the welfare state tactic of turning vice into virtue in the context of agricultural policy reform. Specifically, the dynamic modulation reform revised an existing program , reorienting this CAP program to operate more equitably. As with vice into virtue in the world of the social welfare state, an existing program that was operating inefficiently and inequitably was corrected through reform, rather than eliminating the policy entirely and attempting to replace it. Payments for all farmers above a certain threshold would be reduced, and collected funds would be redeployed to other areas of need. This objective of reducing the disparity in payment levels within and across countries was taken increasingly seriously, as inequality in the operation of CAP support payments was beginning to garner attention beyond EU technocrats.

The Commission noted that dynamic modulation would "allow some redistribution from intensive cereal and livestock producing countries to poorer and more extensive/mountainous countries, bringing positive environmental and cohesion effects" . The redirection of funds from income payments to rural development programs was also a tangible way for EU officials to signal a stronger commitment to the CAP's social and environmental objectives. These social and environmental objectives had been identified by the public via Eurobarometer surveys as both the most important objectives of the CAP and areas where the CAP was failing to meet existing expectations. Also included in the dynamic modulation package was a proposal to cap the amount of direct aid any individual farmer could receive at €300,000 a year. This proposal was motivated by the desire to prevent large farms from receiving what many considered to be exorbitant sums of money. Specifically, it would address public concerns over the inequality in the operation of CAP payments. The payment cap was also intended to help correct the problem of an inequitable distribution of support within and across countries. This limit would reduce the overall gap between the largest and smallest recipients. In addition, it would begin to correct for payment imbalances among member states, as most of the farmers who would be subjected to the income cap were concentrated in a few member states. The inclusion of a cap on income payments is another example of CAP reformers employing the vice into virtue technique, which has been similarly used by welfare state reformers to correct welfare programs that are operating inefficiently or producing unequal outcomes. The third and final reform was mandatory cross compliance. In Agenda 2000, cross compliance was adopted only in voluntary form. In the MTR, Fischler sought to make this program compulsory. Under cross-compliance, direct payments could be made conditional on achieving certain environmental goals. The income payment could, for example, be reduced if a farmer failed to comply with a given environmental rule. Farmers who met the standards would receive the full amount of direct payments for which they were eligible, but would not receive a bonus for full compliance.

Farmers who received direct payments would be required to maintain all of their land in good agricultural and environmental condition; if not, payment reductions were to be applied as a sanction . The inclusion of cross-compliance in Agenda 2000 positioned Fischler to make further reforms in the MTR, because he had already softened the ground in the previous agreement. As Fischler noted, "all the components of cross compliance [in the MTR proposal] were things that were already in place since Agenda 2000, but the member states had been responsible for implementing them. However, most members didn't do it, or did a lousy job of implementing them" . Leading Commission officials argued that the member states had already approved and accepted the concept of cross compliance, so there was no reason that it should be rejected during the MTR. In reality, the vast majority of member states had chosen not to implement any of the standards or rules because cross compliance was an optional program. Still, Fischler was able to put them on the defensive for "failing" to implement Agenda 2000. As Fischler explained, "farmer ministers were put in a hard spot because now they had to account for failure to implement all of these measures in the past. They couldn't oppose the concept of cross-compliance because they had already agreed to it, so they made the usual complaint that it would hurt farmers, but that's always their line" . Fischler saw cross compliance as a legitimacy-boosting technique because it tied eligibility for support to compliance with environmental conditions and standards . Cross-compliance would help address public criticism of the CAP by strengthening the greening component and further developing the image of the farmer as a provider of not just food, but broader public goods and services. Mandatory cross-compliance could also attenuate the image of the farmer as a polluter. Fischler's proposal for the MTR was sent to the College of Commissioners for formal discussion, revision, and approval.

The proposal was well received by the Commission overall. Fischler was respected within the Commission as an agricultural expert and a reformer . The way for his proposal was further smoothed thanks to an October 2002 agreement engineered by Chirac and Schröder at the Brussels European Council meeting, which guaranteed that the agricultural budget for direct-market supports would not be cut before 2013, when a new budget would be drafted . Even though Commission President Romano Prodi had previously expressed a desire to cut the CAP by up to 30%, the Chirac-Schröder deal prevented him from doing so, despite the fact that he was supported by other Commissioners who hoped to use these CAP cuts to direct more support into their own portfolios. The deal to not cut the CAP budget was extracted by France in exchange for supporting enlargement, and allowed the budget to increase by 1% each year until 2013 . This agreement was a major victory for France and the CAP, as the EU's multi-annual financial framework at the time called for an automatic annual cut in the CAP budget . The proposal designed by Fischler and his team was also well received by the Commission because it addressed several of the main issues that provided the impetus for reform: food safety and quality, environmental impact, imbalances in the distribution of CAP support, and the CAP's impeding of trade negotiations. Food safety and quality issues were addressed by cross compliance. Decoupling of payments and cross compliance handled the issue of environmental impact, while dynamic modulation confronted the problem of inequities in CAP support distribution. Finally, decoupling brought the CAP support payments into the WTO green box, and thus into compliance with existing WTO rules on agricultural subsidies. The core components of the proposed CAP reform were also structured so that they would directly address the challenge posed by enlargement. Doing away with payments tied to production and instead basing income support on historical yields tied to holding size would save the CAP money in both the short and long term. Farms in the East were, on balance, much smaller and less productive than those in the West. As a result, their calculated income support payment would be comparatively low. In addition, there was no risk that, as these farmers gained access to improved resources and technology enabling them to improve their output, the CAP would have to fund larger payments. Instead, income payments would be tied to a low historic yield. Cross-compliance would serve as a further check on the amount of funds disbursed to the new member states.

Farmers in new member states would have difficulty meeting and adhering to these new standards, resulting in reductions in the funds paid to them. Countering some of these effects, modulation would allow some funds to be redirected from richer to poorer countries. The MTR was the last opportunity to reform the CAP before the candidate countries would be full members of the European Union, and thus party to CAP negotiations. Unlike previous reforms, it would be much riskier to put off or delay making reforms to the operation of the CAP. Even adopting reforms that were optional but not binding, as had been done in the past, was risky. If these changes, ones that were necessary to save the CAP but were deeply unpopular in the East, were not taken immediately, they would not be in the future because the new member states would band together to block them. The only component of Fischler's proposal that was significantly revised by the Commission was dynamic modulation. The Commission altered the rules governing eligibility for modulation and income payment limits. Though the revised proposal maintained an exemption for farms earning less than €5,000, it added a provision stating that only those farms earning over €50,000 would be subjected to the full 19% reduction in direct income payments prescribed by modulation, in order to ensure that smallholders would not be targeted. In addition, the final version of the Commission proposal removed the €300,000 limit on total income payments. The Commission also revised how the money collected under dynamic modulation would be redistributed. The new version significantly reduced the amount of money that would be directed to general rural development objectives and increased the amount that was to be set aside to fund future CAP reforms. This change was made in order to accommodate the rules that emerged from the Chirac-Schröder deal at the Brussels summit in 2002. Specifically, it ensured that there would be some funds in reserve to uphold the agreement from the deal that allowed for a 1% annual increase in the CAP budget. These amendments to the Commission's proposal were important victories for both larger and small farmers. Larger farmers avoided a cap on how much support they could receive and small farmers were granted important exemptions and protections from reductions in their income payments under dynamic modulation. After review and revision by the Commission, the official package of proposals was sent to the European Council on 23 January 2003. Among the member states, France and the UK were the key players. France led the effort to block the reform while the UK was the primary member state that Fischler worked with to achieve the necessary votes to pass his reforms via Qualified Majority Voting . France was the leader of the anti-reform camp and used its relationship with Germany to cement a blocking minority, while the UK proved central to breaking the French-led blocking minority. Three groups emerged after the reforms were announced. The first group, the pro-reform coalition, consisted of Denmark, the Netherlands, Sweden, and the UK. This group of countries favored reforms that would make the CAP more market-oriented. Sweden was a vocal new partner of the pro-reform club. Upon joining the EU, Sweden had been required to reintroduce subsidies, which the government had removed in the early 1990s after a period of substantial agricultural policy reform .
Sweden was thus a strong supporter of reforms that would move the CAP in a market-oriented direction. Other members of this group had long been proponents of market-oriented reforms. Agriculture in each of these countries was marked by the predominance of large holdings and/or highly efficient farming. Agricultural and political elites expressed the belief that their farmers, in general, would benefit from freer competition and the removal of support programs that served to prop up inefficient competitors in other member states. Within this group, the UK also objected to modulation. As one of the member states with the largest farms, the British felt that this policy, if adopted, would disproportionately negatively affect its farmers.

Fischler and the Commission wanted to reinforce the role the farmers played in maintaining the countryside

Essentially, this category served to exempt the US deficiency payments and CAP direct income area- and headage-based payments from these reduction commitments . EU officials considered it highly likely that these payments, since they were not fully decoupled from production, and thus remained trade distorting, and the blue box more broadly, would come under fire in future negotiation rounds, with some speculating that the blue box might be eliminated entirely. Adding to the concern over the survival of the “blue box” was the United States’ adoption of the Federal Agricultural Improvement and Reform Act, also known as the Freedom to Farm Act. The FAIR Act introduced a system of direct payments, completely decoupled from production, that replaced the existing deficiency scheme. In addition, the FAIR Act stipulated that these payments would be reduced over a period of seven years . With the passage of the FAIR Act then, the blue box existed only to provide special status and exemption for the CAP payment system. Despite concerns about what future rounds of WTO negotiations might mean for some core components of the CAP, it was not enough to push the member states into undertaking meaningful reform. The MacSharry Reform negotiations were concurrent with actual GATT talks, while Agenda 2000 began, was negotiated, and concluded before the new WTO round was even launched. For Agenda 2000, trade-related concerns had ultimately little impact because they were all hypothetical: the special status of CAP payments could disappear; partially decoupled payments might not fit within the new WTO scheme; the US’s FAIR Act might be a sticking point between the US and the EU.

In addition, the trade conflicts between the US and the EU at this time were not really about the operation of the CAP as they had been in the GATT UR. In sum, the major events and issues that disrupt politics and allow for extensive reform to be achieved did not operate during Agenda 2000. Enlargement was thought to be a non-issue, and any potential trade issues were, at best, hypothetical. As a result, Fischler had to negotiate his reform under politics as usual. The importance of disruptive politics to achieving meaningful reform is clearly illustrated by the case of Agenda 2000, since no major adjustments to CAP policy were achieved, with major initiatives either being made optional or rejected outright. Fischler and the Commission had four main objectives for the Agenda 2000 reform: 1) to extend the systems of price cuts and direct income compensation started under MacSharry in 1992; 2) to reduce the CAP budget and improve financial discipline, particularly in light of the transition to the Euro and the financial strictures involved with that transition; 3) to rebalance the distribution of CAP benefits across member states and sectors of production; and 4) to overhaul and simplify the CAP's rural development and environmental schemes by putting them into a single framework, the so-called "second pillar" . The first objective was particularly important, with Guy Legras, still head of DGVI, stating, "you might call [the new reform proposal] MacSharry Mark II" . To extend MacSharry, the Commission sought to continue to reduce price supports, in order to bring prices closer to the world level, and to increase direct income payments. Objectives 2 and 3 followed the same model as they had in previous negotiations: cut CAP costs to the extent possible and attempt to adopt a system that would limit the payments received by the largest farmers, facilitating better distribution of payments across countries while also improving support for small farmers. This latter point, directing more support to small farmers, was seen as important to preserving the social acceptability of the CAP to the broader public.

Finally, the fourth objective, like the first, was part of a continuation of a bigger project, begun under MacSharry. They sought to direct more funds to agri-environmental measures so as to better support sustainable rural development and better meet the growing environmental demands of the broader public . A major discussion of a potential CAP reform occurred in the late summer and early fall of 1997, after the Commission had formally launched Agenda 2000 in a document called "Agenda 2000: For a Stronger and Wider Europe". In reference to the CAP, the general document on Agenda 2000 called for compensated price cuts to arable crops, beef, and dairy, a commitment to rural development and agri-environmental measures, and ceilings on income payments in an effort to mitigate perceived inequalities in the system . Reform along the lines proposed by Agenda 2000 would, the Commission argued, increase the EU's agricultural competitiveness, improve food safety and quality, advance the fundamental CAP goal of stable farm incomes , promote sustainable agriculture, and simplify EU legislation . Under this initial Agenda 2000 announcement that set the scope for the negotiations, agriculture would remain the single largest program in the EU, consuming roughly 45% of the budget, with structural funds remaining the second largest, accounting for just over 35% of EU spending . Agricultural Commissioner Franz Fischler publicly defended the need for reform, arguing in an editorial in the Frankfurter Allgemeine Zeitung that "acting as though everything would stay the same as in the past without reform is verging on a lie" . He further stated that the reform's main objective was protecting farmer incomes, and predicted that Agenda 2000 would improve farmer welfare. Beyond making this public defense of the CAP in the German press, Fischler also undertook a tour of the member state capitals, much like MacSharry did before the 1992 reforms. In so doing, Fischler hoped to get some sense of the political acceptability of his reform goals. In addition, he began to negotiate some elements of the reform in the hope of making the general Commission proposal more acceptable and limiting negative reaction.

At the end of the tour, despite some divergent opinion, Fischler found that the balance of support was in favor of “maintain[ing] the status quo, with only slight modifications to the CAP”. The Commission formally made its proposals for Agenda 2000 in March of 1998. The package consisted of four main components: 1) intervention price cuts for arable goods, beef, and dairy, with partial compensation in the form of direct payments; 2) a system of modulation and payment ceilings; 3) cross-compliance; and 4) a package of rural development policies. Overall, the reforms sought to continue MacSharry’s legacy by cutting prices and maintaining quotas in exchange for increased direct compensation. For beef and dairy, these cuts would come in one step, but would be offset by increasing the amount that farmers received via their direct payments. In an effort to continue MacSharry’s objective of keeping milk production under control, the Commission proposed extending quotas for a further six years, while also allowing a 2% increase in a farmer’s production limit. Other dairy products like butter and milk powder would follow a program similar to that for beef and cereals, with the price cut offset by an increase in compensation. The Commission once again attempted to address the issue of inequality in payments and to respond to public criticism of CAP payment operations and spending levels by introducing payment ceilings and other mechanisms to reduce the amount of funds directed towards Europe’s largest farmers. The Commission sought to impose a 20% cut on all payments over 100,000 ECUs and a 25% cut on all payments over 200,000 ECUs. The other payment-related initiative, modulation, was intended not to reduce the CAP budget but rather to redistribute aid among farmers and member states and to reinforce the second pillar, as a portion of the money collected would be earmarked specifically for rural development and environmental programs and policies. Specifically, member states could make some adjustments to the amount of financial support a farmer received based on the number of persons employed on the farm. Those savings would then be redistributed to farmers and member states that were disadvantaged and to support second pillar goals and programs. The Commission also attempted to improve environmental accountability and to advance the perception of the CAP as promoting the multifunctional role of farmers, as both producers of food and stewards of the environment. The main tools through which the Commission sought to achieve these goals were cross-compliance and a series of reforms designed to direct funding and support to issues related to rural communities. Cross-compliance would tie the receipt of direct income payments to adherence to a set of basic environmental standards. This program was to be mandatory, applying to all farmers. Finally, a series of smaller reforms were designed to support young farmers, fund early retirement, support training programs and opportunities, provide additional aid to those farming in “less favored areas”, and compensate farmers engaging in approved agri-environmental activities.

Three broad camps emerged after the publication of the Commission’s formal proposal. The first group, the pro-reformers, was led by the UK and Sweden but also included the Netherlands and Denmark. These countries welcomed the reform, but felt that the Commission had not gone far enough.

They preferred a bigger reduction in intervention prices and the eventual elimination of subsidies and income support payments. These countries favored the development of a more market-oriented European agricultural sector. In addition, the UK expressed opposition to modulation. The second group, led by Germany and also including Austria, Belgium, Ireland, Luxembourg, and Portugal, had significant problems with the reform as it was proposed. Germany was among the most staunchly opposed, preferring the status quo. The German agricultural minister Jochen Borchert stated that he could see “very few positive things” in the proposal. The third and final group included the remaining member states who, rather than take a strong position for or against the reform proposal, “emphasized the specific interests of their national agricultural sectors, and declared their firm intention to defend these interests in the upcoming reform negotiations”. For example, Spain was concerned that increasing spending on the CAP would make it more likely that structural funds would be targeted as a way to find more resources. Italy wanted an end to milk quotas, Greece and Portugal desired reform for Mediterranean products, and Finland and the Netherlands preferred changes to formulas for compensation. The French supported a reform that would continue and expand the reform path started by MacSharry by lowering prices in exchange for a transition to direct payments, but were dissatisfied with Commission proposals for compensation. Specifically, France sought to protect and increase compensation for small producers, particularly livestock farmers. France’s position in these negotiations was particularly interesting given that it was a period of cohabitation, with Jacques Chirac and the right-leaning Rassemblement pour la République controlling the presidency while Lionel Jospin, a member of the Parti Socialiste, was prime minister. This configuration arguably strengthened the farmers’ hand, as neither side wanted to tip the balance in favor of their political opponent. The left wanted to find a way to distribute CAP money more equitably but was confronted with a president who, according to a former minister, was “united with the FNSEA. Chirac was their spokesman. He was most concerned with the cereal farmers from the grand Parisian basin and was forgetting everyone else”. The cohabitation government agreed on France’s other major priorities aside from how best to distribute income support. The former government minister identified three other priorities. The first was to defend the economic interests of France in agriculture on the grounds that “the CAP was France’s program. Germany and France are the core of the EU. Industry was for Germany and agriculture was for France. France was the number one beneficiary and the CAP was the largest program. We needed to defend this status”. The second priority was to channel more money to the second pillar and its rural development and environmental objectives. The third and final priority was to extend the MacSharry reforms by getting rid of price supports and transitioning to direct payments. France was arguably in a stronger negotiating position than Germany because the German finance and agriculture ministers were constantly at war with each other.

France and Germany were both reluctant to adopt major agricultural reform

Mansholt asserted that CAP price policy encouraged and allowed marginal farms to stay in business. His plan’s core claim was that the only practical way to increase farmer incomes was for farms to become larger and more modern businesses. To make farms larger, there would necessarily have to be fewer of them. Achieving the objective of creating larger and more efficient farms, Mansholt argued, would meet the CAP’s goal of increasing agrarian incomes. Moreover, higher incomes would reduce dependence on high prices, allowing those prices to be lowered, which would in turn remove incentives to overproduce. The result, in the long run, would be lower EAGGF support costs and an efficient farming sector. Essentially, Mansholt’s plan was oriented around improving farmer incomes by removing farmers from the land in order to increase the average holding size. Reduced production and CAP spending were uncertain outcomes that would only emerge in the long term. Mansholt asserted that 5 million people would need to be removed from agriculture between 1970 and 1980. His proposal included several options to encourage exit. Exiting farmers could be offered either retirement pensions or compensation and training for a new profession. To prevent rural depopulation, however, Mansholt suggested that regional plans be implemented to bring jobs to the countryside. For those remaining in farming, financial assistance would be available for the modernization and expansion of their farms. Finally, because the remaining farms would ostensibly be larger and more productive, he recommended that 5 million hectares be taken out of agriculture and devoted to re-afforestation. The removal of agricultural land would prevent a worsening of the surplus situation because it would limit the ability of farmers both to keep excess labor in farming and to expand the areas devoted to certain crops known for higher yields, such as grains and sugars. If the land were permanently removed from production, it could not be bought up and used by the highly productive farmers already benefiting from the current system. In addition to better controlling production, this initiative to both remove land from production and engage in a re-afforestation effort would help to reduce the Community’s dependence on timber imports.

Despite its efforts to address the real crises facing the CAP, the Mansholt Plan was poorly received by farmer groups and politicians. Farmer groups criticized it extensively, dubbing Mansholt “The Peasant Killer” because they perceived the plan as an existential threat to their constituencies. For their part, politicians, wary of farmer voting power and the sway of agricultural lobbies, declined to engage in formal discussions of the plan. One issue that made discussion difficult from the start was that Sicco Mansholt’s understanding of the family farm was very different from that of key member states. Mansholt’s plan aligned with the Dutch conception of a family farm as a unit that could support a family when run professionally, using modern techniques. The other member states saw the family farm as the key socio-cultural institution of Europe’s countryside, and thus one requiring preservation. To the non-Dutch member states, Mansholt’s plan portended the destruction of the family farm as they knew it, rendering the plan politically unpalatable and fundamentally unacceptable. The fundamental problem, though, was that Mansholt undertook his reform initiative at a time of politics as usual. In 1968, when his memorandum was published, there were no ongoing trade negotiations. Moreover, not only was there no looming enlargement, but the prospects of accession in general seemed grim, with French President Charles de Gaulle blocking British membership. Paradigmatic reform of the kind the Mansholt Plan proposed is essentially impossible to achieve under politics as usual. Mansholt’s plan faced strong resistance from farmers, and there was no disruptive event to overcome this resistance. Even though Mansholt did not take on the issue of surpluses directly and instead focused on the size of the farming community, he was unable to overcome the refusal of other key actors to accept the need for CAP reform. For these reasons, the fundamental problems plaguing the CAP’s operation carried on into the ensuing decades. The CAP’s unresolved production problems and their associated financial expenditures continued to build in the years following Mansholt’s unsuccessful initiative. The high cost of disposal was all the more alarming given that in 1970 the CAP accounted for 75% of the Community’s budget.

A new funding agreement for the CAP, reached in 1969 and implemented in 1975, would provide much-needed stability for the financing of this incredibly expensive program. Previously, national contributions to the Community were settled through acrimonious negotiations. Under the new plan, the CAP, and by extension the Community, would have its own resources. Specifically, levies on agricultural imports and customs duties were to accrue to the Community. National value added tax receipts, up to a maximum VAT rate of 1%, would meet expenditures in excess of what could be covered by revenue from the levies. Due to delays in harmonization of the member states’ VAT systems, the VAT component of the financing was not implemented until 1979. Essentially, this financial program served to prop the CAP up without fixing it by providing a large, dedicated source of funding. Overproduction, and the associated costs, continued to drive up spending. By 1986, annual CAP spending had reached 56 billion ECU, up from an average of 30 billion ECU between 1979 and 1981. The 1984 Fontainebleau agreement attempted to stabilize expenditures in agriculture by limiting spending increases to 2% annually. The agreement, however, provided no incentive to compel individual farmers to cut back on production. In the wake of limited change, production continued unchecked, and overall expenditure continued to increase at a rate of 18% per year. By 1987, the CAP was violating the policy’s own financial regulations by running a budget deficit of between 4 and 5 billion ECU, which, at the time, “was concealed through clever accounting”. Despite the swelling agricultural budget, farmers did not necessarily become richer. Rather, most farmer incomes held steady or declined because these new funds were directed towards costs associated with exports and/or maintaining the growing surplus. This decline in farmer incomes made reform even more difficult, particularly any proposals that would cut prices, since this strategy would hurt farmer incomes that were already not improving despite a growing CAP budget. Yet, other than a major overhaul of the CAP, cutting the prices paid to farmers for their production was the quickest solution to the CAP’s twin problems of out-of-control spending and excess production. The 1988 Stabilizer Reform was negotiated under politics as usual. Enlargement was not a pressing issue, as Spanish and Portuguese accession had been completed two years prior, and the next round of enlargement would not come until 1995. Although the Uruguay Round had been launched in 1986, negotiations were slow to get underway, and it was not yet clear that the CAP was playing a key role in forestalling progress. With farmer interests dominating CAP policy making, only incremental change was possible. The CAP would be patched up by the 1988 Stabilizer Reform, rather than fundamentally overhauled.

François Mitterrand and Helmut Kohl were both facing major elections. In Mitterrand’s case, he was attempting to fend off a strong challenge from his prime minister, Jacques Chirac, in the 1988 presidential election. Both Mitterrand and Chirac “believed that the agricultural vote would play a crucial role in the election outcome” and thus were reluctant to challenge farmer preferences.
In Germany, the Christian Democratic Union/Christian Social Union was facing tight Länder elections in two states with significant agricultural populations, and believed that it would lose votes if it hurt farmer interests. Kohl and his party therefore had good reason to be reluctant to cross the farmers, as German farmers had habitually sanctioned the CDU/CSU in elections over agricultural policy.

To address the crisis, Germany and France each proposed price cuts of no more than 3% and a grain production ceiling of 165 million metric tons. This plan, however, would do little to address the actual problems plaguing the CAP, as the proposed ceiling would allow for a 6% increase over production levels that were already considered unsustainable. Only after reaching that point would production penalties be applied. In short, the Franco-German plan proposed little change to existing price supports, with minor penalties, at best, for overproduction. It thus did little to address the budget problem. The UK, supported by Denmark, represented the opposite end of the spectrum on CAP reform. Given that British farmers were among the largest and most efficient in the Union, Prime Minister Margaret Thatcher viewed the CAP primarily as a means by which the UK was forced to support less efficient competitors. Just a few years prior to this reform, Thatcher had negotiated the UK rebate, essentially awarding the UK a refund for money it paid into the EU. The rationale for the rebate was that the UK got back from the EU far less than it paid in, with the CAP being the main cause. Thatcher proposed a 15% price cut for cereals in years in which production exceeded an established ceiling and also advocated for a producer tax, called a co-responsibility levy, which would help defray the costs of export subsidies and surplus storage. Ultimately, the final agreement contained a 3% price cut for cereals, as France and Germany preferred, along with the co-responsibility levy that took effect only when cereal production exceeded 160 million tons. Beyond cereals, which was among the more contentious commodities, a system of production ceilings and co-responsibility levies was adopted for the other major crops. However, the ceilings were set so high, and the fines so low, that no change in production practices would result. The reform did little to address the main budgetary issue, as it was estimated that “no savings would result until 1990, if at all”. Because of strong British and Danish resistance to contributing even more to an out-of-control budget, “Germany agreed to contribute an extra 5 billion ECU over a five-year period, representing a 30% increase in their net annual budget contribution”. For the most part, small adjustments were made to spending and revenues, while the basic logic of the CAP remained unchanged. The 1988 Stabilizer Reform did not attempt systemic reform, a change to the CAP’s fundamental paradigm of supporting incomes via incentivized production. Without an opening created by disruptive politics, this type of fundamental reform simply was not possible.

The previous chapter examined the creation of the CAP and early efforts to adjust policies to correct increasingly evident problems. This chapter is the first of four empirical chapters examining the major rounds of CAP reform. Prior to the MacSharry Reform of 1992, the CAP had never undergone a major reform. While there had been minor reforms, none had altered the fundamental operation of the CAP. Instead, the fiercest CAP battles had pertained to the semi-regular negotiations over the setting of prices, particularly for core commodities like cereals and livestock.
By the 1990s, one of the key goals of the CAP at its creation, to increase food production in order to make Europe food secure, had not only been achieved but had since become a threat to the continued existence of the program. Minor adjustments to the CAP, as had been the norm in the past, would not suffice if the CAP was to be sustainable in the long term. The MacSharry Reform occurred when the CAP was in crisis. Linking farmer payments to agricultural output had created a production crisis, which was quickly followed by a budgetary crisis. The existing system was unsustainable: the CAP could no longer afford to pay farmers subsidies linked to production while also absorbing the financial cost of storing and dumping that production. Meanwhile, the GATT Uruguay Round negotiations had ground to a halt, and the clear culprit holding up an agreement badly desired by the European manufacturing and services sectors was European agriculture. The EU, under strong pressure from the French, staunchly defended the protectionist system of European agriculture, while the US-led camp called for aggressive liberalization, including the elimination of income subsidies for farmers.