
NK105 demonstrated efficacy in patients with advanced gastric cancer that had failed to respond to chemotherapy

To our knowledge, only one PEGylated drug has been approved for veterinary applications. This is Imrestor, a PEGylated granulocyte colony-stimulating factor, which was approved in 2016 to increase the number of circulating neutrophils in cows and thus prevent mammary inflammation (mastitis). Although PEGylated drugs have been successfully translated to the clinic, a growing body of literature has highlighted the increased presence of PEG-specific antibodies in the general population due to the extensive use of PEG in cosmetic and pharmaceutical products, correlating with the declining therapeutic efficacy of PEGylated active ingredients. This issue is being addressed by the development of alternative polymer-drug conjugates. In the agricultural industry, polymeric seed coatings are used to control pests and diseases that would otherwise inhibit germination and growth. Coating seeds increases their viability, reduces the risk of the active ingredient leaching into the environment, and minimizes off-target toxicity to other organisms compared to free pesticides. More than 180 coating formulations have been reported, including chitosan, polyvinyl acetate, polyvinyl alcohol, PEG, ethyl cellulose, and methyl cellulose. On the market, the majority of seed coating technologies have been developed by Bayer Crop Science, BASF, Corteva, Monsanto, Syngenta, Incotec/Croda, and Germains.

Micelles are composed of amphiphilic surfactant molecules that spontaneously aggregate into spherical structures in an aqueous environment. This phenomenon occurs only if the concentration of the surfactant molecules exceeds the critical micelle concentration. The core of the micelle is hydrophobic and can sequester hydrophobic active ingredients. The size of the micelle, and therefore the amount of active ingredient that can be loaded in its core, depends on the molecular size, geometry, and polarity of the surfactant.

The small size of polymeric micelles reduces their recognition by scavenging phagocytic and inter-endothelial cells located in the liver and spleen, respectively, and therefore increases the bioavailability of the active ingredient. Most micelles are made of block co-polymers with alternating hydrophilic and hydrophobic segments, and the ratio of drug molecules to the block co-polymers determines their properties. Micelles are often composed of PEG, PLA, PCL, polypropylene oxide, poly-L-lysine, or combinations of the above. Estrasorb was approved by the FDA in 2003 as a topical lotion, and consists of micelles designed for the transdermal delivery of 17β-estradiol to the blood for the treatment of menopausal vasomotor symptoms. This administration route evades first-pass metabolism, achieving stable levels of 17β-estradiol in the serum for 14 days. Furthermore, paclitaxel and docetaxel are commercially available formulated as micellar nanocarriers, thus avoiding the use of Kolliphor EL as a solvent. Various micellar nanocarriers are currently undergoing clinical trials. For example, NK012 is a micellar polyglutamate-PEG formulation covalently bound to the antineoplastic topoisomerase inhibitor SN-38 via an ester bond. SN-38 is slowly released from NK012 by the hydrolysis of the ester bond under physiological conditions, which increases the SN-38 half-life to 210 h. NK012 is undergoing clinical trials for the treatment of solid tumors, triple-negative breast cancer, colorectal cancer, and small-cell lung cancer. Similarly, the NK105 micelle is being investigated for the delivery of paclitaxel to breast cancer, gastric cancer, and non-small-cell lung cancer.
NK105 polymers consist of PEG as the hydrophilic segment and modified polyaspartate as the hydrophobic segment. Genexol-PM is a micellar nanocarrier consisting of mPEG-block-D,L-PLA for the delivery of paclitaxel for the treatment of non-small-cell lung cancer, hepatocellular carcinoma, urothelial cancer, ovarian cancer, and pancreatic cancer.

Genexol-PM was shown to behave similarly to the FDA/EMA-approved nanocarrier Abraxane and has been approved for the treatment of metastatic breast cancer and advanced non-small-cell lung cancer in South Korea. NC-6004 is being investigated for the delivery of cisplatin to head and neck cancer as well as non-small-cell lung cancer. NC-6004 demonstrated a significant reduction in cisplatin-induced neurotoxicity and nephrotoxicity. Micelles are also being investigated for the treatment of cystic fibrosis, metabolic syndrome, psoriasis, and rheumatoid arthritis. In veterinary medicine, a randomized trial was initiated in 2013 to investigate the safety and efficacy of micellar paclitaxel for the treatment of dogs with grade II or III mast cell tumors. The micelle consisted of a surfactant derivative of retinoic acid. Dogs treated with micellar paclitaxel showed a three-fold higher treatment response compared to a control group receiving the standard-of-care drug lomustine. However, the FDA conditional approval of Paccal Vet-CA1 was withdrawn in 2017 by the manufacturer Oasmia Pharmaceutical AB to allow time to study lower doses in order to reduce adverse effects such as neutropenia, hepatopathy, anorexia, and diarrhea. In a different application, micellar vitamin E has been tested as an antioxidant in racehorses undergoing prolonged aerobic exercise, where it prevented exercise-induced oxidative lesions and maintained a healthy general oxidative status during intensive training. Micelles have also been developed as promising nanocarriers for the encapsulation of pesticides, helping to prevent adsorption to soil particles. Examples include the micellar encapsulation of azadirachtin, carbendazim, carbofuran, imidacloprid, rotenone, thiamethoxam, and thiram. These formulations are still undergoing development and have been tested in vitro and in the field.
Inorganic nanocarriers include natural and synthetic materials based on silica, clay, and metals such as silver, gold, titanium, iron, copper, and zinc. These nanocarriers are physiologically compatible, resistant to microbial degradation, and environmentally friendly, which makes them suitable for medical, veterinary, and agricultural applications. Even so, their use as nanocarriers has been somewhat overshadowed by their success in other medical applications.

In particular, metallic nanoparticles have been developed as theranostic and photothermal reagents, and for the treatment of iron deficiency. The first formulation approved by the FDA in 1974 was iron dextran for the treatment of iron deficiency. Eight more formulations have since been approved by the FDA or EMA. We do not consider these formulations as nanocarriers because the treatment modalities rely entirely on the nanoparticle itself without a cargo of active ingredients. However, metallic nanocarriers have recently been proposed in which the active ingredient is attached to the surface by physical adsorption, electrostatic interactions, or conjugation. In particular, gold nanoparticles allow the conjugation of many biological ligands, including DNA and siRNA. Thus far, only one clinical trial has been carried out using metallic nanocarriers, namely spherical nucleic acid gold nanoparticles for the delivery of siRNA to patients with glioblastoma or gliosarcoma. More advanced metallic nanocarriers are under development, including particles that respond to external triggers, such as light, magnetic fields, and hyperthermia, to release their cargo in a controlled manner. For example, gold and silver nanoparticles have been conjugated to various cancer drugs. Mesoporous silica nanocarriers (MSNs) have been investigated extensively because they are stable particles with a high payload capacity due to their porous structure, they have a tunable pore diameter, and surface modifications can impart new functionalities such as targeted delivery. MSNs have already been tested in the laboratory to deliver cancer drugs such as doxorubicin and camptothecin, antibiotics such as erythromycin and vancomycin, and anti-inflammatories such as ibuprofen and naproxen, with remarkably high loading rates of up to 600 milligrams of cargo per gram of silica. This loading capacity of up to 60% far exceeds that of liposomal and polymeric nanocarriers.
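The loading figures above can be reconciled with a quick calculation. Note that "loading capacity" is reported under two conventions, cargo mass relative to carrier mass or cargo mass relative to total particle mass, and the quoted 60% is consistent with the former. A minimal sketch, with function names of our own choosing and illustrative values:

```python
def loading_ratio(cargo_mg: float, carrier_mg: float) -> float:
    """Cargo mass as a percentage of carrier mass (drug/carrier ratio)."""
    return 100.0 * cargo_mg / carrier_mg

def loading_fraction(cargo_mg: float, carrier_mg: float) -> float:
    """Cargo mass as a percentage of the total particle mass."""
    return 100.0 * cargo_mg / (cargo_mg + carrier_mg)

# 600 mg of cargo per 1,000 mg (1 g) of silica:
print(loading_ratio(600, 1000))     # 60.0 -> the "60%" quoted in the text
print(loading_fraction(600, 1000))  # 37.5 -> cargo share of total particle mass
```

When comparing loading capacities across papers, it is worth checking which of the two conventions is in use, since the same particle can be described as 60% or 37.5% loaded.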
For example, the liposomal formulation Doxil and the polymeric formulation Eligard achieve loading capacities of 31% and 27%, respectively. However, some silica nanoparticle formulations have been shown to cause hemolysis due to strong interactions between silanol groups on the carrier and phospholipids in the erythrocyte plasma membrane. Another concern is their persistence in vivo due to the absence of renal clearance. These issues could be addressed by modifying the surface chemistry or applying coatings. In an agricultural context, silica is already highly abundant in soil, and such particles could therefore be engineered for the controlled release of active ingredients without the carrier itself causing environmental harm. For example, MSNs have been used to deliver the insecticide chlorfenapyr over a period of 20 weeks, which doubled the insecticidal activity in field tests. The fungicide metalaxyl was also loaded into MSNs, allowing its slow release in soil and water over a period of 30 days. Similarly, nanocarriers based on naturally occurring aluminum silicates have been formed into phyllosilicate sheets for the intercalation of antibiotics and herbicides, allowing sustained delivery. Several metallic nanoparticles have demonstrated antimicrobial properties, and the EPA has already approved silver nanoparticles for use as an antimicrobial agent in clothing, but not yet for the delivery of active ingredients. Finally, carbon nanotubes are also being investigated for medical and agricultural uses because their shape and surface chemistry confer unique properties, although their toxicity remains a translational barrier. We recommend the following reviews for further information.

Over the course of evolution, nature has yielded a variety of biomaterials with great structural complexity that remains difficult to emulate.

The analysis of such complexity requires the appropriate molecular methods, and for this reason the development of proteinaceous nanocarriers has lagged behind that of the simpler liposomal, polymeric, and micellar structures. The production of proteinaceous nanocarriers has also required the development of tools for the expression of recombinant proteins and strategies for the creation of diversity, such as directed evolution, genome editing, and synthetic biology. These tools have allowed the production of hierarchically organized proteinaceous structures, including albumin nanoparticles, heat shock protein cages, vault proteins, and ferritins. These comprise repeated protein subunits forming highly organized nanostructures that are identical in size and chemical composition. Although synthetic nanoparticles can also be assembled into complex structures, the sophistication and monodispersity that can be achieved with proteins has yet to be replicated. Proteinaceous nanoparticles have been used as biocatalysts for the synthesis of novel materials, but are also useful for the delivery of active ingredients in medicine and agriculture. The first proteinaceous nanocarriers were developed to mimic the properties of plasma proteins, thus increasing circulation times and reducing systemic side effects. In 2005, the FDA approved the proteinaceous nanoshell Abraxane, consisting of albumin-bound paclitaxel, for the treatment of breast cancer. The conjugation of paclitaxel to albumin stabilized the drug even in the absence of Kolliphor EL, and enhanced the uptake of the active ingredient compared to the Kolliphor EL formulation. Given the safety and efficacy of drugs conjugated to albumin, two other albumin nanocarriers are undergoing clinical trials.
The first is an albumin conjugate of the protein kinase inhibitor rapamycin, indicated for colorectal cancer, bladder cancer, glioblastoma, sarcoma, and myeloma. The second is an albumin conjugate of docetaxel, indicated for the treatment of prostate cancer. Albumin has a long circulation half-life due to its interaction with the recycling neonatal Fc receptor. It is beneficial for the delivery of small molecules that are unstable or have low solubility in blood, as well as proteins and peptides that are rapidly cleared from the circulation. Small molecules can be chemically fused to albumin and administered as a conjugate, and strategies to target small-molecule drug cargoes to albumin in vivo have also been developed. Heat shock protein cages, vault proteins, and ferritins have also been investigated for the delivery of active ingredients, although no clinical trials have been reported thus far. Heat shock proteins are chaperones that promote the folding of newly synthesized proteins and the refolding of denatured ones, which means they are naturally stable and possess channels and cavities for the sequestration of cargo. There are five families of heat shock proteins: Hsp100, Hsp90, Hsp70, Hsp60, and the small heat shock proteins, which range in size from 12 to 43 kDa. Heat shock proteins assemble into large complexes that vary in size and shape, and they can be engineered to carry and deliver active ingredients such as doxorubicin. Vault nanoparticles are barrel-like ribonucleoproteins found in many eukaryotes. They are 41 × 73 nm in size and resemble the vault of a gothic cathedral.
Their precise biological function remains unknown, although they are thought to play a role in nuclear transport, immunity, and defense against toxins. Several proteins have been encapsulated in vault nanocarriers, including the lymphoid chemokines CCL19 and CCL21, the New York esophageal squamous cell carcinoma 1 antigen, the precursor of adenovirus protein VI, the major outer membrane protein of Chlamydia trachomatis, and the egg storage protein ovalbumin. Vault Pharma is one company specializing in the development of these structures. Finally, ferritin is an iron-storage protein with 24 subunits that self-assemble into a spherical cage structure 12 nm in diameter with a molecular mass of 450 kDa.

There is also geographic variation in lactase persistence phenotypes that complicates the pattern here

Although there is evidence for early and rapid domestication of pigs in the lower Yangtze, the adoption of domesticated animals commonly used in dairying did not occur until the late Holocene. The prevalence of lactase persistence phenotypes within China remains low today, although there are slightly higher frequencies in the North. The evidence for long-term dietary change within South Asia is particularly complex, with considerable spatial and temporal variation. The earliest pottery and domestic rice are present by 9 kya, but evidence for significant sedentary villages and agricultural dependence occurs only after 4 kya, following the mid-Holocene movement of crops from both Western Eurasia and China. There is evidence for the independent domestication of cattle in the Indus region ca. 7 kya and convergent evolution for lactase persistence in South Asia, with the highest frequencies in the northwest parts of the region, but very low frequencies in southern and eastern areas of the Indian subcontinent. It is also notable that Indian pastoralists maintain greater stature than higher-caste individuals, which has been attributed to milk consumption. The question of body size variation as a reflection of diet and health in the past has been of long-standing interest to bioarchaeologists. While documented declines in Neolithic estimated statures have been linked to lower predicted statures based on genetics, adult body size also reflects developmental plasticity and life history variation. Stature itself is not really a trait, but rather a consequence of growth, which ultimately reflects variation in strategies for energy allocation throughout development. Body mass likewise indicates investment in lean and fat tissue, although unlike stature, these can respond to ecological stresses throughout adult life.
Improved growth is generally a good marker of health because many aspects of somatic maintenance benefit from better growth in early life, whereas defense against pathogens and early reproduction reduce the energy available for linear growth and lean tissue deposition.

Applying a life history perspective to growth provides insights into the likely role of infectious disease and pathogens in reductions in stature in prehistory, as there are multiple routes to generate the adult phenotype, which extend beyond diet to include the allocation of energy to immune function or reproduction, potentially mediated by fat deposition. A comparison of trends in stature across the past 10 kya in other regions is presented in Fig. 3. Southern Europe is characterized by a general decline between 10 and 6 kya, followed by relative stability through the mid-Holocene. In Central Europe, there is a marked and significant increase in male stature between 8 and 5 kya, and a general increase in female stature across the same time frame. Both males and females in Northern Europe are also characterized by a general increase in stature from 7 kya, with males peaking ca. 3 kya and females ca. 2 kya. Stature trends across the same time frame in the Nile Valley are more variable and show no specific long-term trends, while in China statures are generally consistent throughout the Holocene, except for a decline among females after 3 kya. A contrasting pattern is observed in South Asia, where there is a significant decline in both male and female stature throughout the Holocene. A regional comparison of Holocene body mass trends illustrates a consistent pattern of initial decline in Southern Europe followed by a period of relative stability. In Central Europe, there is a general increase in male body mass between 4.5 and 2 kya, while female mass is relatively stable throughout the Holocene. In Northern Europe, there are early Holocene declines in both male and female body mass that reach their low point approximately 5 kya and are followed by increases by 2 kya.
When these patterns are contrasted with other regions, we see relatively little change in the Nile Valley, while in South Asia male body masses increase in the first half of the Holocene in a period where female mass appears to decline. Here, estimated male masses fall considerably after 4 kya.

In China, there appears to be relative stability in body mass in the early part of the Holocene, followed by increases among males from 5 to 2 kya, and among females from 3 to 1 kya. Noting that the most significant long-term increases in stature occur in Central and Northern Europe, where there is evidence for strong selection acting upon lactase persistence during the mid-Holocene, we consider specific trends in sub-regions of Northern Europe (Britain, southern Scandinavia, and the eastern Baltics) over the past 8,000 y. In Britain, there is relatively minor and non-significant temporal variation in stature through time, while male body mass generally increases from ca. 5 to 2 kya. In Scandinavia, there are marked and significant increases in male stature between 7 and 4 kya. Body masses in the region are consistent among early Holocene males, but females show a decrease through the mid-Holocene. Both sexes show increases in body mass between 5 and 2 kya, but the trend is more pronounced among females. In the Baltics, increases in stature are expressed in both males and females between 6 and 2 kya, while body masses are relatively consistent throughout the Holocene. To investigate the spatiotemporal patterning of body size variation throughout Europe in greater detail, we generated heat maps of mean statures and body masses. The results demonstrate fairly uniform stature across Europe before 10 kya and a general decline from 10 to 6 kya, followed by increases that are most pronounced in Northern Europe and southern Scandinavia. Body mass trends follow a broadly similar pattern, with much of Europe characterized by estimated body masses above 65 kg before 10 kya, followed by declines in much of Western Europe through to 6 kya. Increases in body mass are observed in Central Europe from 6 to 4 kya and across most of Northern Europe from 4 kya to the present.
The period from 10 to 6 kya is predominantly before the transition to agriculture in central and northern regions but includes hunter-gatherers, farmers, and others with variable or transitional subsistence strategies, suggesting that further analyses with an expanded dataset are required to contextualize this trend.

In this study, we investigated long-term trends in human stature and body mass relative to late Pleistocene and Holocene cultural change in seven different regions. We analyzed data by chronological and geographical information rather than cultural labels, given the significant spatiotemporal and regional variation in cultural characteristics attributed to terms such as the Neolithic, opting instead to discuss the broader timescales upon which the transition to domesticated plants and animals was enacted. The results demonstrated that in most regions body size decreased before the earliest manifestations of agriculture, that regional patterns of phenotypic variation over time are variable, and that this spatiotemporal variation in stature and body mass is not directly associated with the onset of the Neolithic. Given their timing, these trends cannot simply be explained by subsistence changes related to the reliance on domesticated plants and animals. We also noted recent phenotypic diversification that is most pronounced in the last 2,000 years, which requires further study but may stem from a combination of demographic expansion, genetic diversification, and socio-economic inequality. It is worth noting that the long-term trends in the Levant, where the earliest transition to agriculture was observed as a complex process over millennia, demonstrated relatively stable stature and body mass over time. The Levant is a region characterized by long-term population continuity and the in situ domestication of numerous species of indigenous plants and animals over an extended period of the terminal Pleistocene and early Holocene. The transition to agriculture in this region represented a long period of mixed hunting, gathering, and cultivation of crops and domesticates that were well adapted to local environmental conditions.
Similarly, there was no significant change in stature through time in China after plant domestication, and an increase in body mass among males during the later Holocene. This is a region that is also characterized by population continuity, local domesticates, a very long period of mixed foraging and farming rather than an abrupt agricultural transition, and high levels of environmental productivity. It is important to note that our approach to comparing population trends by region may confound local impacts of migrations and gene flow, such as the well-documented increase in steppe ancestry among northern Europeans, which may have influenced north-south gradients in human stature; similar population movements in other regions likely influenced the complexity and timing of cultural and phenotypic changes. In South Asia, for example, we noted long-term reductions in stature and body mass throughout the Holocene. The region, however, exhibits a high degree of ecological diversity and is characterized by the adoption of different domesticates that originated in East Asia, Western Asia, and Africa in different regions of the Indian Subcontinent.

Similarly, in the Nile Valley, another region characterized by the adoption of plant and animal domesticates from other regions, results are highly variable and likely confounded by the complexity of migration history in the region. At present, there are insufficient data to match aDNA evidence for ancestry with direct phenotypic measures on the broad scale presented in this paper. However, it is likely that underlying genetic variation and changes in the sociocultural environment, including diet, underpin phenotypic change. Further research will be required to clarify long-term spatiotemporal trends in phenotypic and genetic variation. We also aimed to test the LGH by determining whether the geographic and temporal timing of selection for LP phenotypes is associated with increases in stature and body mass. The most significant mid-Holocene increases in stature and body mass occurred in Northern Europe between 7 and 4 kya, and these were preceded by increases in stature in Central Europe between ∼8 and 5 kya. Both regions provide evidence for mid-Holocene selective sweeps in genetic variants associated with LP, lending preliminary support to the LGH. Within Northern Europe, modest increases in body mass were noted in Britain among males, but the most significant trends toward increased stature and body mass were found in the Baltic and southern Scandinavian regions. Heat map results demonstrate how the current patterns of stature and mass variation in Europe were established throughout the mid to late Holocene. While size increases were noted in regions where there is evidence of natural selection in response to dairying, we noted different trends among males and females, with more significant increases in stature generally expressed among men and more significant variation in body mass among women.
We suggest this is explained by greater plasticity among men, particularly in stature, in response to environmental and cultural fluctuations, while women's phenotypic variation is better able to buffer environmental stress via sexual dimorphism in body mass that reflects lifelong differences in energetics and somatic investment. There is evidence that males show greater stunting in response to early-life undernutrition, which would lead to greater variation in adult male statures. While skeletal methods of body mass estimation do not generally reflect late-life accrual of body mass, both lean mass and fat mass are components of maternal fitness, and substantial variability in these tissues emerges prior to reproduction, suggesting that body mass variation is more directly linked to female fitness than stature. Phenotypic plasticity may also have been expressed most strongly late in development, where IGF-I in dairy milk may have directly fueled growth differences and sexual dimorphism. In general, we note that while the timing of size increases corresponds with selective sweeps in lactase persistence, it is unclear whether phenotypic variation reflects underlying genetic variation or whether phenotypic plasticity precedes later genetic adaptation, but there is growing evidence that the latter is an important mechanism of adaptability. Overall, our results provide provisional evidence for greater phenotypic stability in regions of in situ domestication and where the transition to agriculture was gradual over millennia. The dispersal of farmers into novel environments, where foreign domesticates may have struggled to establish, appears to have led to greater phenotypic diversity in human populations.

More importantly, the F-statistics demonstrate that the instruments have sufficient power

We find no evidence that the dams included in the sample are more or less likely to be used for irrigation purposes or to supply water to cities. However, the excluded and included dams differ in terms of their height, the size of their reservoir, their capacity, and their average capacity lost to sedimentation. These results suggest that our inclusion criteria are somewhat biased toward larger dams that can retain more water, but it is uncertain whether this is likely to introduce bias into our analysis in terms of dam performance and its impact on child nutritional status. In their influential paper, Duflo and Pande use Indian districts as their unit of analysis and proceed to identify which areas are upstream and downstream from each other. However, it is unclear whether one can apply this strategy in Africa. In particular, visual inspection of administrative regions in Africa reveals that the borders of many regions run at least partially along rivers; see for instance the case of the Southern African tip in Figure 1.3. As a consequence, many regions contain both the catchment and the command area of a dam. Strobl and Strobl propose an arguably superior spatial breakdown in terms of upstream and downstream relationships that is based on actual river flow data. The U.S. Geological Survey Data Center has developed a geographical database, HYDRO1K, providing a number of derivative products widely used for hydrological analysis. They use the drainage basin boundaries from HYDRO1K, which divide the African continent into 7,131 6-digit drainage basins with an average area of 4,200 km². More importantly for our analysis, the database assigns to each basin a Pfaffstetter code that allows one to determine whether it is upstream of, downstream of, or unrelated to another basin.
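To illustrate how Pfaffstetter codes encode flow topology, the sketch below implements the standard rule used with HYDRO1K-style codes: basin A is downstream of basin B when the two codes share a common prefix, A's first differing digit is smaller and odd (an interbasin below the junction), and all of A's remaining digits are odd, so that A lies on the main stem carrying B's flow. This is our reading of the published coding rule, not the paper's own code, and the example codes are invented:

```python
def is_downstream(a: str, b: str) -> bool:
    """Return True if basin `a` lies downstream of basin `b`, i.e. water
    leaving `b` eventually flows through `a`, under the standard
    Pfaffstetter rule."""
    if a == b:
        return False
    # find the first position where the codes differ
    for i, (da, db) in enumerate(zip(a, b)):
        if da != db:
            break
    else:
        return False  # one code is a prefix of the other: nested basins
    da, db = int(a[i]), int(b[i])
    # downstream basin's digit must be smaller and odd (an interbasin)
    if da >= db or da % 2 == 0:
        return False
    # all digits of `a` after the branch point must be odd (main stem)
    return all(int(d) % 2 == 1 for d in a[i + 1:])

# Basin 411311 is on the main stem below tributary basin 411421:
print(is_downstream("411311", "411421"))  # True
print(is_downstream("411421", "411311"))  # False
print(is_downstream("411211", "411421"))  # False: digit 2 is even (tributary)
```

A pairwise pass of this predicate over the 6-digit codes is enough to classify every basin pair as upstream, downstream, or unrelated, which is exactly the relationship the analysis requires.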

Figure 1.4 depicts the spatial breakdown of the African continent according to our 6-digit basins. For comparison, the figure depicts these jointly with the outline of the country borders. Basins vary greatly in shape and size, with a large number crossing national borders. Figure 1.5 depicts the 6-digit basins and the outline of administrative regions in the Southern African tip. The figure confirms that even at the sub-national level there is little correspondence between administrative regions and 6-digit basins. A similar picture emerges for the Southern Indian region, where there is no obvious correspondence between administrative regions and 6-digit basins; see Figure 1.6. The main challenge in estimating the effect of dams on child nutrition is that dams are unlikely to be randomly allocated across regions, leading to a serious endogeneity problem. Moreover, with a cross-section of 6-digit river basins, we are unable to control for time-invariant basin characteristics that influence dam location and are correlated with child nutrition, a strategy that would otherwise attenuate the endogeneity problem. In their study of Indian dams, Duflo and Pande use the share of dams in a state prior to their period of analysis, interacted with a district's suitability for dam construction based on the district's river gradient, to predict the number of dams in each district. They then use this predicted number of dams as an instrument for the actual number of dams in a district. In this paper we implement an instrumental variable strategy developed by Strobl and Strobl, who modify Duflo and Pande's approach along several dimensions. Strobl and Strobl use the fact that, starting with European colonization, a number of treaties were signed between African states to clarify the management of water resources.

Treaties, especially those signed in the colonial period, focused on the division of water resources or encouraged the construction of dams. For instance, Lautze and Giordano note that about three quarters of the treaties cited as a goal the construction of dams for hydropower purposes and/or to expand the area of irrigated land. Strobl and Strobl use the fact that every country on the African continent has territory in at least one treaty basin. In the HYDRO1K data set, treaty basins correspond to the 1-digit and 3-digit Pfaffstetter code classification and cover 60 per cent of Africa's total land area. To construct the relevant geographical delineation of the policies influencing dam construction, Strobl and Strobl use two databases. The first is the International Freshwater Treaties Database, which provides a comprehensive collection of international freshwater-related agreements since 1820, including summaries as well as coding according to the year signed and the river basins and countries involved. The second is the database on the historical formation of treaty basin organizations in Africa compiled by Bakker. Combining these two databases reveals a total of 98 treaty basin organizations formed since 1884, involving 53 countries and 59 river basins. Figure 1.7 depicts these treaty basins. As emphasized earlier, the treaty basins are clearly transnational, generally cutting across several countries.
Moreover, their size, ranging from 1-digit to 3-digit Pfaffstetter codes, and their potential extent of coverage are at a substantially larger scale than the individual regions that we use as our unit of analysis, the 6-digit Pfaffstetter code. In this paper we use this specific policy context and the approach in Duflo and Pande to develop an instrumental variable strategy to estimate the effect of dams on child nutrition.

As in Duflo and Pande, we use the fact that a 6-digit basin’s suitability for dams should influence the number of dams built in the basin relative to other 6-digit basins in the same treaty basin. More specifically, we interact a 6-digit basin’s river gradient with the proportion of dams in the treaty basin it falls into as an instrument for the number of dams in the 6-digit basin. As such, we rely only on within-treaty-basin differences in suitability for dams to estimate the effect of dams on child nutrition. Moreover, in the African context, an important distinction needs to be made between perennial and ephemeral rivers: the former’s flow is continuous, while the latter carry water for only part of the year. Ephemeral rivers tend to be located in the drylands of Africa and are much less suitable for dams; see Seely et al. For instance, to intercept a large volume of water, a dam on an ephemeral river must be large in relation to the average inflows, but such dams are at high risk of failure because of the unpredictability of flash floods. Nevertheless, because of the lack of sufficient perennial water sources, many countries rely at least in part on ephemeral rivers for dam locations as well. For example, in Namibia only 10 per cent of the population rely on perennial rivers for their livelihood, and only 3 of the 19 major dams in the FAO database are located on those rivers. Treaty basin fixed effects ηb control for time-invariant characteristics that affect child nutrition and are correlated with the likelihood of dam construction, allowing us to use only within-river-basin, cross-sub-basin variation for identification. However, even in this situation, there might be unobservable determinants of child nutritional status that are correlated with the incidence of dam construction. In this case OLS estimates of the effect of dams will be biased.
For instance, if sub-basins where households are relatively richer are more likely to receive dams, then the OLS estimate of β1 will be biased upward while the OLS estimate of β2 is likely to be biased downward. As in Duflo and Pande, we use the non-monotonic relationship between river gradient and the incidence of dam construction to implement an instrumental variable strategy. The approach consists of using exogenous variation in the geographic features of different river basins to predict the number of dams in a sub-basin. These predicted numbers of dams are then used as instruments for the actual number of dams. We construct measures of a sub-basin’s geography, such as elevation and river gradient, using topographic information for multiple cells in each river basin. This information is used to compute the fraction of each sub-basin in different elevation categories and the fraction of a river basin falling into four gradient categories. Lastly, to compute river gradient we restrict attention to cells in a sub-basin through which a river flows and compute the fraction of area in the above four gradient categories. Our panel on dam construction allows us to use all the information available to estimate the number of dams in a given sub-basin located in a river basin at certain points in time. Three sources of variation are used to predict the number of dams in a sub-basin: differences in dam construction across years in Africa, differences in the contribution of each river basin to the increase in dams built, and differences across sub-basins driven by geographic suitability. First, we show that river gradient matters for dam location. As a first step we regress the number of dams in 2000 on the fraction of river gradient in each gradient category by type of river, the average gradient in the 6-digit basin, river length by type of river, the total area of the basin, and treaty basin fixed effects. We only show the coefficients on our main variables of interest.
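The two-stage logic of this strategy can be sketched in a few lines. The sketch below uses synthetic data and hypothetical variable names (`gradient`, `dam_share`, `haz`); it is not the paper’s actual specification, which involves several gradient categories, river-type distinctions, and a panel of years. It only illustrates why instrumenting with suitability interacted with the treaty-basin dam share, after absorbing treaty basin fixed effects, can recover an effect that naive OLS misses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 2000 six-digit sub-basins nested in 20 treaty basins.
# All names and magnitudes are illustrative, not taken from the paper.
n, n_tb = 2000, 20
tb = rng.integers(0, n_tb, n)               # treaty-basin id of each sub-basin
gradient = rng.uniform(0, 1, n)             # suitability proxy (river gradient share)
dam_share = rng.uniform(0, 1, n_tb)[tb]     # share of dams in the treaty basin
z = gradient * dam_share                    # instrument: suitability x dam share
u = rng.normal(size=n)                      # unobserved confounder
dams = 2.0 * z + u + rng.normal(size=n)     # endogenous regressor
haz = -0.5 * dams + u + rng.normal(size=n)  # outcome, e.g. height-for-age z-score

def demean_within(v, groups):
    """Subtract group means; equivalent to treaty-basin fixed effects."""
    totals = np.bincount(groups, weights=v, minlength=groups.max() + 1)
    counts = np.bincount(groups, minlength=groups.max() + 1)
    return v - (totals / counts)[groups]

# Within-transform everything, then run 2SLS on the demeaned data.
z_t, x_t, y_t = (demean_within(v, tb) for v in (z, dams, haz))

pi = (z_t @ x_t) / (z_t @ z_t)              # first stage: dams on the instrument
x_hat = pi * z_t                            # predicted number of dams
beta_iv = (x_hat @ y_t) / (x_hat @ x_hat)   # second stage

beta_ols = (x_t @ y_t) / (x_t @ x_t)        # naive OLS absorbs the confounder
print(f"IV estimate:  {beta_iv:.2f}  (true effect -0.5)")
print(f"OLS estimate: {beta_ols:.2f}  (biased upward by u)")
```

Demeaning within treaty basins is numerically identical to including a full set of treaty basin dummies, which is why only within-basin variation in suitability identifies the effect.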

The results of this analysis are reported in Table 1.3, columns and , and are consistent with Duflo and Pande’s finding for perennial rivers: moderate gradients in perennial rivers are more likely to be associated with dam construction. We also find that high gradients are less likely to be associated with dam construction. For ephemeral rivers we find that moderate and high gradients are less likely to receive dams. One possible explanation is that ephemeral rivers tend to require wider water flow for dam construction and tend to be less steep than perennial rivers. Moreover, many of the dams with a water supply purpose tend, in our data, to be located on low-gradient ephemeral rivers. We also estimated the model on the sample of dams with irrigation as one of the major purposes and find qualitatively similar results. Overall these results provide support for using river gradients calculated separately for perennial and ephemeral rivers as predictors of dam construction. Next we report in columns and of Table 1.3 the estimated coefficient of RGrjks×Dbt from the first-step regression in the pooled sample over all years. Column shows the results for all dams while column reports the results for dams with some irrigation purpose only. The results for perennial rivers are overall similar to the cross-sectional results. For ephemeral rivers we find that as the share of dams in the treaty basin increases, additional dams are less likely to be built in 6-digit river basins with very small river gradients. Table 1.4 presents estimates of the effect of dams on the nutritional status of children. Panel A provides Feasible Generalized Least Squares estimates, and Panel B Feasible Optimal IV estimates. The coefficient on “own dam” captures the impact of dams built in that 6-digit river basin, while “upstream dam” measures the effect of dams in upstream 6-digit river basins.
In this table each row corresponds to a separate regression; rows 1 and 3 present estimates where the dependent variable is the height-for-age z-score or an indicator equal to one if a child’s height-for-age z-score is below −2 standard deviations, while rows 2 and 4 present estimates using the weight-for-age z-score or an indicator equal to one if a child’s weight-for-age z-score is below −2 standard deviations. The models in columns 4 to 6 and 10 to 12 are estimated using a linear probability model. In columns 2 to 6 the analysis is restricted to dams with irrigation as one of their main purposes, while in columns 7 to 12 we include all dams.

Survey data show a very close relationship between information value from and trust in an organization

These results corroborate previous studies demonstrating that ecological and moral concerns matter in farmer decision-making, and that motivations are not exclusively profit-driven. The latter statement seems intuitive: growers would hope policymakers would incorporate a diverse range of perspectives into their decisions, especially in light of growers’ sentiments about a lack of stakeholder participation during the updated waiver. Interestingly, one issue that more farmers agreed with in 2006, yet more respondents disagreed with in 2015, was that “management practice requirements of the Agricultural Waiver are fair to growers.” As described in Chapter 3, fairness was a hotly contested issue in the 2012 Agricultural Waiver negotiation process, spanning a number of equity issues from the types of BMPs required to the cost and unequal burdens of tiered mandates. This finding is another testament to farmers’ increasing frustration with the Ag Waiver process and mandates, as alluded to by the Farm Bureau. The final series of questions in the survey asked growers about their trust in and communication with other groups and water quality agencies, as well as the value of the information they received from those organizations. In both years, environmental groups were the least trusted and had the least contact frequency, whereas other farmers were the most communicated with but not necessarily the most trusted. A Pearson’s correlation test between information value and trust found a strong positive relationship between the two variables; the coefficients, varying between 0.80 and 0.99, were close to a perfect positive relationship.
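A Pearson correlation of this kind is straightforward to compute directly. The ratings below are invented for illustration (they are not the survey’s data), and `pearson_r` is a hand-rolled helper rather than a function from the study:

```python
import numpy as np

# Hypothetical 0-10 ratings of information value and trust for one organization.
info_value = np.array([2.0, 4.0, 5.0, 6.0, 7.5, 9.0])
trust      = np.array([1.5, 3.5, 5.5, 6.0, 8.0, 9.5])

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    xd, yd = x - x.mean(), y - y.mean()
    return (xd @ yd) / np.sqrt((xd @ xd) * (yd @ yd))

r = pearson_r(info_value, trust)
print(f"r = {r:.2f}")  # values near 1 indicate a near-perfect positive relationship
```

A coefficient in the 0.80–0.99 range, as reported in the survey, sits close to the theoretical maximum of 1.0 for a perfectly linear positive association.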
While data from this survey are not sufficient to test a causal relationship (for example, whether the quality of information from a given agency influenced feelings of trust), these results do substantiate the institutional rational choice model’s belief that there is indeed a strong relationship between information and trust. There also appeared to be a close positive relationship between the amount of communication, trust, and information value associated with a given organization.

These results support the body of literature on the connection between trust and contact frequency. Interestingly, the results show a few exceptions to this trend, just as they did in Lubell and Fulton’s study. Growers reported a dip in trust despite more communication in their relationships with a few different organizations, all of which had regulatory roles, including the Regional Board and Preservation, Inc., and to a lesser extent, the County Agricultural Commissioner’s office. These cases could be examples of the “institutional distance” phenomenon, whereby regulators might have a higher frequency of contact with growers, but physical distance prevents face-to-face communication and/or decision making is centralized, making the institutional distance greater. Another possible explanation for the dip in trust despite more communication could be differing values and interests between growers and regulatory agencies, as described by the Advocacy Coalition Framework. These different interests could also help explain the low trust scores for the other group that might be perceived as having very different views and interests than growers: environmental groups, which scored 3.6 out of 10 in 2006, and 2.8 in 2015. Despite these exceptions, a more in-depth look at the association between trust and communication confirms a strong relationship between the two variables for most non-regulatory agencies. The 2015 survey results show that there was a significant improvement in the amount of trust when a grower had contact with an organization compared to when they did not have any contact with that group. The only two exceptions to this trend were farmers’ relationships with the Regional Board and farmers’ relationships with other farmers.
In both cases, trust did not significantly improve with contact, perhaps suggesting that the complex historical relationships with these two polarizing groups (the group regulating farms and the group most aligned with growers’ own values) overshadow factors such as contact frequency when measuring trust. To test the Farm Bureau’s observation of trust decreasing between the two Agricultural Waivers, mean trust in an agency was compared side by side for the two surveyed years, and significance was tested with a two-tailed t-test. Results show that trust in the Regional Board decreased significantly between 2006 and 2015. Yet despite the significant decline, the mean trust scores for the Regional Board were relatively close between the two surveys.
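The comparison of mean trust scores rests on a standard two-sample t statistic. A minimal sketch follows, using hypothetical 0–10 trust scores (not the survey’s data); the study does not specify which variant it used, so Welch’s unequal-variance form is assumed here:

```python
import math

def welch_t(x, y):
    """Welch's two-sample t statistic for a difference in means."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)  # sample variances
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    return (mx - my) / math.sqrt(vx / nx + vy / ny)

# Hypothetical trust-in-the-Regional-Board scores for the two survey years.
trust_2006 = [5.0, 6.0, 4.5, 5.5, 6.5, 5.0, 4.0, 6.0]
trust_2015 = [3.5, 4.0, 3.0, 4.5, 3.5, 4.0, 2.5, 3.0]

t = welch_t(trust_2006, trust_2015)
print(f"t = {t:.2f}")  # |t| well above ~2 suggests significance at the 5% level
```

In a real analysis the statistic would be referred to a t distribution with Welch-adjusted degrees of freedom to obtain the two-tailed p-value.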

Another group that experienced a significant decrease in trust over this time period was environmental groups. While the information from the survey is not comprehensive enough to verify a causal relationship between decreased trust and the two Ag Waivers, the significant decrease in trust over time does give credence to the Farm Bureau’s concern about growers’ declining relationship with the primary regulatory agency, the Regional Board. Interestingly, one group that might have been expected to gain trust from growers between the two surveys, but did not, was Preservation, Inc. Created in 2004, Preservation, Inc. was still little known during the first survey, but by the second survey the agency was providing valuable services to the vast majority of growers. One possible explanation for the unchanging trust in the primary monitoring agency despite more communication is that their core values differed substantially, heavily swaying growers’ perception of the agency. Finally, a subset of responses from the third set of questions, on opinions about water quality management practices, and a subset of responses related to trust from the fourth set of questions were assessed for correlation, with particular attention to trust in the Regional Board. Findings suggest that trust in the Regional Board is associated with growers’ opinions on water quality management practices. Trust in the Regional Board was greater among growers who agreed or strongly agreed with statements related to the fairness, effectiveness, and success of the water management practices mandated in the Ag Waiver. Trust in the Regional Board was lower among growers who disagreed with these statements. This last set of findings is intuitive, given previous research on trust being a function of aligning core beliefs between two groups.
As Lubell states, “People will trust actors who they believe have very similar beliefs and interests to their own, and their trust will decline as the difference in policy-core beliefs increases.” Growers trusted the Regional Board more when they agreed or strongly agreed with the Regional Board’s decisions and opinions on water quality practices, and growers’ trust in the Regional Board declined when they disagreed or strongly disagreed with the BMP provisions implemented in the Ag Waiver. Interestingly, there is a stronger correlation for those growers who “agreed” with statements than for those who “strongly agreed,” perhaps indicating a threshold or range at which growers’ trust is correlated with beliefs.

Previous research shows that repeated, face-to-face communication is a promising tool to bolster trust between water quality agencies and growers, as well as to alter attitudes relating to water quality management practices. Prior studies also demonstrate that other factors, such as historical relationships, core values, and institutional distance, can act as equally strong forces in influencing trust, undermining the significance and value of communication between policy stakeholders. Results from this study corroborate this literature. Growers’ trust in the majority of regional agricultural and water quality groups was closely correlated with the amount of communication as well as the value of information they received from each group. However, growers’ trust in a few agencies, all with regulatory arms, did not correlate with contact frequency or information value. This was true in 2006, but much more so in 2015, and it was particularly true of growers’ trust in the primary regulatory agency, the Regional Board. These findings suggest that growers’ frequency of contact with the Regional Board, which increased between 2006 and 2015, was not related to trust in the regulatory agency, which decreased between 2006 and 2015. These results do not suggest, however, that communication with regulatory agencies does not matter at all. Rather, communication could play an important role in trust-building relationships, as suggested by the literature, but more research is needed into the types of communication utilized by the Regional Board, how communication has changed over time, and how it might influence relationships with the regulated group. Preliminary research from a document review, discussed below, demonstrates that communication patterns are becoming more institutionally distant and deserve more research attention. While contact frequency with the Regional Board was not correlated with trust, opinions of water quality practices were.
As the last set of findings illustrates, in 2015 there was a positive relationship between growers’ trust in the Regional Board and their opinions on water quality management decisions. These results cannot confirm causation, that is, whether trust leads to a convergence of beliefs or a convergence of beliefs leads to trust; however, prior studies suggest the latter. Building trust when two rival political actors do not hold the same views is not a simple task, especially because core beliefs can be culturally embedded or shaped by historical events. However, building trust between adversaries is not impossible and should begin by achieving agreement on, at the very least, empirical issues with sound evidence. Leach and Sabatier offer a few ways to undertake this process: a “professional forum” exposing scientific evidence from competing coalitions mediated by a neutral facilitator; starting negotiations with a period of “joint fact finding” and consensus-building on the basic dimensions of the various problems; and/or pursuing empathy-building exercises such as field trips. Another aim of this study was to examine anecdotes from the Farm Bureau regarding declining trust and collaboration between farmers and the Regional Board over the course of the two Ag Waivers.

While encouraging accounts of a working, collaborative relationship between growers and the Regional Board during the first Agricultural Waiver are difficult to substantiate from the survey responses, results from this longitudinal study as well as further evidence from agricultural testimonies do confirm that what rapport remained after 2004 was markedly soured during the next round of negotiations. There was a significant drop in trust between the two Agricultural Waivers, and growers reported being more frustrated by the policy process during the second Ag Waiver, the majority agreeing that regulations were “unfair” and “too tough” despite their perceived efforts in adopting water quality management practices and their desire to be involved in the policy process. These results are somewhat contrary to literature that assumes “trust ought to be correlated with the length, depth, and recency of past collaboration”; only eight years prior to the follow-up study, farmers and the Regional Board had joined efforts to pen the first ever regulatory program for agricultural water quality in the Central Coast. Why did trust degrade over this time period? And what lessons might be learned for future Agricultural Waiver negotiations? One somewhat fatalistic explanation for the waning relationship between farmers and the Regional Board is that the decline was inevitable. Comfortable with the 2004 provisions that they had collaboratively designed, growers were frustrated by the idea of increasing mandates. Unavoidably, the 2004 Ag Waiver was going to be made tougher: scientists, the State, and the public demanded that the Regional Board act on the growing evidence that water quality was not improving. This first explanation has dismal implications for future Ag Waivers since it assumes that little could have been done to save a relationship that was fleeting and inevitably going to decline.
A second, more plausible theory is that the approach the Regional Board staff took during the drafting of the second Ag Waiver, beyond simply increasing mandates, tainted relations. The first Agricultural Waiver took a softer, collaborative, and educational approach, slowly easing the agricultural industry into water quality regulations. Negotiations for the second Agricultural Waiver, by contrast, came out of the gates strong, proposing a very tough 2010 Draft Order that took a more centralized approach, categorizing farms into set tiers with coupled mandates, bringing individual monitoring into the fold for the first time, and requiring certain blanket provisions for all farms. Several agricultural interests claimed the new regulatory program was “the most rigorous in the state”. Although the new waiver was significantly watered down by the time it passed in 2012 and was ratified by the State Board in 2013, the policy process leading up to the 2010 proposal greatly strained rapport, opening a rift between growers and the Regional Board that would be difficult to repair during that round of negotiations.

Theory and experience suggest that the most successful pollution prevention tools are performance-based

In the U.S. and Canada, point source dischargers must obtain permits to release emissions, whereas non-point source dischargers largely remain uninhibited by federal mandates. In these WQT programs, point sources trade with other point sources to avoid costly discharge reductions at their industrial facilities, and only a handful of non-point sources are involved on a voluntary basis. On the limited occasions that the agricultural industry does engage in trading, farm non-point sources almost always assume the role of “sellers” in the program, rather than “buyers”. Under such circumstances, point source dischargers pay non-point sources to comply with water quality standards, creating a profit-making opportunity for agricultural polluters. This lopsided relationship between point and non-point sources highlights another related problem: the absence of a fully capped trading system. Though trading schemes show promise in transitioning the regulatory framework from individual discharge limits to river basin management based on group controls, for the system to realize its full potential, all dischargers—point and non-point—must participate. A further complication, in both partially- and fully-capped WQT systems, is that of accounting for differences in emission loads between point and non-point sources. WQT programs utilize a trading ratio to calculate how many units of estimated non-point source loadings should be traded with a unit of point source loadings. Because of the uncertainty of non-point source loadings, trading ratios are almost always set at 2:1 or greater to create a margin of safety. In this scenario, point sources must purchase two units of estimated non-point reductions for every unit of excess emissions. Interestingly, a study on trading ratios found that political acceptability, rather than scientific information, determined ratio calculations.
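The trading-ratio arithmetic is simple to state explicitly. The helper below is a generic illustration with hypothetical numbers, not a rule from any specific WQT program:

```python
def credits_required(excess_units: float, trading_ratio: float = 2.0) -> float:
    """Units of estimated non-point source reduction a point source must
    purchase to offset its excess emissions under a given trading ratio."""
    return excess_units * trading_ratio

# A point source 10 units over its limit, under the common 2:1 margin-of-safety ratio:
print(credits_required(10))  # -> 20.0
```

Raising the ratio above 2:1 widens the margin of safety but also raises the effective price of offsets, which is one reason ratio setting becomes a political rather than purely scientific question.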
Despite the challenges, several notable successes have demonstrated that enforced group caps, emission allocations, and water quality standards can be met.

For example, in 1995, farmers from the San Joaquin Valley, California, implemented a tradable discharge permit system to enforce a regional cap on selenium discharges. The selenium program set a schedule of monthly and annual load limits, and imposed a penalty on violations of those limits. In Canada’s Ontario basin, a phosphorus trading program was established in which point sources purchase agricultural offsets rather than update their facilities. A third party, South Nation Conservation, acts as a facilitator, collecting funds from point sources and financing phosphorus-reducing agricultural projects. It is estimated that the program has prevented 11,843 kg of phosphorus from reaching waterways. Numerous other pilot trading projects show promise, but need a serious overhaul if they are to realize their full potential. One prominent example worth mentioning is the U.S.’s Chesapeake Bay Nutrient Trading program. In response to President Obama’s executive order to clean up the Chesapeake Bay, the largest estuary in North America, the six states contributing pollution to the Bay are in the national spotlight as they figure out how to achieve pollutant allocations. Currently, their plans to meet water quality requirements are falling short. Economic scholars contend that a nutrient trading plan could offer the most cost-effective means of complying with the looming TMDL. But uncertainty about agricultural sources’ willingness to participate, the most appropriate trading ratio, and high transaction costs remain issues. The most traditional form of command-and-control regulation is performance standards. Though often presented as an alternative to market-based approaches, performance standards can complement a tax or emissions-trading system, and can also be used alongside positive incentive schemes.
In an incentive approach, if pollution exceeds a standard then a financial penalty or charge might be triggered, whereas if a farmer is well within compliance, the farmer might receive a positive payoff for their efforts. Standards can also be used in trading through pollution allowances with enforceable requirements.

And in a mandate scenario, standards are compulsory, and may or may not be accompanied by other motivating devices. Performance standards have successfully reduced point source water pollution (the E.U.’s IPPC Directive and the U.S.’s NPDES program) as well as pollution of other media. Unfortunately, the same suite of challenges—the use of proxies, the costs of monitoring and modeling, and the uncertainty of environmental outcomes—faces performance standards in the context of non-point source abatement. These perceived obstacles have largely precluded the use of performance tools for agricultural NPS control. However, a growing body of literature expounds the benefits of using performance approaches for this industrial sector. Performance measures are used to encourage Best Management Practices. Using models to predict the level of BMP performance can provide powerful decision-making data to farmers, helping them make appropriate management decisions. Performance modeling is most effective when conducted at the field scale. For example, the Performance-Based Environmental Policies for Agriculture initiative found that the implementation of BMPs, such as changing row directions or installing buffer strips, reduces the risk of pollution to varying degrees depending on several on-farm factors. Allowing farmers to exercise site-specific knowledge in an individualized context highlights an important, laudable feature of performance-based approaches: flexibility. Some suggest that practice-based tools, ones that mandate or incentivize the installation of certain BMPs, are not as cost-effective as their performance-based counterparts. This is largely because performance-based instruments provide the flexibility to choose the practices that will achieve water quality improvements at the lowest cost. In the case of agricultural water pollution, farmers are the predominant actors targeted for compliance.
While this is logical, since farmers’ management practices influence the amount of pollution that reaches nearby water bodies, it is worth noting that other actors involved in the pollution process could be targeted for regulation.

For example, the control of pesticides has been managed by regulating the chemical manufacturer, imposing mandates or taxes on chemicals sold on the market. This type of tool could be highly effective in reducing the amount of pesticides or fertilizers produced, sold, bought, applied, and discharged into water bodies, creating a ripple effect through the whole production stream. Targeting actors further “upstream” is illustrative of what Driesen and Sinden call the “dirty input limit” or “DIL.” Manufacturing companies are only one of several points along the production stream where the DIL approach could be effective; alternatively, pollutants could be controlled at the point of application. As suggested by the authors, the DIL approach is useful beyond the tool choice framework in that it provokes a new way of thinking about environmental regulation. Among the least invasive, but most important, instruments for successful NPS management, capacity tools provide information and/or other resources to help farmers make decisions that achieve societal and environmental goals. Capacity tools are typically associated with voluntary initiatives rather than mandates. Because it can be difficult for farmers to detect the water quality impacts of their practices visually, learning and capacity tools become an invaluable means of conveying information to farmers. Farmers’ perceptions of the water quality problem and their role in contributing to pollution are among the most influential factors in changing farming management practices. In California, the Resource Conservation Districts, University of California Extension, and the University of California’s Division of Agriculture and Natural Resources are examples of local government agencies providing capacity-building services that include knowledge, skills, training, and information in order to change on-farm behavior.
In summary, each policy tool possesses strengths and weaknesses, which need to be taken into consideration when developing more effective ways to control agricultural pollution. An integrated approach, one that utilizes a diversity of policy instruments to address water quality issues in agriculture, is required. River basin management plans, or the “watershed approach” as it is often referred to in the U.S., can more appropriately tailor the choice of policy tools to local conditions. Authority has been granted to achieve water quality objectives at the regional jurisdictional level. The success of these programs will largely depend on the wisdom and will of those regional governmental leaders, as discussed below. What are the major similarities and distinctions between the different approaches to agricultural non-point source pollution regulation available in the U.S. and Europe? And which are most effective? This chapter examined the defining characteristics and application of six policy tools, each of which has been proposed for agricultural pollution abatement. As noted in the introduction, the task of comparing tools is complicated by the multiple facets and dimensions embedded in each tool. While research suggests that a mix of policy tools will outperform any one instrument, clear strengths, weaknesses, and unique traits distinguish tools from one another and should be taken into consideration when regulators choose means to meet environmental goals. Table 2-1 lists several categories by which to compare a select group of policy tools. As the table illustrates, a number of key relationships are particularly important. Emphasis is placed on the difference between tools tied to emissions and those not tied to emissions. The clear benefit of tools tied to emissions is their ability to track and measure environmental improvements. However, therein lies these tools’ biggest weakness: reliance on proxies to predict the extent of environmental improvements.

The information burdens needed to construct models that adequately predict the impact of a farm’s discharges are so great that many practitioners and scholars have shrugged off the task as impossible. Encouragingly, a growing body of literature and scholarly discussion shows promise for improved computer simulation efforts. Until more robust models are designed with improved information, policymakers will continue to rely on the second category of tools—those not tied to emissions. Tools untethered to specific pollution targets work by encouraging water quality improvements through incentives, contracts, and/or information. These tools tend to be more politically favorable, but less effective by themselves, save one—the dirty input limit. While capacity tools can provide important information to farmers and best management practices may improve water quality, the DIL can prevent pollutants from ever reaching rivers and lakes, or even farms. With the U.S. pesticide and storm water regulatory programs as models, regulating inputs has the potential to achieve more than regulating emissions. But the DIL is not without obstacles, including a heavy reliance on scarce information to set the appropriate limitations and the political will to restrict chemical or fertilizer production and/or use. Non-point source pollution, or pollution that comes from many diffuse sources, continues to contaminate California’s waters. Agricultural non-point source pollution is the primary source of pollution in the state: agriculture has impaired approximately 9,493 miles of streams and rivers and 513,130 acres of lakes on the 303(d) list of waterbodies statewide. The 303(d) list is a section of the Clean Water Act mandating states and regions to review and report waterbodies and pollutants that exceed protective water quality standards.
Agricultural pollution in California’s Central Coast has detrimentally affected aquatic life, including endemic fish populations and sea otters, the health of streams, and human sources of drinking water. Despite the growing evidence of agriculture’s considerable contribution to water pollution, the agricultural industry has, in effect, been exempt from paying for its pollution, and more importantly, has failed to meet water quality standards. How best to manage and regulate non-point source agricultural water pollution remains a primary concern for policymakers and agricultural operators alike. This case study focuses on the Conditional Agricultural Waiver in California’s Central Coast, the primary water pollution control policy in one of the highest-valued agricultural areas in the U.S. The Central Coast Regional Water Quality Control Board is under increasing pressure to improve water quality within its jurisdiction, especially with the added onus from a 2015 Superior Court ruling that directed the Regional Board to implement more stringent control measures for agricultural water pollution. Pressure on the Regional Board is exacerbated by regulatory budget constraints, interest groups, and unanticipated events. Given these pressures, choosing appropriate criteria by which to evaluate the success of California’s primary agricultural water quality policies is complicated, but of critical importance. This policy analysis explores the complex process of negotiations, agendas and conditions at the heart of policy-making, highlighting areas where the 2004 and 2012 Ag Waivers have succeeded in achieving their goals, as well as where they have fallen short. The analysis is divided into two parts.

Marshall escorting the new legal owners attempted to evict the tenants of the Mussel Slough ranch

Moreover, what is interesting is how literary form follows, informs, or accompanies these forms of Social Darwinism. In the U.S., literary naturalism accompanies biological racial theory, and food secures a sense of nature that spans the range from agricultural production to physical consumption. In China, it is popular songs and literary representations of discussion, of liberal exchange of ideas, that attempt to call the new national community into being. Here artists demystify the commodification of food in order to map unequal trade relations and advocate for independence based on food sovereignty. Explaining why he wrote The Octopus, Frank Norris said that he believed the settling of the American West had been of such world-historical import that it deserved to be told in a great work of literature. His view of the West was heavily influenced by Frederick Jackson Turner’s famous thesis, in “The Significance of the Frontier in American History,” that the frontier had been the decisive factor in shaping a distinctively American culture, and moreover that this period was now at an end. When the 1890 census found that nearly all “frontier” land had been occupied, this meant that the first chapter of American history was over, while the next chapter remained unclear. Thus Norris wanted to celebrate the frontier, but also to memorialize it, to monumentalize it in a loftier literary form than the popular western genre fiction. Having studied the form of the medieval romance at the University of California, he dreamed of seeing a Song of Roland for modern America, a song of the West. He planned a trilogy of novels, or, following his interest in medieval literature, what we might call a song cycle. The first novel, The Octopus, was based on a historical event, known as the Mussel Slough Incident, a deadly 1880 land dispute between the Southern Pacific Railroad and wheat-growing ranchers in Tulare County, in California’s Central Valley.

Ostensibly weighing the conflicting interests of the ranchers and the railroad, The Octopus is ultimately more interested in placing the Mussel Slough incident within the larger geographical scale of the emergent global wheat trade and the larger temporal scale of the closing of the frontier. Following The Octopus’s description of wheat production on newly industrialized California farms, the second book, The Pit, traces the wheat’s distribution through commodities markets in Chicago, and the never-completed third book was to cover consumption “in a famine-stricken Europe or Asia,” as he wrote in a synopsis. The song of the West turns out to be the story of the expanding global market for American agricultural commodities. Norris’s epic scope did not prevent him from conducting detailed historical research into the Mussel Slough Incident itself. The dispute centered on the price at which the Southern Pacific would sell the land abutting the railroad, which had been granted to them by the federal government. The railroad circulated advertisements soliciting the public to lease the land from them temporarily, apparently with the option to purchase it for between $2.50 and $5 per acre. The ranchers who leased these large plots of land pooled their capital to build an irrigation system that transformed the arid region into productive farmland for wheat and hops. Once the crops were a success, however, the railroad declared that the land would be sold at market value, between $17 and $40 per acre, and that the tenants would have to either pay or move out. In response, the ranchers organized a Settlers’ Land League and armed themselves to defend their claims. On May 11, 1880, a U.S. Marshal escorting the new legal owners attempted to evict the tenants. In the shoot-out that followed, eight men were killed, most of them ranchers shot by one of the new owners.
While many readers at the time of the book’s publication praised its attack on the railroad monopoly and support for the common farmer, later generations have emphasized that Norris portrays the ranchers as capitalists who care more about windfall profits than about hard work or the land, the traditional virtues of Jeffersonian agrarianism.

Indeed, the author emphasizes the ploy of the Settlers’ Land League to influence the election of a state commission that would favor their side in the legal case—when this corruption is exposed near the end of the novel, the ranchers lose their popular support. Norris maintains a distance from the ranchers by telling much of the action from the perspective of an outsider, Presley, a San Francisco poet visiting his friend, Buck Annixter, one of the ranchers who will eventually be killed. Presley is hoping, like Jack London and Norris himself, to write the first great literary work expressing the essence of the American West. There is some disagreement among scholars over how the land is portrayed in the novel, and here it is helpful to note that Presley tries out multiple writing styles as his view of the area changes. In the first chapter Presley witnesses the beauty of the natural environment, and goes on to record it in a pastoral celebration of beauty and harmony. In a strange ending to the chapter, Presley repeats word-for-word in his writing long passages that had appeared as narrative description ten pages earlier, and in this way Norris self-referentially emphasizes both the centrality of Presley’s perspective and the fact that the novel itself is a work of descriptive writing. In the next chapter, however, the pastoral landscape is replaced by images of the massive new farm equipment used in planting the wheat, which Norris depicts in graphic terms as the sexual union between the machine and the earth. At this point, Presley is forced to confront the land dispute and the competing economic interests that are driving the industrialization of agricultural commodities, and attempts to incorporate these into an enlarged view of the West. The industrialist Cedarquist assures Presley and the ranchers that the continued expansion of American agriculture depends on reaching the inexhaustible demand of the China market.
A famine in India provides the opportunity for him to arrange a humanitarian shipment of grain, which serves as a test run ahead of increasing transpacific exports.

After the victory of the railroad, Presley tries his hand at politically committed poetry, publishing a successful georgic poem titled “The Toilers.” Local attempts at political mobilization fall apart, however, after the Settlers’ League’s conspiratorial plot to influence the commission is exposed. Resigned to the power of industrial progress, Presley decides to accompany Cedarquist’s famine relief voyage. The novel ends with him looking out to sea, as he decides that his friends’ deaths do not mean much in the grand scheme of things. All is for the best in the best of all possible worlds, for toilers may come and go, “But the WHEAT remained.” Because it is ultimately the story of large-scale natural and historical forces that dwarf the characters’ moral choices, The Octopus is classed as a work of literary naturalism. Florian Freitag points out that while all farm novels must feature natural forces to some extent, it is the total failure of the characters’ attempts to influence the social world around them that gives The Octopus a specifically naturalist form compared to most American farm novels. At the same time, I believe it is also worth keeping in mind Norris’s own preferred formal terms from ancient and medieval poetry rather than modern prose, the epic and the romance. It is an epic because it is intended to tell the heroic story of a whole people. And yet it is a “naturalist epic” in that, however improbably, humans ultimately give way to the wheat as the true hero of the West, uncontainable as both a commodity and a natural force. All previous work on The Octopus addresses political economy in some way, and just as Norris intended to write one novel each on the production, circulation, and consumption of wheat, commentators have tended to focus on one of these moments in the economic sphere as it was organized at the turn of the twentieth century.
Environmental critics from Leo Marx to William Conlogue have focused on the rural scene of production and shifting generic conventions for representing it.

Critics primarily interested in naturalist form, such as Walter Benn Michaels and Mark Seltzer, have focused on circulation during the late-nineteenth-century financialization of the economy. Finally, critics focused on race and imperialism, such as John Eperjesi and Colleen Lye, have focused on the export to China and the Chinese cooks on the ranch. What reappears across much of this criticism focused on the new economy, however, is a tendency to downplay the land dispute at the center of the plot, since the ranchers are themselves capitalists engaged in industrial agriculture. The land dispute plot, however, is crucial to Norris’s goal of writing the true history of the West, especially the transition from the frontier period into a new age. By organizing the first book of the “epic of the wheat” trilogy around a real event, Norris’s overall strategy is to record historical reality and celebrate it within a larger, reassuring narrative of enlarged production and circulation. The reason that there is so much focus on writing and recording in the novel, I argue, is that Norris sees writing itself as crucial to the history of the West, and hopes, through his own writing, to participate in it. What we see throughout the book is a consistent reversal of commonsense causality: production depends on consumption, the stability of the continent depends on overseas empire, and physical production depends on writing and information management. This is how we should understand the relationship between writing and the land in The Octopus: writing is practical, supporting the development of industrial farming to the point of export to China in a new food empire. As portrayed in the novel, the Mussel Slough incident is a symptom of the lack of access to sufficient demand for industrializing U.S. agriculture. For as the ranches become connected to a global food market, they are exposed both to greater opportunities and to increasingly volatile risks.
Before the production process is even introduced in the novel, Norris highlights the communications technologies that make “the office […] the nerve-centre of the entire ten thousand acres of Los Muertos”. Magnus and his son Harran would sit up half the night watching “the most significant object in the office,” the stock ticker. The occasions for these transcendent feelings of connection are foreign crises that affect the price of their own wheat. Yet because circulation is limited by the railroad—its physical and geographical capacity as well as its monopolistic organization—there is an equally limited amount of profit that the railroad operators and the ranchers must fight over. This is the central contradiction of the novel, as Norris relates the railroad both to a system of veins that facilitates circulation and to an octopus that strangles the full vital force of production. While the ranchers are awaiting the results of their legal case, the character of Cedarquist gives a long speech proposing the China market as the only long-term solution for American production. A former industrialist transitioning into shipbuilding, he addresses the opportunities made possible by the Spanish-American War, speaking as an oracle from the past to the “youngsters” reading the novel at the turn of the century: “Our century is about done. The great word of this nineteenth century has been Production. The great word of the twentieth century will be—listen to me, you youngsters—Markets”. Cedarquist goes on to explain the fundamental problem of the business cycle: production must expand to stay competitive, but the saturation of the market leads to bankruptcy for most producers and consolidation of industry into fewer large corporations. Faced with certain degeneracy and death, a staple of the naturalist decline narrative, the booster provides a solution that will save the country: “We must march with the course of empire, not against it.
I mean, we must look to China”. Empire—like the wheat or the railroad—is propelled by quasi-natural forces that individuals can neither help nor hinder. This speech takes place at the midpoint of the novel, and the development of the plot ultimately vindicates Cedarquist’s logic, ending with the wheat harvest shipping out for famine relief in India, understood as the transpacific test run for the ships that will export future harvests to China.

The findings in this research are also intended to serve as a quantitative tool to support decision makers

Following a global trend, California has undergone a warming trend in recent decades, with more rain than snow in total precipitation volume. Increasing temperatures are melting snowpack earlier in the year and pushing the snowline to higher elevations, resulting in less snowpack storage. The current trend is projected to become more frequent and persistent for the region. As a result, surface water supply is projected to erode with time, while rainfall will experience increased variability, possibly leading to more frequent and extensive flooding. Rising sea levels will also increase the susceptibility to coastal and estuarine flooding and salt water intrusion into coastal groundwater aquifers. In California, sea level is estimated to rise between 150 and 610 mm by 2050. As the reliability of surface water is reduced due to the effects of climate change, if water reclamation is not implemented with higher market penetration, the demand on groundwater pumping is expected to increase, resulting in higher energy usage for crop irrigation. Our calculations show that for every percent increase in groundwater pumping over 2015 values, the state would consume an additional 323 GWh y^-1 of energy, generating a net increase of 8 x 10^4 MTCO2E y^-1. This additional energy usage will cost approximately 43.7 million USD for every percent increase in groundwater pumping applied to crop irrigation, calculated in 2015 dollars. Further research is warranted to determine the effect of climate change on the carbon footprint associated with the energy requirements for irrigation water, particularly for crops grown exclusively for export, and how this carbon emission compares with other societal compartments of the energy portfolio. A sensitivity analysis was performed to show the effect of variable k on the overall carbon footprint associated with the energy savings of applying reclaimed water in lieu of traditional groundwater pumping.
For this analysis, k values ranging between 0.3 and 0.7 kgCO2eq kWh^-1 were used to account for the variation in k within the spatial domain analysed in our study.
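The per-percent figures quoted above can be reproduced with a back-of-envelope calculation. In the sketch below, the grid emission factor and 2015 energy price are assumed values chosen to land near the quoted results; the original model's actual inputs are not given here.

```python
# Back-of-envelope reproduction of the per-percent impact figures.
# ASSUMED inputs (not from the original model): emission factor k and price.
GWH_PER_PERCENT = 323      # GWh/y per 1% increase in groundwater pumping (from text)
K_EMISSION = 0.25          # kgCO2eq per kWh -- assumed emission factor
PRICE_USD_PER_KWH = 0.135  # USD per kWh -- assumed 2015 energy price

def impact_per_percent(gwh=GWH_PER_PERCENT, k=K_EMISSION, price=PRICE_USD_PER_KWH):
    kwh = gwh * 1e6                  # GWh -> kWh
    mtco2e = kwh * k / 1000.0        # kg CO2eq -> metric tonnes (MTCO2E)
    cost_musd = kwh * price / 1e6    # USD -> million USD
    return mtco2e, cost_musd

mtco2e, cost_musd = impact_per_percent()
# With these assumptions: ~8.1e4 MTCO2E/y and ~43.6 million USD per percent,
# close to the 8 x 10^4 MTCO2E and 43.7 million USD quoted in the text.
```

The sensitivity analysis in the text varies k across a wider band (0.3 to 0.7 kgCO2eq kWh^-1), so the emissions figure scales proportionally with the factor chosen.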

Furthermore, this sensitivity analysis addresses the global drive to mandate increasing shares of renewables in power generation portfolios. For example, California Senate Bill No. 2, passed in 2011, requires electric service providers to increase procurement from eligible renewable energy resources from 20% to 33% by 2020. In 1994, in its General Assembly meeting to combat desertification in countries experiencing serious droughts, the United Nations defined arid and semi-arid regions as areas having a ratio of annual precipitation to potential evapotranspiration within the range of 0.05 to 0.65. According to this definition, regions in California and other Mediterranean countries such as Chile, Spain, France, Italy, South Africa and portions of Australia are classified as arid and semi-arid regions. Other regions of the world, such as Central Asia, South Asia, East and Southern Africa, Central Africa and West Africa, also meet this definition. The information presented in our research is intended to serve as a baseline for reference in areas sharing similar climate conditions as defined by the UNCCD. The study found that the current use of reclaimed water for irrigation in California's agricultural industry is very low, averaging 1% for the period 1998–2010. For every percent increase in reclaimed water use in agriculture, the resulting energy saving is 187 GWh yr^-1, which at current energy costs equates to more than 25 million USD. Aside from the energy saving and economic benefit, the application of reclaimed water for crop irrigation also produces a direct safeguard of 4.2 x 10^8 m^3 in groundwater supply and a reduction in carbon footprint of 4.68 x 10^7 MTCO2E y^-1. For increases in reclaimed water use beyond the current 1%, the energy savings, carbon footprint reduction, and economic benefits were calculated for both the current power generation portfolio and for the projected increase of renewable energy.
Even in the scenario of a substantial reduction of CO2-equivalent emissions by meeting and exceeding targets for renewable energy, the increase in reclaimed water use would still provide a net carbon footprint reduction. Figure 4-7 shows the results of our model calculations. This research is intended to serve as a baseline reference and as a planning tool for water resources planners. Specific location, availability of reclaimed water supply, conveyance infrastructure and methods of treatment will influence the calculated results and associated costs presented.
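The UNCCD aridity criterion quoted earlier reduces to a simple ratio test. A minimal sketch (the function name and example values are ours, not from the UNCCD text):

```python
# UNCCD-style aridity test: a region is arid or semi-arid when the ratio of
# annual precipitation to potential evapotranspiration (the aridity index)
# falls between 0.05 and 0.65.
def is_arid_or_semiarid(precip_mm, pet_mm):
    aridity_index = precip_mm / pet_mm
    return 0.05 <= aridity_index <= 0.65

# e.g. 300 mm of rain against 1200 mm PET gives an index of 0.25
print(is_arid_or_semiarid(300, 1200))   # True
print(is_arid_or_semiarid(900, 1000))   # False: an index of 0.9 is humid
```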

Nonetheless, the results of this study further our current understanding of the role of reclaimed water in curbing groundwater withdrawal in arid and semi-arid regions like Southern California, by providing the context of its existing usage, estimated energy consumption, carbon footprint reduction, and potential monetary savings that can be realized. The trends observed in this study may be applicable to other regions of the world where water scarcity, energy costs, and climatic conditions require the use of reclaimed water as a sustainable water source. The research hypothesis tested true: the application of reclaimed water not only preserves groundwater resources but also decreases the energy footprint and carbon emissions associated with crop irrigation. The results show that there are savings in both groundwater supply and energy resources when applying reclaimed water for crop irrigation. For California, the average energy requirement for groundwater pumping was 0.770 kWh m^-3, while reclaimed water production with gravity filtration was 0.324 kWh m^-3. Hence, the energy advantage of applying reclaimed urban wastewater for crop irrigation over groundwater pumping within this spatial domain would be 0.446 kWh m^-3. The calculated energy savings for applying reclaimed water in lieu of groundwater amounted to a 57.9% reduction in energy usage. Annually, this amounts to approximately 187 GWh y^-1 of energy savings for California, resulting in a reduction of 4.68 x 10^7 MTCO2E of carbon emissions. If reclaimed water use were increased from 1% to 5%, 10%, 15%, or 20%, the respective total energy savings, monetary savings and carbon footprint reduction would increase linearly. Based on the calculations, reclaimed water required the least amount of energy, whereas ocean desalination had an energy intensity approximately 11 times higher.
When compared to traditional groundwater pumping, the energy intensity associated with water reclamation was discounted by 58%, highlighting the importance of reclaimed water as a potentially competitive source. The results of this study further our current understanding of the role of reclaimed water in curbing groundwater withdrawal in arid and semi-arid regions.
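As a consistency check, the headline figures (0.770 and 0.324 kWh m^-3, 187 GWh y^-1) can be tied together arithmetically; the linkage below is our sketch, not the study's model.

```python
# Tying the study's headline numbers together (all three inputs are quoted in
# the text; the arithmetic linking them is a sketch, not the original model).
E_GROUNDWATER = 0.770   # kWh per m^3, average groundwater pumping in California
E_RECLAIMED   = 0.324   # kWh per m^3, reclaimed water with gravity filtration
SAVINGS_GWH   = 187     # GWh/y saved at the current ~1% reclaimed-water use

advantage_kwh_m3 = E_GROUNDWATER - E_RECLAIMED    # energy saved per m^3
reduction = advantage_kwh_m3 / E_GROUNDWATER      # fractional energy reduction
implied_volume_m3 = SAVINGS_GWH * 1e6 / advantage_kwh_m3

# advantage ~ 0.446 kWh/m^3 and reduction ~ 57.9%, matching the text; the
# implied irrigation volume ~ 4.2 x 10^8 m^3 matches the quoted groundwater
# safeguard.
```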

The trends observed in this study may be applicable to other regions of the world where water scarcity, energy costs, and climatic conditions require the use of reclaimed water as a sustainable water source. Quantitative research in the field of exported water is still very much underdeveloped despite the many virtual water studies conducted over the years. The data presented in this research can serve as an estimate, but further research should address the uncertainty. Enhanced procedures to account for exported water, along with reference datasets, should be developed and disseminated. These results highlight the need to consider water use efficiency in agricultural irrigation. Our findings suggest that California’s water resources are being exported outside its borders in magnitudes greater than the water consumed by the municipalities within the state. Thus, the state might be vulnerable to water-supply constraints if the trend continues indefinitely into the future. With better water management practices, sound public policies, and increased investment in water infrastructure and efficiency, farmers and other water users can increase the yield of each water unit consumed. The current scenario appears to promote a positive feedback mechanism of resource draining, resulting in environmental consequences for California’s water resources. California agriculture, under growing water pressure, is beginning to explore innovative uses of reclaimed water. Some growers already use reclaimed wastewater in different ways, depending on the level of treatment the water receives. Most common is the use of secondary-treated wastewater on fodder and fiber crops. Increasingly, however, growers are irrigating fruits and vegetables with tertiary-treated wastewater, producing high-quality crops and high yields.
Wong et al. reported that the cities of Visalia and Santa Rosa have developed projects to irrigate more than 6,000 acres of farmland, including a walnut orchard, with secondary-treated wastewater. Though the projects were primarily designed to reduce wastewater discharge, both cities have gained from the water-supply benefits of applying reclaimed water. The mix of California crops and planting patterns has been changing. These changes are the result of decisions made by large numbers of individuals, rather than intentional actions by state policymakers. California farmers are planting more and more high-valued fruit and vegetable crops, which have lower water requirements than the field and grain crops they are replacing. They can also be irrigated with more accurate and efficient precision irrigation technologies. As a result, California is slowly increasing the water productivity of its agricultural sector, increasing the revenue or yield of crops per unit of water consumed. Over time, these changes have the potential to dramatically change the face of California agriculture, making it even more productive and efficient than it is today, while saving vast quantities of water.

In the past two decades, California farmers have made considerable progress converting appropriate cropland and crops to water-efficient drip irrigation. Much of this effort has focused on orchard, vineyard, and berry crops. Recent innovative efforts now suggest that row crops not previously irrigated with drip systems can be successfully and economically converted. This case provides the example of two farmers converting bell pepper row crops to drip irrigation with great success. Subsurface drip irrigation substantially increased pepper yields, decreased water consumption, and greatly improved profits. Due to limited availability of public data, our research could only examine 50 of the top exporting commodities in California. According to the California Department of Food and Agriculture, there are 305 known crops produced in the region. Additional research should be extended to assess the exported water of the remaining 255 crops and to evaluate the overall effects of evapotranspiration for all crops commercially produced in California. Since many regions of California are classified as arid and semi-arid areas sharing climate conditions similar to those of other Mediterranean countries, such as Chile, Spain, France, Italy, South Africa and portions of Australia, according to the UNCCD, the information presented in our research model can be used as a baseline for calculating the exported water of other crops grown in similar climate conditions. A previous study by Nguyen et al. (2015) reported that groundwater pumping consumes approximately 1.5 x 10^4 GWh yr^-1, making the energy requirement for groundwater irrigation the largest contributor in the food production process. As shown by the results of our calculations, the majority of exported water was in the form of evapotranspiration induced by crop irrigation.
Thus, further research is warranted to examine the energy being exported as a result of induced evapotranspiration, beyond the energy required for irrigation. This research will shed light on the overall energy consumption in the entire food production process, including energy expended within a spatial domain and the exported quantity induced via evapotranspiration. One area of research which has not been conducted is the effect of the positive feedback mechanism on the overall exported energy of crops as a result of induced evapotranspiration. Future research should be extended to cover all remaining crops commercially produced in California. The outcomes of this model can be extended to compare the overall exported energy from irrigation that is lost through induced evapotranspiration with the energy consumption of other sectors of the California economy. The results of this future study will help close the loop on the life-cycle energy consumption analysis for the California agriculture industry. Maximizing agricultural crop yield is an important goal for several reasons. First, a growing worldwide population will generate increased demand for agricultural resources. Since expanding the land area devoted to agriculture is often unfeasible, or would involve the destruction of sensitive landscapes such as forests and wetlands, the only way to meet this demand will be to increase the crop yield generated from existing farmland. Second, there are substantial economic incentives for profit-seeking farmers to maximize the yield of their crops, especially given the low profit margins typical of commercial agriculture.

We used Geographic Information System software to geocode the new addresses and obtain coordinates

There are no biomarkers available to assess human exposure to fumigants in epidemiologic studies. Residential proximity to fumigant use is currently the best method to characterize potential exposure to fumigants. California has maintained a Pesticide Use Reporting (PUR) system, which has required commercial growers to report all agricultural pesticide use since 1990. A study using PUR data showed that methyl bromide use within an ~8 km radius around monitoring sites explained 95% of the variance in methyl bromide air concentrations, indicating a direct relationship between nearby agricultural use and potential community exposure. In the present study, we investigate associations of residential proximity to agricultural fumigant usage during pregnancy and childhood with respiratory symptoms and pulmonary function in 7-year-old children participating in the Center for the Health Assessment of Mothers and Children of Salinas (CHAMACOS), a longitudinal birth cohort study of primarily low-income Latino farm worker families living in the agricultural community of the Salinas Valley, California. We enrolled 601 pregnant women in the CHAMACOS study between October 1999 and October 2000. Women were eligible for the study if they were ≥18 years of age, <20 weeks gestation, planning to deliver at the county hospital, English or Spanish speaking, and eligible for low-income health insurance. We followed the women through delivery of 537 live-born children. Research protocols were approved by The University of California, Berkeley, Committee for the Protection of Human Subjects. We obtained written informed consent from the mothers and children’s oral assent at age 7. Information on respiratory symptoms and use of asthma medication was available for 347 children at age 7.

Spirometry was performed by 279 of these 7-year-olds. We excluded participants from the prenatal analyses for whom we had residential history information for less than 80% of the pregnancy. We excluded participants from the postnatal analyses for whom we had residential history information for less than 80% of the child’s lifetime from birth to the date of the 7-year assessment. Prenatal estimates of proximity to fumigant applications and relevant covariate data were available for 257 children, and postnatal estimates were available for 276 children, for whom we obtained details of prescribed asthma medications and respiratory symptoms. Prenatal estimates of proximity to fumigant applications and relevant covariate data were available for 229, 208, and 208 children for whom we had FEV1, FVC and FEF25–75 measurements, respectively. Postnatal estimates of proximity to fumigant applications and relevant covariate data were available for 212, 193, and 193 children with FEV1, FVC and FEF25–75 measurements, respectively. A total of 294 participants were included in either the prenatal or postnatal analyses. Participants included in this analysis did not differ significantly from the original full cohort on most attributes, including maternal asthma, maternal education, marital status, poverty category, and child’s birth weight. However, mothers of children included in the present study were slightly older and more likely to be Latino than those from the initial cohort. Women were interviewed twice during pregnancy, following delivery, and when their children were 0.5, 1, 2, 3.5, 5, and 7 years old. Information from prenatal and delivery medical records was abstracted by a registered nurse. Home visits were conducted by trained personnel during pregnancy and when the children were 0.5, 1, 2, 3.5 and 5 years old.
At the 7-year-old visit, mothers were interviewed about their children’s respiratory symptoms, using questions adapted from the International Study of Asthma and Allergies in Childhood questionnaire. Additionally, mothers were asked whether the child had been prescribed any medication for asthma or wheezing/whistling, or tightness in the chest. We defined respiratory symptoms as a binary outcome based on a positive response at the 7-year-old visit to any of the following during the previous 12 months: wheezing or whistling in the chest; wheezing, whistling, or shortness of breath so severe that the child could not finish saying a sentence; trouble going to sleep or being awakened from sleep because of wheezing, whistling, shortness of breath, or coughing when the child did not have a cold; or having to stop running or playing active games because of wheezing, whistling, shortness of breath, or coughing when the child did not have a cold. In addition, a child was included as having respiratory symptoms if the mother reported use of asthma medications, even in the absence of the above symptoms.

Latitude and longitude coordinates of participants' homes were collected during home visits during pregnancy and when the children were 0.5, 1, 2, 3.5, and 5 years old using a handheld Global Positioning System unit. At the 7-year visit, mothers were asked if the family had moved since the 5-year visit, and if so, the new address was recorded. Residential mobility was common in the study population. We estimated the use of agricultural fumigants near each child's residence using a GIS based on the location of each child's residence and the Pesticide Use Report (PUR) data. Mandatory reporting of all agricultural pesticide applications is required in California, including the active ingredient, quantity applied, acres treated, crop treated, and date and location within 1-square-mile sections defined by the Public Land Survey System (PLSS). Before analysis, the PUR data were edited to correct for likely outliers with unusually high application rates using previously described methods. We computed nearby fumigant use (the amount applied within each buffer distance) for combinations of distance from the residence and time period. The range of distances best captured the spatial scale that most strongly correlated with concentrations of methyl bromide and 1,3-DCP in air. We weighted fumigant use near homes based on the proportion of each square-mile PLSS section that was within each buffer surrounding a residence. To account for the potential downwind transport of fumigants from the application site, we obtained data on wind direction from the closest meteorological station. We calculated wind frequency using the proportion of time that the wind blew from each of eight directions during the week after the fumigant application to capture the peak time of fumigant emissions from treated fields.
We determined the direction of each PLSS section centroid relative to residences and weighted fumigant use in a section according to the percentage of time that the wind blew from that direction for the week after application.
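The weighting scheme described above can be sketched in a few lines of Python. This is a simplified illustration with made-up field names and values; the actual GIS processing against PUR records is more involved:

```python
def wind_weighted_use(sections, wind_freq):
    """Estimate wind-weighted fumigant use near one residence.

    sections: list of dicts, one per square-mile PLSS section, with
      - 'kg_applied': fumigant applied in the section (from PUR data)
      - 'frac_in_buffer': proportion of the section inside the buffer
      - 'direction': compass octant of the section centroid relative
        to the residence, e.g. 'NW'
    wind_freq: dict mapping each of the eight directions to the
      proportion of time the wind blew from that direction during
      the week after application.
    """
    total = 0.0
    for s in sections:
        total += (s['kg_applied']
                  * s['frac_in_buffer']
                  * wind_freq[s['direction']])
    return total

# Toy example: two nearby sections, one half inside the buffer
sections = [
    {'kg_applied': 1000.0, 'frac_in_buffer': 0.50, 'direction': 'NW'},
    {'kg_applied': 400.0,  'frac_in_buffer': 1.00, 'direction': 'S'},
]
wind = {'N': 0.1, 'NE': 0.05, 'E': 0.05, 'SE': 0.1,
        'S': 0.2, 'SW': 0.2, 'W': 0.1, 'NW': 0.2}
print(wind_weighted_use(sections, wind))  # 1000*0.5*0.2 + 400*1.0*0.2 = 180.0
```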

We summed fumigant use over pregnancy, from birth to the 7-year visit, and for the year prior to the 7-year visit, yielding estimates of the wind-weighted amount of each fumigant applied within each buffer distance and time period around the corresponding residences for each child. We log10 transformed continuous fumigant use variables to reduce heteroscedasticity and the influence of outliers, and to improve the fit of the models. We used logistic regression models to estimate odds ratios of respiratory symptoms and/or asthma medication use with residential proximity to fumigant use. Our primary outcome was respiratory symptoms, defined as positive if during the previous 12 months the mother reported for her child any respiratory symptoms or the use of asthma medications, even in the absence of such symptoms. We also examined asthma medication use alone. The continuous lung function measurements were approximately normally distributed; therefore we used linear regression models to estimate the associations with residential proximity to fumigant use. We estimated associations using the highest spirometric measures for children who had one, two, or three maneuvers. We fit separate regression models for each combination of outcome, fumigant, time period, and buffer distance. We selected covariates a priori based on our previous studies of respiratory symptoms and respiratory function in this cohort. For logistic regression models of respiratory symptoms and asthma medication use, we included maternal smoking during pregnancy and signs of moderate or extensive mold noted at either home visit. We also included season of birth to control for other potential exposures that might play a causal role in respiratory disease, pollen, dryness, and mold. We defined the seasons of birth as follows: pollen, dry, and mold, based on measured pollen and mold counts during the years the children were born.
In addition, we controlled for allergy using a proxy variable: runny nose without a cold in the previous 12 months reported at age 7. Because allergy could be on the causal pathway, we also re-ran all models without adjusting for allergy. Results were similar, and therefore we only present models controlling for allergy. Additionally, for spirometry analyses only, we adjusted for the technician performing the test and the child's age, sex, and height. We included household food insecurity score during the previous 12 months, breastfeeding duration, and whether furry pets were in the home at the 7-year visit to control for other factors related to lung function. We also adjusted for mean daily concentrations of particulate matter with aerodynamic diameter ≤ 2.5 µm during the first 3 months of life and whether the home was located ≤ 150 m from a highway in the first year of life, determined using GIS, to control for air pollution exposures related to lung function. We calculated average PM2.5 concentration in the first 3 months of life using data from the Monterey Unified Air Pollution Control District air monitoring station.
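As a point of reference for the modeling approach above, a logistic regression with a single binary exposure estimates the familiar 2×2-table odds ratio (the exponential of the regression coefficient). A minimal sketch with hypothetical counts, not the study's data:

```python
import math

def odds_ratio(exposed_cases, exposed_noncases,
               unexposed_cases, unexposed_noncases):
    """Odds ratio from a 2x2 table: the odds of the outcome among the
    exposed divided by the odds among the unexposed."""
    return ((exposed_cases / exposed_noncases)
            / (unexposed_cases / unexposed_noncases))

# Hypothetical counts: respiratory symptoms by high vs. low fumigant proximity
or_ = odds_ratio(30, 70, 20, 80)     # (30/70) / (20/80)
print(round(or_, 3))                 # → 1.714
print(round(math.log(or_), 3))       # → 0.539 (the logistic coefficient)
```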

In all lung function models of postnatal fumigant use, we included prenatal use of that fumigant as a confounder. To test for non-linearity, we used generalized additive models with three-degrees-of-freedom cubic spline functions including all the covariates included in the final lung function models. None of the deviation-from-linearity tests was significant; therefore, we expressed fumigant use on the continuous log10 scale in multivariable linear regression models. Regression coefficients represent the mean change in lung function for each 10-fold increase in wind-weighted fumigant use. We conducted sensitivity analyses to verify the robustness and consistency of our findings. We included other estimates of pesticide exposure in our models that have been related to respiratory symptoms or lung function in previous analyses of the CHAMACOS cohort. Specifically, we included child urinary concentrations of dialkylphosphate metabolites, a non-specific biomarker of organophosphate pesticide exposure, using the area under the curve calculated from samples collected at 6 months and 1, 2, 3.5, and 5 years of age. We also included agricultural sulfur use within 1 km of residences during the year prior to lung function measurement. We used similar methods as described above for fumigants to calculate wind-weighted sulfur use, except with a 1-km buffer and the proportion of time that the wind blew from each of eight directions during the previous year. The inclusion of these two pesticide exposures reduced our study population with complete data for respiratory symptoms and lung function. Previous studies have observed an increased risk of respiratory symptoms and asthma with higher levels of p,p′-dichlorodiphenyltrichloroethane (DDT) or p,p′-dichlorodiphenyldichloroethylene (DDE) measured in cord blood. As a sensitivity analysis, we included log10-transformed lipid-adjusted concentrations of DDT and DDE measured in prenatal maternal blood samples.
We also used Poisson regression to calculate adjusted risk ratios for respiratory symptoms and asthma medication use for comparison with the ORs estimated using logistic regression, because ORs can overestimate risk in cohort studies. In additional analyses of spirometry outcomes, we also excluded those children who reported using any prescribed medication for asthma, wheezing, or tightness in the chest during the last 12 months to investigate whether medication use may have altered spirometry results. We ran models including only those children with at least two acceptable, reproducible maneuvers. We ran all models excluding outliers identified with studentized residuals greater than three. We assessed whether asthma medication or child allergies modified the relationship between lung function and fumigant use by creating interaction terms and running stratified models. To assess potential selection bias due to loss to follow-up, we ran regression models that included stabilized inverse probability weights. We determined the weights using multiple logistic regression with inclusion as the outcome and independent demographic variables as the predictors. Data were analyzed with Stata and R. We set statistical significance at p<0.05 for all analyses, but since we evaluated many combinations of outcomes, fumigants, distances, and time periods, we assessed adjustment for multiple comparisons using the Benjamini-Hochberg false discovery rate at p<0.05.

Most mothers were born in Mexico, below age 30 at time of delivery, and married or living as married at the time of study enrollment. Nearly all mothers did not smoke during pregnancy.
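The Benjamini-Hochberg step-up procedure used above for multiple-comparison adjustment is short enough to sketch directly (illustrative p-values, not the study's):

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Return indices of hypotheses rejected at FDR level alpha.

    Standard BH step-up: sort p-values ascending, find the largest
    rank k with p_(k) <= (k / m) * alpha, and reject hypotheses 1..k.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k_max = rank
    return sorted(order[:k_max])

# Hypothetical p-values from many outcome/fumigant/distance combinations
p = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
print(benjamini_hochberg(p, alpha=0.05))  # → [0, 1]
```

Note that a raw p<0.05 cutoff would have declared five of these tests significant, while the FDR correction keeps only two.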

We measured changes in total distance moved and photomotor response from behavioral assays

We initiated all acute exposure tests within 24 h of surface water collection. Based on high invertebrate mortality previously observed in water from two of the sites, we made a dilution series of our water samples to capture a wider range of toxic effects, including mortality and swimming behavior. For before-first-flush sampling, we used a dilution series of surface water concentrations—100%, 60%, 35%, 20%, and 12%—in order to evaluate the potential for a wide range of toxicological outcomes. We thoroughly mixed ambient surface water samples by agitation immediately before creating the dilutions in order to homogenize the turbidity levels between dilutions. To create the dilution series, we added control water to ambient surface water to achieve each desired concentration. We repeated this procedure at the 48 h point when performing an 80% water change on all treatment groups. For after-first-flush sampling, we used a broader dilution series—100%, 30%, 20%, 12%, and 6%—in anticipation of higher chemical concentrations based on previous studies. We tested temperature, total alkalinity, hardness, pH, and dissolved oxygen in situ using a YSI EXO1 multi-parameter water quality sonde at both test initiation and 48 h to ensure that the water remained within the acceptable ranges for D. magna. We chose exposure concentrations of CHL and IMI to mimic environmentally relevant concentrations found in monitored agricultural waterways, as well as experimental EC50/LC50 values. For both CHL and IMI, the low and high concentrations were 1.0 µg/L and 5.0 µg/L, respectively. We purchased chemicals from AccuStandard. We dissolved CHL in pesticide-grade acetone to make chemical stock solutions, subsequently diluting it with EPA synthetic control water to a final concentration of 0.1 mL/L in exposure water. Due to its solubility, no solvent was needed to make an IMI stock solution.
To account for this difference, we compared CHL treatment data to an acetone solvent control, and IMI to the EPA synthetic control water. The California Department of Food and Agriculture Center for Analytical Chemistry analyzed these chemical stock solutions via LC-MS/MS.
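The dilution series amounts to a simple volume calculation. A sketch assuming an arbitrary 200 mL test volume (the study's vessel volumes are not given):

```python
def dilution_volumes(total_ml, percents):
    """For each target % ambient surface water, return the
    (ambient_ml, control_ml) needed to make total_ml of test solution."""
    out = {}
    for p in percents:
        ambient = total_ml * p / 100.0
        out[p] = (ambient, total_ml - ambient)
    return out

# Before-first-flush series from the text: 100%, 60%, 35%, 20%, 12%
for pct, (amb, ctrl) in dilution_volumes(200.0, [100, 60, 35, 20, 12]).items():
    print(f"{pct:>3}%: {amb:.0f} mL ambient + {ctrl:.0f} mL control water")
```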

Chemical analysis of field water was conducted at the Center for Analytical Chemistry, California Department of Food and Agriculture, using multi-residue liquid chromatography tandem mass spectrometry and gas chromatography–mass spectrometry methods. Chemicals were analyzed following procedures described in the Monitoring Prioritization Model as described on the CDPR's website. Chlorantraniliprole and IMI stock solutions were also analyzed to confirm exposure concentrations. The method detection limit and reporting limit for each analyte are listed in Tables S3–S6. Laboratory QA/QC followed CDPR guidelines provided in the Standard Operating Procedure CDPR SOP QAQC012.00. Extractions included laboratory blanks and matrix spikes. We performed behavioral assays at the 96 h time points for both the chemical exposures and the field sampling exposures. We designed behavioral assays using Ethovision XT™ software, and adjusted the video settings to maximize the software's detection of D. magna. We gently transferred organisms from test vessels into randomized wells in a non-treated 24-well cell culture plate containing 1 mL of control water at 20 °C. We then left them to habituate for at least one hour before moving them to our behavioral assay setup for an additional five-minute acclimation period. The DanioVision™ Observation Chamber had a temperature-controlled water flow-through system, allowing us to keep organisms at optimal temperature throughout the assay. Our CCD video camera recorded the entire plate in which the organisms were held throughout the assay, so in this case 24 individuals were assessed at the same time. Using the Ethovision XT™ software, we then analyzed each video frame, identifying the location of the organisms at each time point. Calculations were carried out to produce quantified measurements of the organisms' behavior, including both total distance moved and velocity.
This assessment of horizontal movement over time, measured as total distance moved, is useful when trying to determine the changes in locomotor ability of organisms after exposure to pesticides. This system also allows us to control the dark:light cycle throughout the assay in order to measure endpoints related to a light stimulus, including photomotor response. We measured significant changes in photomotor responses as the change in mean distance traveled between the last 1 min of a light photoperiod and the first minute of the dark photoperiod, as described in Steele et al.
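The two behavioral endpoints reduce to simple computations on the tracked positions and the 1-min time bins. A sketch with toy data (the tracking itself is done by the Ethovision software; the values below are invented):

```python
import math

def total_distance(track):
    """Sum of frame-to-frame Euclidean displacements for one (x, y) track."""
    return sum(math.dist(a, b) for a, b in zip(track, track[1:]))

def photomotor_response(distance_per_min, light_end_idx):
    """Change in distance moved between the last minute of a light
    period (at index light_end_idx) and the first minute of the
    following dark period."""
    return distance_per_min[light_end_idx + 1] - distance_per_min[light_end_idx]

# Toy track: an organism moving along two straight segments
track = [(0, 0), (3, 4), (3, 10)]            # 5 units + 6 units
print(total_distance(track))                 # → 11.0

# Toy per-minute distances: bins 0-4 under light, bins 5-9 under dark
bins = [12.0, 11.5, 11.0, 10.8, 10.5, 4.0, 3.5, 3.2, 3.0, 2.9]
print(photomotor_response(bins, light_end_idx=4))  # → -6.5 (a freeze response)
```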

We checked data sets for normality using a Shapiro–Wilk test and applied log transformations before statistical analysis. We used a repeated-measures ANOVA to analyze the effects over the light period, with treatment as the between-subject factor and time as the within-subject factor. We applied Dunnett's multiple comparison test for post hoc evaluation. Data are represented as mean ± standard error of the mean. We exported summary statistics from Ethovision XT using 1 min time bins for each treatment and analyzed the data in GraphPad Prism, version 9.0. We determined the significance of mortality data by one-way analysis of variance followed by Dunnett's test for multiple comparisons using GraphPad Prism, version 8.0. To measure the photomotor response of the organisms, we calculated the difference in distance moved between the last minute of the dark period and the first minute of the subsequent light period for each individual. These data sets were then log transformed and analyzed in GraphPad Prism using a one-way ANOVA with Tukey's post hoc test for multiple comparisons.

Chemicals detected in the water samples collected in September are shown in Table S1, and are described in further detail in Stinson et al. 2021, a parallel study. In brief, of 47 pesticides analyzed, 17 were detected in our surface water samples, and each site contained a minimum of 7 target pesticides. Chlorantraniliprole was detected at all sites at concentrations below the acute lethality benchmarks for invertebrate species exposure. The neonicotinoid IMI was detected above the EPA benchmark for chronic invertebrate exposure, and above the acute invertebrate level at Alisal Creek. Neonicotinoids were detected at all sites. Organophosphates were detected at two of the sites: Quail Creek and Alisal Creek. Several pyrethroids were detected at levels at or above an EPA benchmark, including permethrin, lambda-cyhalothrin, and bifenthrin.
Several other chemical detections exceeded EPA benchmark values. Notably, methomyl was detected at Quail Creek at nearly three times the limit for chronic fish exposure, and above the EPA benchmark for chronic invertebrate exposure at all sites. Overall, Salinas River contained the smallest total number of chemicals at the lowest concentrations of the three sites we examined. Chemicals detected in water samples collected in November are shown in Table S2. Of 47 pesticides analyzed, 27 were detected in our surface water samples, and each site contained a minimum of 21 target pesticides.
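Screening detections against aquatic-life benchmarks, as done above, can be sketched as a simple lookup. The concentrations and benchmark values below are hypothetical placeholders, not the study's measurements:

```python
def benchmark_exceedances(detections, benchmarks):
    """Flag analytes detected at or above an aquatic-life benchmark.

    detections: {analyte: measured concentration (ug/L)}
    benchmarks: {analyte: benchmark concentration (ug/L)}
    Returns {analyte: fold-exceedance} for flagged analytes.
    """
    return {a: round(c / benchmarks[a], 2)
            for a, c in detections.items()
            if a in benchmarks and c >= benchmarks[a]}

# Hypothetical values for illustration only
detections = {'imidacloprid': 0.4, 'chlorantraniliprole': 0.02, 'methomyl': 2.3}
benchmarks = {'imidacloprid': 0.01, 'chlorantraniliprole': 4.5, 'methomyl': 0.8}
print(benchmark_exceedances(detections, benchmarks))
```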

Chlorantraniliprole was detected at all sites below the lowest benchmark. The neonicotinoid IMI was detected above the EPA benchmark for chronic invertebrate exposure at Salinas River, Alisal Creek, and Quail Creek. Neonicotinoids and organophosphates were detected at all sites. Several pyrethroids were detected at levels at or above an EPA benchmark, including permethrin, cyfluthrin, lambda-cyhalothrin, bifenthrin, fenpropathrin, and esfenvalerate. Overall, Salinas River contained the smallest total number of pesticides at the lowest concentrations of the three sites we examined. Repeated measures ANOVA showed there were no time-by-treatment interactions, but there were significant effects of treatment on locomotor activity. Daphnia magna exposed to 35% and 20% surface water from Alisal Creek exhibited significant hypoactivity compared to the control group under light conditions. Additionally, D. magna exposed to 20% surface water from Alisal Creek exhibited significant hypoactivity compared to the control group under dark conditions of the behavioral assay. Daphnia magna exposed to the highest tested concentration of surface water from Alisal Creek were significantly hypoactive during the last 5 min of the exposure period. Organisms exposed to all concentrations of surface water from Salinas River were hyperactive under light conditions, with the two highest concentrations showing the greatest hyperactivity when compared to controls. There was no difference in total distance moved between organisms exposed to the Salinas River dilution series and the control group individuals in the dark period. The photomotor response for organisms exposed to surface water from both Alisal Creek and Salinas River followed a clear log-linear dose-response curve. Both the control and solvent control groups exhibited a reduction in movement consistent with a freeze response.
Overall, Alisal Creek-exposed organisms showed a greater magnitude of change than Salinas River-exposed organisms. There were significant changes in photomotor response across all treatment groups, though responses differed between sampling sites. Daphnia magna exposed to water samples from Quail Creek demonstrated an inverse dose-response pattern, where exposure to the lowest dilution gave the most significant change in photomotor response, and exposure to the highest dilution was not significantly different from control groups. The Alisal Creek treatment groups exhibited a non-monotonic dose response, with organisms exposed to the medium dosage having little to no response to light stimulus. The low dilution had a significantly lessened photomotor response pattern, and the highest dilution was not significantly different from the control group. Daphnia magna exposed to all concentrations of surface water from Salinas River had significantly altered photomotor responses as compared to controls. Organisms exposed to undiluted water samples from Salinas River demonstrated an opposite startle response of equal magnitude to the control's freeze response.

Physicochemical parameters for the exposure period are listed in Table S9. We measured no significant mortality in D. magna after exposure to CHL or IMI, at either the high or low concentration, following the 96 h acute exposure period.

Repeated measures ANOVA showed there were no time-by-treatment interactions for any experiment, but there were significant effects of both time and treatment, individually, on locomotor activity in the CHL/IMI data sets. Both the control and solvent control groups exhibited a large photomotor response consistent with freezing. After exposure to the low level of CHL, D. magna showed hypoactivity under dark conditions. For D. magna exposed to both low and high treatments of IMI, we saw significant hypoactivity during the entire behavior assay period, under both light and dark conditions. Exposure to mixtures of CHL and IMI resulted in divergent total distance moved measurements under both light and dark conditions. Individuals from the low CHL/low IMI treatment group were hypoactive in dark conditions. In contrast with the single chemical exposures, individuals from the high CHL/low IMI treatment group were hyperactive under light conditions. We measured significant changes in photomotor responses between the last 1 min of a light photoperiod and the first minute of the dark photoperiod. The change in total distance moved during the dark:light transition is shown in Figure 3D–F. For both CHL treatments, organisms exhibited no response to light stimulus, representing a nearly 60-fold difference in response from the control group. Organisms exposed to low IMI had an inverse response to light stimulus when compared to the control group, increasing their total distance moved in response to light stimulus. Organisms exposed to high IMI exhibited a reduction in their average total distance moved, but this response was fivefold smaller than that of controls. Mixtures of CHL and IMI resulted in the most divergent photomotor response when compared with controls. Daphnia magna in all binary treatment groups, with the exception of the low CHL/low IMI group, showed an inverse photomotor response from controls.
Surface water at all sites contained CHL and IMI as components of complex chemical mixtures, both before and after a first flush event. Several chemicals detected at these sites are known to have sublethal effects on D. magna, including IMI, CHL, bifenthrin, clothianidin, malathion, methomyl, and lambda-cyhalothrin. The changes in pesticide composition and concentration between the sampling dates were consistent with results from previous chemical analyses in this region. Pesticides of concern, including CHL and IMI, were detected at higher concentrations after the first flush event. A study examining first flush toxicity in California found that the concentration of pollutants was between 1.2 and 20 times higher at the start of the rainy season versus the end. Interestingly, the sampling site with the highest increase in concentration after first flush, for several pesticides of concern, was the Salinas River site.

Discharges from agricultural non-point sources are inherently difficult to monitor because they are diffuse in nature

Agencies that supported the survey included the Monterey County Farm Bureau, the University of California Extension, the Agriculture and Land Based Training Association, and the Agricultural Water Quality Agency. Each agency requested results from the survey, as well as a presentation to their organization. Additionally, I plan on distributing a two-page summary of results to all growers who participated in the survey. Another part of this doctoral research that helped forge partnerships was my work on Chapter 5. Data analysis in this chapter included spatial analysis of regional pesticide use over the past 13 years. In designing this chapter, I met with third-party monitoring agencies, G.I.S. technicians, and faculty members to ensure that the highest quality data was used and that the research results would be of use to growers and policymakers. The spatial analysis of several pesticides known to be sources of water column and sediment toxicity in the region shows the impacts, both negative and positive, of the primary regional agricultural water quality mandate, which specifically targets two organophosphate pesticides. Results have already been distributed to Regional Water Quality Control Board staff members, who have passed them along to other networks and agencies. Research results from this dissertation have been and will continue to be shared with academic audiences, agricultural operators, policymakers, water quality agencies, and the general public in peer-reviewed publications, conference proceedings, reports, magazine articles, poster presentations, and oral presentations. Links to all published research are posted on my graduate student website. Throughout the data collection process, I maintained thorough records in both my notebooks and on electronic devices, and all stored electronic data have been backed up and preserved.
Records of all interviews, survey questions and responses, datasets, and methodologies were retained to ensure reproducibility. I received exemption from IRB review for both the interviews and the survey conducted in this research.

Agricultural non-point source pollution—runoff and leaching of nutrients, pesticides, and soil sediments into nearby water bodies—is the chief impediment to achieving water quality objectives throughout the U.S. and Europe. Because these sources are diffuse rather than emitted from an identifiable pipe or outfall, policymakers cannot employ the old standbys used to regulate point sources of pollution. Instead, regional, state, and federal agencies have typically relied on voluntary, incentive-based approaches to manage non-point source pollution. Such approaches have resulted in largely unsuccessful agricultural NPS control. In the U.S., agricultural pollution is the leading cause of pollution to rivers and lakes. And in Europe, agriculture contributes 50-80% of the total nitrogen and phosphorus loading to the region's fresh waters and sea waters. The inadequacies of current approaches have triggered academic and regulatory discussions about how to proceed with abating non-point sources. These issues pose particularly challenging questions about appropriate regulatory tools, jurisdictional boundaries, funding needs, monitoring requirements, pollution permit allocations, and stakeholder collaboration. Drawing from the environmental policy and environmental economics literature as well as case studies from the U.S. and Europe, the aim of this chapter is to assess agricultural NPS pollution management approaches and the factors that drive or impede their implementation and enforcement. The E.U.'s recent Water Framework Directive presents an opportunity to build on lessons of the earlier-promulgated 1972 U.S. Clean Water Act, while the U.S. can benefit from the implementation and enforcement of effective European water pollution controls. This research presents several policy tool frameworks to help characterize the widespread non-point source pollution problem in the U.S. and Europe, distinguishing its unique set of hurdles from other environmental policy problems.

Findings suggest that controlling numerous diffuse sources of agricultural pollution requires an integrated approach that utilizes river basin management and a mix of policy instruments. Additionally, this chapter finds that transitioning from voluntary mechanisms to more effective instruments based on measurable water quality performance relies predominantly on three factors: more robust water quality monitoring data and models; local participation; and political will.

Since the passage of revolutionary water quality policies in the 1970s, the U.S. and Europe have seen significant water quality improvements in point source discharges—defined as any discernible, confined, and discrete conveyance. Over the past 40 years, industrial pollution and discharges of organic wastes from urban areas and publicly owned treatment facilities have dropped substantially, and dissolved oxygen levels have increased downstream from point source pollution. This success can largely be attributed to the use of a transformative technology-based command-and-control approach, which employs standards to control pollutants at the point of discharge, setting uniform limitations based on the "Best Available Technology" for a given industry. Technology-based effluent limits have been enshrined in both the 1972 U.S. Clean Water Act and various European environmental policies. The technology-based regulatory framework skillfully transformed water quality regulation for point sources into a remarkably more streamlined and simplified system with successful results; it unfortunately neglected the different and more difficult task of controlling non-point source pollution. Instead, individual states in the U.S. and Member States/river basins in Europe have been entrusted with the monumental task of NPS pollution control. The 1972 Clean Water Act and subsequent amendments largely shape present-day water quality policies.
During the drafting of the CWA, non-point source pollution was not perceived to be as serious a problem as point source pollution, and was considered only as an afterthought. Prior to 1972, the nation's general approach to water pollution was disjointed and highly variable—analogous to non-point source pollution regulation today. Control mechanisms were decentralized, which resulted in each state developing its own method of protecting water quality.

While several states attempted to implement innovative water quality standards and discharge permits, the vast majority failed to improve water quality conditions. A fundamental weakness of relying on ambient standards was that states needed to prove which polluters impaired water quality and to what extent. This endeavor was extremely difficult given that the regulatory agencies possessed very little data about the location, volume, or composition of industrial discharges. Even if data were available, water agencies were often understaffed, under-budgeted, and lacking adequate statutory authority. By the 1960s, many of the country's rivers and streams had reached such abominable conditions that a growing population of frustrated U.S. citizens turned to the federal government for help. After years of delay and struggle, the U.S. was ready to formulate a comprehensive, unified regulatory structure, resulting in the 1972 Clean Water Act. The Act employed a command-and-control approach to implement technology-based standards, enforced by National Pollutant Discharge Elimination System permits. This approach, aimed at controlling pollutants at the point of discharge, set uniform limitations based on the best available technology pertaining to a particular industrial category. To implement and monitor performance, every point source was required to obtain a permit to discharge. Under this innovative system, enforcement officials need only compare the permitted numerical limits with the permittee's discharge. Technology-based effluent limits have transformed U.S. water quality regulation into a remarkably more streamlined and simplified system with successful results. In addition to the technology standards, the drafters of the Clean Water Act held on to the historic water quality-based approach, despite its observed inadequacies.
In an attempt to bridge the gap between discharges and clean water, dischargers were expected to comply with more stringent, individually-crafted effluent limitations based on water quality standards. This additional control tool is only implemented when technology-based controls are not sufficient to meet beneficial uses. The process entails a few ostensibly straightforward steps: first, the state lists each impaired waterbody within its jurisdiction; second, the state designates a "beneficial use" for each waterbody; third, a Total Maximum Daily Load or "TMDL" for each waterbody is calculated based on the designated beneficial use; and finally, a portion of the load is allocated to each point or non-point source. However, the fundamental problem of TMDLs is that they must be translated into specific numerical discharge limitations for each source of pollution. This endeavor is often prohibitively expensive and extremely difficult given that every step of the regulatory process—from identifying and prioritizing impaired waterbodies to allocating emissions loads to measuring the program's success—suffers from insufficient and poor-quality information. Monitoring data are needed to assess, enforce, evaluate, and use as a baseline for modeling efforts. The task of collecting these emissions data—identifying polluters that are difficult to pinpoint, monitoring discharges that are stochastic and virtually impossible to track, and connecting diffuse effluents back to their sources—is so problematic that these discharges have been stamped "unobservable". The paucity of information is often the result of another, more tangible limitation when implementing non-point source pollution abatement mechanisms: budgetary and administrative constraints. Funding the monitoring efforts as well as the staff time to adequately oversee water pollution control efforts is an obligatory, but often missing, component in water management programs.
A lack of enforcement in areas where management practices are not protecting water quality also remains a widespread problem throughout agricultural non-point source (NPS) programs.

While individual river basins and states have varying water quality issues and employ slightly different approaches to abate non-point source pollution, each bears the burden of these same hindrances. Clearly, the challenges and complexities of non-point source water pollution are not amenable to the technology- and emissions-based policy tools historically used. Current discussions on how to proceed with non-point source pollution abatement strangely and sadly mirror those that occurred over forty years ago. In describing the difficulty of implementing water quality standards in the 1960s, Andreen presents several questions still debated today: How should regulators allocate the capacity of a stream to a multitude of diffuse dischargers? Should the allocations be recalculated every time there is a new or expanded discharge? What should be the boundaries of a receiving waterbody: an entire river system, or should each tributary be considered separately? Likewise, Houck describes the current state of U.S. non-point source pollution policy as “slid[ing] back into the maw of a program that Congress all but rejected in 1972, among other things, its uncertain science and elaborate indirection.”

Similar to the U.S., the first surge of European water legislation began in the 1970s. This “first wave” was characterized by seven different Directives, which were initiated by individual Member States with little coordination with the larger E.U. community. During the late 1990s, mounting criticism of the fragmented state of water policy drove the European Commission to draft a single framework for managing water issues. The resulting legislation, the Water Framework Directive (WFD), has been championed as “the most far-reaching piece of European environmental legislation to date”. Adopted in December 2000, the WFD replaced the seven prior “first wave” directives.
Just as the Clean Water Act passes authority down to the states in the U.S., the WFD gives each Member State and its river basins the same responsibility. Under this “second wave,” the WFD requires that River Basin Management Plans (RBMPs) be established and updated every six years. The RBMPs specify how environmental and water quality standards will be met, allowing local authorities the flexibility to comply as they best see fit. The WFD mandates that all river basins achieve “good” overall quality, and that more stringent standards be applied to a specific subset of water bodies: those used for drinking and bathing, and those in protected areas. Two additional requirements of the WFD are economic analyses of water use and public participation in the policy implementation process. The E.U. chose management at the river basin level, a hydrological and geographical unit rather than a political boundary, to encourage a more integrated approach to solving water quality problems. Another distinguishing aspect of the WFD is its “combined approach,” which guides Member States’ choice of policy tools. Similar to the U.S. CWA approach, technology controls based on Emission Limit Values, such as those embedded in the earlier E.U. Integrated Pollution Prevention and Control (IPPC) Directive, are implemented first. The IPPC works similarly to the U.S. NPDES permit system, requiring all major industrial dischargers to obtain a permit and comply with specific discharge requirements. If these emissions- and technology-based instruments are not sufficient to meet water standards, then Environmental Quality Standards are employed. The Water Framework Directive provides opportunities and challenges for all actors involved: Member States, the European Commission, and candidate countries.