
The Center aims to develop next-generation technologies to realize IoT-enabled precision agriculture

IoT4Ag launched its collaborative programs across the four NSF ERC pillars of convergent research, engineering workforce development, diversity and culture of inclusion, and innovation ecosystem. IoT4Ag research is creating novel, integrated systems that capture the microclimate and spatially, temporally, and compositionally map heterogeneous stresses for early detection and intervention, ensuring better outcomes in agricultural crop production. The Center is working to realize IoT technologies that optimize practices for every plant, from sensors, robotics, and energy and communication devices to data-driven models informed by plant physiology, soil, weather, management practices, and socio-economics. Diverse participant groups have been, and continue to be, recruited and educated through IoT4Ag workforce development and diversity and culture of inclusion programs, building the strong science and engineering knowledge needed to create transformative, socially just engineered products and systems. The Center is working to build a workforce able to discover, innovate, translate, and practice precision agriculture solutions. IoT4Ag has established, and continues to expand, an innovation ecosystem and network with academic, industry, investment, and government partners and the end-user farming community to collaboratively build the future of precision agriculture. IoT4Ag’s research program aims to transform agriculture today and invent integrated systems to realize the farm of the future. IoT4Ag is working to create next-generation IoT sense-communication-response technologies and to establish engineered integrated systems for precision farming of tree crops and row crops, mainstays of the food supply chain.

The Center’s research is driven by the agriculture-specific use case of IoT, e.g., its scale, environment, and socioeconomics. It is pushing fundamental scientific understanding and bringing together the tools of our disciplines, i.e., agronomy, agricultural engineering, agricultural economics, environmental science, chemistry and chemical engineering, computer science, and electrical, materials, mechanical, and systems engineering. It is propelling us, in partnership with our innovation ecosystem, to create “IoT4Ag breakthrough technologies” in sensors, robotics, and energy and communication devices that inform data-driven models constrained by plant physiology, soil, weather, management practices, and socioeconomics, enabling the optimization of farming practices for every plant. Integrated systems engineered from these technologies are being designed to capture the microclimate and spatially, temporally, and compositionally map heterogeneous stresses for early detection and intervention to ensure better outcomes in agricultural crop production. The Center is structured into three thrusts that vertically integrate fundamental knowledge and technology from different disciplines and that are horizontally integrated to achieve next-generation engineered systems for agriculture. The “flow” or “wiring” diagram in Fig. 3 portrays the structure and connectivity of the three thrusts, and the requisite convergence of disciplinary expertise, needed to realize the sense-communication-response integrated systems of the farm of the future. Plant and environmental scientists are exploring the biotic and abiotic variables that affect crop health and are working together with engineers to design and specify sensors, embedded in the field, that measure these variables from above and below the soil surface. Multi-mode sensors are being co-designed and co-created with energy and communications technologies for the agricultural use case, which calls for sensor systems that require zero or near-zero power, are low cost, can be deployed at large scale, are biocompatible/biodegradable, and can operate below the soil surface and in or below the canopy. Signals are transmitted at the “edge” to existing farm machinery or to ground and aerial robots that are being adopted by the farming community.

Robots are being co-designed and equipped with energy and communications technologies to allow autonomous, coordinated multi-robot excursions at the large scale of agricultural fields and to receive and process signals at the edge, directly imaging the field and indirectly imaging sensors from above and below the canopy. A suite of Ag-specific backhaul technologies is being investigated to transmit signals to the cloud in the characteristically remote and “unconnected” environments of agricultural fields. Multiple-instance, multiple-resolution sensor fusion techniques are being developed to unite the spatially, temporally, and compositionally heterogeneous sensor data. Models that are data-driven yet constrained by the biophysics of plant physiology, the soil, weather, and management practices are being created to “make the invisible visible” and provide “better data”. These models are being used to build a decision-Ag interface, which, coming full circle, allows farmers to intelligently manage their fields to ensure crop yield and resiliency in a cost-effective manner. Thrust 1 research is in the design and manufacture of resilient, networked, intelligent sensor-robotic systems that monitor the state of plant and soil health over extended areas. Thrust 1 is addressing fundamental scientific questions to uncover how the complex system of abiotic and biotic variables affects crop yield and resilience, and with this knowledge is designing technologies and systems that will be deployed with the spatial, temporal, and compositional resolution needed to capture the state of the field. Thrust 1 unites faculty research groups from eight departments across all four partner universities with expertise in plant and environmental science and in sensors, robotics, and mapping of agricultural fields. Thrust 2 research is in enabling advanced approaches for powering IoT devices and robots in the field and for data communication from heterogeneous platforms of sensors, robots, and farming equipment. Thrust 2 is working to establish the knowledge and technologies specifically needed in agriculture, from powering devices and communicating from below the soil surface to deploying technologies at field scales.

Thrust 2 is composed of faculty groups from four departments and three of our universities with expertise in IoT sensor and robotic power and in edge and backhaul communication. Thrust 3 research is in building and deploying smart response systems driven by machine learning and decision-based models for precision agriculture. Thrust 3 is creating techniques to manage uncertainty and fuse the spatially, temporally, and compositionally heterogeneous data from the field to collect not just more, but better data. The thrust is building models, constrained by the biophysics of plants in agricultural fields, to establish a decision-Ag interface for growers to intelligently manage their fields in a cost-effective manner. Thrust 3 brings together faculty groups from seven departments and our four universities with expertise in machine learning and sensor fusion and in controls and decision-agriculture architectures. Fig. 4 is a Milestone Chart describing the work of the thrusts to deliver IoT4Ag technologies and to increase their complexity and scale over the lifetime of the Center to realize the two IoT4Ag testbeds, i.e., 1) Integrated Systems for Precision Farming of Row Crops and 2) Integrated Systems for Precision Farming of Tree Crops. In Year 1, 28 multi-institutional, multi-disciplinary, multi-thrust research projects were launched, vertically integrating the ERC three planes of fundamental knowledge, enabling technologies, and integrated systems across the three horizontally integrated research thrusts. A number of projects are operating within the IoT4Ag testbeds. The fundamental knowledge and enabling technologies are intimately connected. For example, IoT4Ag is working to probe the theoretical limits of electromagnetics, important to understanding signals in the soil, canopy occlusion, and signal interference, in order to create a suite of Ag-specific communication technologies that connect sensors located in remote and obstructed agricultural environments to the cloud. The Center is advancing materials properties and processes, e.g., from host-guest chemistry to low-cost, processable, biodegradable, and biocompatible materials, to realize sensors that measure variables of interest and energy devices that operate in the soil and allow agricultural field-scale measurements. IoT4Ag is developing machine learning approaches to deliver robust predictive models that effectively capture site-to-site variability due to environmental changes, and decision science to synthesize decision-Ag interventions that are interpretable, risk-based, and economically feasible. Finally, and coming full circle, IoT4Ag sense-communication-response technologies are impacting agronomy, addressing fundamental scientific questions such as understanding how abiotic and biotic variables affect crop yield and resilience. The Center is educating diverse groups of students and professionals to build and practice precision agricultural science, IoT technologies, and systems. IoT4Ag is engaging K-12 and community college students through exhibits, kits, and lessons/labs with our partner schools, museums, and organizations; high-school students and teachers and community college and undergraduate students in research experiences; PhD students and postdoctoral fellows in interdisciplinary research and intra-Center and international exchange; and agricultural professionals and growers through IoT4Ag Ag-extension programs.
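As a purely illustrative sketch of the kind of multi-resolution data fusion described for Thrust 3 (not the Center's actual pipeline), the snippet below aggregates hypothetical soil-moisture readings from sensors with different positions and sampling times onto a common spatial grid and hourly time step; all column names, grid sizes, and values are assumptions made for the example.

```python
# Illustrative only: fuse heterogeneous sensor readings onto a common
# spatio-temporal grid (hourly bins x 10 m cells). Column names are hypothetical.
import pandas as pd

def fuse_to_grid(readings: pd.DataFrame, cell_m: float = 10.0) -> pd.DataFrame:
    """readings columns: timestamp, x_m, y_m, soil_moisture (one row per sample)."""
    df = readings.copy()
    df["timestamp"] = pd.to_datetime(df["timestamp"])
    # Snap each reading to a grid cell and an hourly bin.
    df["cell_x"] = (df["x_m"] // cell_m).astype(int)
    df["cell_y"] = (df["y_m"] // cell_m).astype(int)
    df["hour"] = df["timestamp"].dt.floor("h")
    # Average co-located readings; the sample count is a crude confidence measure.
    return (df.groupby(["hour", "cell_x", "cell_y"])["soil_moisture"]
              .agg(mean="mean", n="count")
              .reset_index())

demo = pd.DataFrame({
    "timestamp": ["2024-05-01 10:05", "2024-05-01 10:40", "2024-05-01 10:20"],
    "x_m": [3.0, 4.0, 18.0],
    "y_m": [2.0, 2.5, 9.0],
    "soil_moisture": [0.21, 0.23, 0.30],
})
print(fuse_to_grid(demo))
```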

IoT4Ag is committed to creating, sustaining, and promoting a diverse community by developing and delivering programs, based on good practices, that create transformative changes in the engagement, equity, and inclusion of diverse groups in science and engineering and in the practice of agriculture, and that foster a lasting sense of belonging for Center members and a positive, productive, collaborative climate. The IoT4Ag ERC provides a platform of disciplinary, institutional, and demographic diversity amongst the core institutions and its partners in research, workforce development, and the innovation ecosystem to unite and include diverse groups as they educate each other and work collaboratively toward the common goal of realizing food, energy, and water security to benefit society through the development of transformative, socially just engineered products. Diversity & Culture of Inclusion educational programs foster critical reflection about issues that intersect innovation and equity, such as facilitating technological access in underserved communities, ethics in agriculture, data governance, and algorithmic and implicit bias. IoT4Ag research efforts will lead to systems that combine state-of-the-art sensors, robotics, communications, and data science approaches for monitoring the state of a field of crops with high spatial and temporal resolution and for making decisions on these data using bio-physically informed models. Achieving the IoT4Ag mission of creating and translating these precision agriculture technologies and systems is necessary to realize the overarching vision, outlined in Section I, of improved crop yields with less water, energy, and fertilizer use, but it may not be sufficient. Technical and non-technical challenges that are outside the scope of the Center could limit the impact of IoT4Ag technologies and systems and prevent the vision from being achieved. Three primary risks are briefly discussed here. First, the Center will have the highest impact if local interventions can be made quickly and cost-effectively based on the data and insight provided by IoT4Ag integrated systems. The development of intervention approaches is not within the scope of IoT4Ag’s work, so these technologies must be developed by other researchers and companies. If approaches for local interventions do not advance quickly enough, IoT4Ag systems may have less impact than anticipated. To mitigate this risk, we are creating systems designed to work with both existing and more nascent local interventions. Furthermore, we are continually keeping track of the state of intervention technologies, in part through connections with industry members and end-users in the Center, and will adjust our technology road map based on internal and external advances. Second, IoT4Ag systems are being developed to take advantage of data from multiple sources, including our own sensors, commercial sensors and agricultural equipment, and public and private sources. Standards and policies for accessing, sharing, and using data from a number of these sources are quite variable and are evolving as precision agriculture technologies develop. We are working to mitigate this risk by engaging stakeholders, including end-users and agricultural companies, through the Center to understand perceptions and expectations regarding data privacy and accessibility. As we develop decision systems in Thrust 3, issues related to data standards and access are actively being addressed in our projects.
Finally, IoT4Ag systems will only have impact if the technologies are adopted by end users. Adoption is not guaranteed even if the systems are engineered to meet performance targets and economic constraints. Adoption will also require education of end users on the benefits and implementation of IoT systems, which differ from existing management practices. To mitigate this risk, we are engaging, and will continue to engage, with members of IoT4Ag’s ASAB and agricultural professionals, including crop consultants, through our research on adoption and our professional education activities as part of our workforce development, to identify routes for and broadly disseminate information about IoT4Ag systems. The IoT4Ag logic model for the Center’s convergent research pillar is shown in Fig. 5. The model highlights the convergence of institutionally, disciplinarily, and demographically diverse IoT4Ag faculty and students from academia with partners in education, government, industry, and the end-user farming community.

The positive effect of linear habitat is more pronounced for bat species with structure-bound ecologies

Training programs should be led by African farmers who understand the nuanced underpinnings of each respective region, as opposed to outside actors. Some argue that “food sovereignty based on genuine agrarian reform, and the defense of land territory against land grabbing, offers a real alternative … is the only way to protect national food economies from predatory dumping, hoarding, and speculation”. However, if government self-sufficiency programs involve no costs to farmers, there is no competition or innate bias towards the wealthy who would theoretically consolidate and redistribute land at the expense of poorer farmers. In this model, it would be most important for peasant farmers to be represented in meetings with leadership, to ensure the voices of their constituents are heard and integrated into the planning and implementation of food projects, but smallholders do not necessarily need to possess control over the food system in order for food security to be restored. The government has an important role in streamlining and managing collections of effective indigenous stores of knowledge and heirloom seeds, consolidating and dispersing them through state programs that involve free or low-cost training and seed. It is observed that “[w]hen profits are not forthcoming for commercial products, companies lose interest in agricultural development, soil conservation … and in peasant farmers as consumers of seeds and fertilizers and producers of commodities”. Furthermore, when a nation’s exports diminish in value, its people do not become less hungry nor burn less energy; they must still be fed in the absence of capital to purchase imports. It is the duty of governments to prioritize efforts for the public good where private enterprise and international markets fail to do so. The economic incentive for leadership lies in having a well-fed, and therefore healthy and cognitively engaged, workforce, and in the ability to eventually invest in export-focused opportunities with less risk once populations are adequately fed on domestic supplies.

It is imperative that at least some of the current export-focused efforts are redirected towards food self-sufficiency until rates of malnutrition decline and hunger is remedied. As previously mentioned, rural areas of SADC nations lack much-needed infrastructure and surplus storage facilities. This, among other things, eventually must change. In the meantime, more attainable goals include cultivating and pooling knowledge of agroecological methods and disseminating that knowledge through government extension networks. State programs such as these formerly existed in Southern and East African nations but were cut in favor of neoliberal interventions under the strict guidelines of structural adjustment programs. In the 1960s and 1970s, newly independent African nations deliberately focused on self-sufficiency efforts, even as almost all export relationships with former colonial powers remained intact, an example for the present day. The production-driven approach of the first Green Revolution had many shortcomings. Participation in initiatives that promised greater yields, and subsequent higher earnings, required upfront investments that priced out certain groups. These programs privileged the relatively well off, leaving broadening wealth gaps in their wake and increased poverty and displacement among those hit hardest by hunger and malnutrition. Lessons from the past are meant to better inform the present, yet new initiatives on the African continent, specifically those led by the Gates Foundation’s Alliance for a Green Revolution in Africa (AGRA) and the agrochemical company Monsanto, are repeating these mistakes. Numerous assessments of AGRA programs in East and Southern Africa determined that “[t]here was no clear trend between the increased use of Green Revolution technologies and nutritional outcomes; instead, it depended on the particular historical, social and political context under which the changes took place. Gender and class relationships played a critical role in determining who gained from these technologies”. It is curious that contemporary initiatives continue to follow failed models.

Perhaps the repackaging and rebranding of past ideas has rendered old strategies unrecognizable to those who witnessed the first Green Revolution, while presenting them as new innovations to those unaware of the 20th-century program. Agribusiness corporatists have appropriated terms commonly used by grassroots and socially focused organizations that promote food self-sufficiency and bottom-up agriculture, conflating opposing ideologies and effectively convincing audiences and donors, as well as participants, that all programs which evoke the terms “food security” and “hunger alleviation” are equal. Often, vulnerable populations, having been promised higher incomes and crop yields, readily convert lands for intensive agriculture before fully comprehending the complete and ongoing costs involved. Producing surpluses without the capacity to dry, preserve, store, and transport them results in food waste and continued missed opportunities for smallholders, who have fruitlessly invested in exports without access to buyers. Such methods are incongruent with the fiscal realities and sociocultural orientations of the communities in which they are being promoted. Africa needs a new plan. Case studies of recent AGRA programs in Malawi and Tanzania demonstrate that costs are prohibitive for many smallholders, and those who take out loans or invest in the upfront costs of participation see those costs offset by neither domestic nor export sales. In the best outcomes, smallholders quickly determined that these programs were not profitable, lost interest, and had remaining capital with which to return to producing locally preferred varieties. Export-focused initiatives do not immediately address the local need for a greater diversity and quantity of food. Production-driven efforts for export have been confused with hunger-alleviation campaigns, but a corporate desire to prematurely push into new markets must not be mistaken for aid work. These initiatives carry multiple implied costs which go unmentioned in agribusiness marketing and branding schemes. In addition to royalty payments on proprietary seed, farmers pay for costly inputs and fertilizers, since such seed lacks resistance to local pests and climates. Industrial agriculture also produces waste contaminants in freshwater sources which need to be mitigated and managed using costly monitoring and purification technologies that rural Africa is unprepared to take on.

While it is easy to agree that there is a need to employ better techniques to improve food production and the distribution of surpluses in Southern and East Africa, it is not necessary that these approaches involve expensive proprietary seed. SADC nations need to adopt solutions which are tailored to the region, not blindly integrate those which have worked in the United States or for the Asian Tigers. Addressing non-seed issues related to outputs, such as poor transportation, storage, and soils, would have a substantially positive impact on food security for the continent. But the most immediate solution harnesses local talent and techniques and pushes these knowledge sets and applications across broader zones through state-supported programming. Farmer-managed seed systems capture existing community assets, and governments can multiply their efforts. Intensive agriculture is a major driver of biodiversity loss, and predicted intensification of agriculture suggests major shifts in land use patterns and biodiversity. Agricultural intensification is characterized by increased chemical and mechanical inputs, limited non-crop vegetation, and lower levels of planned biodiversity. Although intensive agricultural production tends to erode biodiversity, ecological communities provide substantial benefits to humans, such as suppression of crop pests. In many agroecosystems, insectivorous bats facilitate crop production by suppressing economically important insect pests. The negative consequences of intensive agricultural systems on biodiversity and ecosystem services have spurred the development of agroecological farming schemes that promote ecological interactions, lead to the provisioning of ecosystem services, and support biodiversity. Through the diversification of crops and habitats and the reduced use of pesticides, agroecological practices may improve habitat quality for insectivorous bats. These practices may increase bat dispersal across the landscape and provide more stable populations of insect prey, although bats in different functional guilds may have different responses to these practices. The addition of linear habitat – strips of perennial vegetation, such as treelines and hedgerows – can increase bat activity because many bat species utilize linear habitat as flyways for foraging and commuting. Linear habitats may reduce energy costs for commuting bats by providing shelter from wind and predators, increase foraging efficiency by concentrating insect prey, and serve as navigational aids. Open-area bats are well-suited for crossing vast agricultural fields, whereas clutter-adapted bats are more strongly associated with forest and tend to stay closer to linear habitat. Lower levels of pesticide applications and increased plant diversity may also improve foraging habitat quality for bats by providing a more abundant insect prey base, although this mechanism has not yet been tested. Insect communities are more abundant in organic systems with lower pesticide use levels. Intercropping, crop diversification, and the maintenance of non-crop vegetation can all help to maintain insect populations by providing a variety of insect habitat niches, which is especially important in annual cropping systems with frequent disturbances. Many studies that investigate the impact of agricultural intensification on bats focus on categorical comparisons of management intensity.
These studies show mixed responses, perhaps because few studies consider both local farming practices and the effect of the surrounding landscape. Categorical comparisons are limited by the reality that farming practices likely vary within and may be shared among management intensity categories, making it difficult to pinpoint which practices drive observed patterns in biodiversity. Because bats respond to factors at both local and landscape scales, landscape context must be considered when evaluating the impact of local practices on bats.

Farms with similar practices may be spatially aggregated, making it difficult to disentangle the effects of local management practices from confounding landscape factors. A nested sampling design can be used to minimize variation in the surrounding landscape when evaluating the effect of local management intensity. Accounting for specific on-farm practices and minimizing variation in the surrounding landscape between paired farms provides a more nuanced understanding of which on-farm management strategies or practices are likely to impact bat conservation outcomes. Landscape-scale conservation efforts are important for bat conservation in agricultural landscapes, but may be challenging to coordinate among multiple private landowners. In productive agricultural regions, such as California’s Central Coast Region (CCR), the high cost of cropland encourages intensification, resulting in the conversion of perennial habitat to arable fields, the destruction of edge habitat, and simplified, homogenous landscapes. With little remaining natural habitat, few incentives for growers to restore habitat, and the challenges associated with coordinated grower participation, a focus on local management practices as conservation solutions may be a more effective approach than landscape-scale conservation efforts, although the efficacy of local practices may depend on the landscape surrounding the farm. We investigate how bats use farms compared to surrounding natural habitat, assess which local practices may benefit bats, and ask whether the influence of local practices on bats depends on the surrounding land use. Specifically, we ask: 1) How do bat activity, species richness, diversity, and community composition differ among natural habitat, organic farms, and conventional farms? 2) Which on-farm management practices underlie any observed differences in bat activity, species richness, and diversity? 3) Which on-farm management practices influence insect abundance, and are these the same practices that influence bat activity? 4) Does the influence of on-farm management practices on bats depend on the amount of semi-natural habitat in the surrounding landscape? For each question, we explore bat activity for all bat species and by functional guild. We conducted acoustic surveys in the CCR and compared bat responses across site types and in response to local practices by comparing paired organic and conventional farms that vary in their adoption of agroecological farming practices. We hypothesized that focusing on specific practices would better explain bat activity, diversity, and richness than categorical comparisons between organic and conventional farms. We conducted research in the CCR, an economically and ecologically valuable area. Farms in the CCR produce 13% of vegetables in the USA. To understand how bats respond to agricultural intensification at the farm scale, we worked on farms and nearby natural areas in Santa Cruz, Santa Clara, San Benito, and Monterey Counties, CA, within a 60 km by 70 km region. We selected woodland patches as natural habitat sites because remnant woodlands are important bat habitat in agricultural landscapes. Study sites in the CCR were selected to be representative of the range of farms and remnant woodland patches present in the study area, using a combination of aerial imagery and the interest of private landowners and growers in participating in this research.

Pyrolysis temperature and time are both important variables in determining the properties of the final product

While biochar can be produced from a variety of feedstocks, the physical and chemical properties of biochar will vary depending on the type of feedstock and the pyrolysis process used to produce the material. Pyrolysis temperature refers to the highest treatment temperature (HTT) achieved during the pyrolysis process and can range between 200 and 1000 °C. Additionally, various biochars show divergent effects on soil microbial activity, transport, and diversity, likely caused by indirect changes to the soil’s chemical properties. While not yet well investigated, both the feedstock and HTT will likely affect the suitability of biochars as carrier materials. The objectives of this study were to compare biochar materials to standard carriers with respect to their ability to promote inoculum survival. Improved survival was then related to the physico-chemical properties of the biochar materials, and ideal physico-chemical properties were identified and attributed to either feedstock or pyrolysis temperature. Overall, specific biochar feedstocks and production methods are identified for optimizing biochar-inoculum formulations. Carbon and nitrogen analysis was performed on a FlashEA 1112 Elemental Analyzer. Permanganate oxidizable carbon was determined using a previously described method. Biochar pH and electrical conductivity measurements were determined using previously described methods. Briefly, 1 g of biochar was suspended in 20 mL deionized water and left shaking at 180 rpm for 1.5 h. The pH was measured using an Accumet® basic AB15 pH meter, and electrical conductivity readings were determined using an Accumet® model 20 pH/conductivity meter. Biochar surface hydrophobicity was determined for dry, fresh biochar sieved through a 0.5 mm mesh using the molarity of an ethanol drop (MED) test. MED values of 1–2 indicate hydrophilic samples, 3–4 slightly to moderately hydrophobic samples, and 5–7 strongly to extremely hydrophobic samples. As recommended by the International Biochar Initiative, specific surface areas (SSAs) were determined using the Brunauer-Emmett-Teller (BET) N2 method on an ASAP 2020 Physisorption Analyzer as outlined in the Active Standard ASTM D6556.
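As a small illustration of the MED interpretation bands quoted above, the helper below maps a measured MED value onto the corresponding hydrophobicity class; it simply restates the 1–2 / 3–4 / 5–7 ranges given in the text and is not part of the original protocol.

```python
def med_class(med: int) -> str:
    """Classify biochar surface hydrophobicity from a molarity-of-ethanol-drop (MED) value,
    using the bands quoted in the text."""
    if 1 <= med <= 2:
        return "hydrophilic"
    if 3 <= med <= 4:
        return "slightly to moderately hydrophobic"
    if 5 <= med <= 7:
        return "strongly to extremely hydrophobic"
    raise ValueError(f"MED value {med} is outside the 1-7 scale described here")

print(med_class(6))  # strongly to extremely hydrophobic
```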

The percent water holding capacity (%WHC) for the carriers was determined after the materials were saturated in water for 24 h and then allowed to air dry for 1 h. Values for %WHC were calculated as the mass of water retained in the material per g dry material × 100. The physical structure and surface pore-opening diameters for the 300 °C biochars were visualized using a Hitachi TM-1000 tabletop environmental scanning electron microscope. Pore-opening diameters were measured using TM-1000 software. Enterobacter cloacae UW5 was generously provided by Dr. Cheryl Patten. Microbial cultures were grown at 30 °C, shaking at 170 r min-1, in Luria-Bertani (LB) medium, unless otherwise specified. Electrocompetent UW5 cells were prepared using previously described methods. UW5 cells were transformed with a rhizosphere-stable plasmid, pSMC21, carrying a bright mutant of green fluorescent protein (GFP), provided here by Dr. Yanbin Guo. Transformation was carried out using 500 ng plasmid combined with 100–200 µL of competent cells, electroporated at 2.5 kV, 25 µF, and 250 Ω, using a 2 mm gap cuvette in a Bio-Rad GenePulser. Integration of plasmids was verified by selection on kanamycin medium and by microscopic observation of GFP-expressing cells. Fluorescence microscopy was performed on an Olympus IX71 fluorescence microscope using an excitation range of 533–583 nm and an emission range of 607–684 nm. The quantity of indole compounds produced by transformed cells was compared to that of wild-type UW5 using Salkowski reagent and the previously described S2/1 method. The UW5-pSMC21 transformants were screened for growth inhibition using growth curve analysis on a nutrient-rich LB medium and a carbon and nitrogen starvation response medium prepared as previously described. The stability of plasmid pSMC21 in strain UW5 was assayed over a 2-week period. Cells were transferred daily to fresh Voigt medium without kanamycin, and at 3 d intervals cultures were serially diluted and spread onto plates with and without kanamycin. The percent of cells retaining plasmids was calculated based on differences in CFU counts on these plates. An Arlington sandy loam, collected from a field with previous agricultural history at the University of California, Riverside, was passed through a 4 mm sieve and used for all treatments. To prepare the liquid inoculum, UW5-pSMC21 cultures were grown overnight to late log phase in LB + kanamycin.
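The two percentage calculations described in this section, %WHC and plasmid retention, are simple enough to write out directly; the sketch below restates them as given in the text, with the example masses and CFU counts being purely hypothetical.

```python
def percent_whc(saturated_mass_g: float, dry_mass_g: float) -> float:
    """%WHC = mass of water retained in the material per g dry material x 100."""
    return (saturated_mass_g - dry_mass_g) / dry_mass_g * 100.0

def percent_plasmid_retention(cfu_with_kan: float, cfu_without_kan: float) -> float:
    """Percent of cells retaining pSMC21, from CFU counts on plates with and without kanamycin."""
    return cfu_with_kan / cfu_without_kan * 100.0

# Hypothetical example values, for illustration only.
print(percent_whc(saturated_mass_g=3.4, dry_mass_g=2.0))   # 70.0 (%WHC)
print(percent_plasmid_retention(9.2e7, 1.0e8))              # 92.0 (% retention)
```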

Cultures were washed twice with sterile 0.85% NaCl using 30 min centrifugation steps at 4000 rpm and 4 °C. Washed cell pellets were brought to one-half the initial culture volume with sterile 0.85% NaCl. This constituted the liquid inoculum, with a final cell density of (5.6 ± 0.3) × 10⁹ CFU ml-1, that was used for all treatments. Twenty milliliters of liquid inoculum were left shaking at 25 °C for 24 h with 2 g of carrier material in 125 ml flasks. Treatments were prepared by thoroughly mixing inoculated carriers with 20 g soil or by mixing 20 ml liquid inoculum directly into soil, providing a final carrier application rate of 1%. Four replicate microcosms were prepared for each treatment soil in 200 ml plastic cups with drainage holes and foam tops to allow water and air flow. DNA was extracted from each replicate after the initial inoculation. Microcosms were weighed daily and watered with deionized water to maintain them at 60% field capacity. After 4 weeks, a second round of DNA extractions was performed on all replicate microcosms. The soil DNA extractions served as templates for qPCR used to quantify GFP gene copy numbers. DNA was extracted from 0.25 g of soil using the PowerSoil® DNA isolation kit from MoBio Laboratories with modifications. All extractions were tested for purity and concentration using a NanoDrop 1000. All qPCR reagents, protocols, and data analyses were performed within the standards outlined by the MIQE guidelines. Reactions were set up using the SsoAdvanced universal SYBR® Green supermix and were run on a MyiQ® Thermal Cycler. For the survival study, GFP primers, qPCR cycle conditions, and melt curve analyses were identical to those described in Chapter 2. All qPCR reactions involving sample DNA or control DNA templates were prepared in duplicate. All of the biochar materials tested here were shown to be useful as inoculum carriers for the PGPR strain E. cloacae UW5, but they also varied in their efficacy. This appeared to be based on differences in the chemical and physical properties of the individual biochars. Among the materials, Pine600 was identified as the best biochar for use as an inoculum carrier. It performed as well as the industry-standard carrier, peat moss, and its use resulted in higher sustained population densities than did vermiculite. All biochars tested performed as well as vermiculite, and none demonstrated detrimental effects on the UW5 population. Peat moss supported the highest cell density in samples analyzed after the 24 h inoculation procedure and also promoted the greatest survival after the 4-week soil incubation, which was slightly higher than that of Pine600.
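A generic way to convert the qPCR output described above into GFP gene copy numbers is via a standard curve of Cq versus log10(copies); the sketch below shows that conversion, with the slope and intercept values being purely hypothetical placeholders for the actual standard curve, which is not reproduced here.

```python
def copies_from_cq(cq: float, slope: float, intercept: float) -> float:
    """Convert a quantification cycle (Cq) to copy number using a standard curve
    fit as Cq = slope * log10(copies) + intercept (slope/intercept are placeholders)."""
    return 10 ** ((cq - intercept) / slope)

def amplification_efficiency(slope: float) -> float:
    """Standard relationship between the standard-curve slope and PCR efficiency."""
    return 10 ** (-1.0 / slope) - 1.0

# Hypothetical standard curve: slope = -3.32 (~100% efficiency), intercept = 38.0.
print(round(copies_from_cq(cq=24.7, slope=-3.32, intercept=38.0)))  # copies per reaction
print(round(amplification_efficiency(-3.32), 2))                     # ~1.0, i.e. ~100%
```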

This was associated with the high availability of labile carbon and the high nitrogen content of the peat. To identify specific characteristics related to the survival outcomes, the biochars were assessed based on several chemical and physical parameters. All characteristics analyzed were highly variable among the various biochar materials, which is consistent with previously reported findings. The pyrolysis temperature had the greatest effects on pH and SSA, whereas feedstock type largely determined the % WHC of the biochars. Biochar pH and C:N ratio had the greatest effect on initial GFP copy numbers, which reflect the direct effect of the carrier on the inoculum during preparation. The population density was fit to pH via a Gaussian distribution, which identified an optimal pH range for biochar as an inoculum carrier for the test strain. After inoculation, the Pine300 biochar, which had a pH of 4.63, the lowest of the biochars, also supported the lowest starting cell density. However, after 4 weeks in the soil, this material supported cell densities that were similar to those supported by the other biochars and vermiculite. Also, when cell densities were compared after the 4-week incubation, there was no correlation between biochar pH and survival. Hence, while the pH may have been initially influential, after application to the soil this effect was no longer detected. Other variables associated with higher initial population densities were related to nitrogen in the char: lower C:N ratios and higher N contents. Saranya et al. also observed a positive influence of nitrogen when testing the shelf life of Azospirillum lipoferum soil inoculants on various biochars. However, in the present study, there was no relationship between biochar nitrogen contents and cell densities after 4 weeks of incubation. We also noted that the top-performing carriers, Pine600 and peat, were moderately to strongly hydrophobic when tested as dry materials, yet they had high % WHCs. The hydrophobicity was assayed on dry materials, but % WHC values were obtained after 24 h of saturation. Hence, the hydrophobicity of the dry biochar does not appear to be a key concern when evaluating the utility of biochar as an inoculum carrier. This also indicates the importance of sufficient inoculation periods to ensure infiltration of the material if using liquid inoculum. Survival of the introduced PGPR strain UW5 after 4 weeks in non-sterile soil was strongly correlated with the C:N ratios of the biochar materials. Soil C:N ratios can influence soil microbial community composition and in particular have shown positive correlations with total phospholipid fatty acids (PLFAs). In agreement with this finding, a recent study demonstrated a positive relationship between the C:N ratio of biochar-amended soils and soil total PLFAs, and bacterial PLFAs in particular. However, Jindo et al. report a negative correlation between C:N ratio and bacterial biomass in biochar-compost mixtures. Altogether these findings indicate that biochar application will influence soil C:N ratios, and that C:N ratio will have an important effect on soil bacteria, but that this effect may be inconsistent across variable soil types. Several other parameters were related to week 4 survival when fit to Gaussian models. In particular, biochars having SSAs, pore-opening diameters, and % WHCs in the mid-ranges maintained greater UW5 population sizes.
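The Gaussian fits mentioned above (for example, cell density as a function of biochar pH, with an apparent optimum) can be reproduced with a standard nonlinear least-squares fit; the sketch below uses scipy.optimize.curve_fit on made-up (pH, density) pairs, since the underlying data are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amplitude, mu, sigma):
    """Simple Gaussian response curve: the peak (optimum) sits at x = mu."""
    return amplitude * np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

# Hypothetical (biochar pH, log10 GFP copies) pairs, for illustration only.
ph = np.array([4.6, 5.5, 6.8, 7.4, 8.2, 9.1, 10.3])
density = np.array([5.1, 6.0, 6.8, 7.0, 6.7, 6.1, 5.3])

params, _ = curve_fit(gaussian, ph, density, p0=[7.0, 7.0, 2.0])
amplitude, mu, sigma = params
print(f"fitted optimum pH ~ {mu:.2f}, width (sigma) ~ {sigma:.2f}")
```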

These physical characteristics depend on the surface structure of the biochar materials. Two of the biochars, Pit600 and Shell600, had the highest SSAs but did not result in improved inoculum survival. Previous research demonstrated that biochars prepared from the same feedstocks had increasing microporosity and SSAs with increasing final HTTs. These materials may have a large volume of nano- to micropores, which are not accessible to bacteria and thus do not reflect the functional carrier capacity of the material. In fact, macroporosity often makes up only a small portion of the total surface area on biochar particles. The pore-opening diameters will determine which fauna are excluded from the biochar interior pore space and whether these spaces are accessible to bacterial inoculants. Here we only visualized the pore-openings of the 300 °C biochars, based on the assumption that higher HTTs will have a significant effect on micro- to nanoporosity, which was measured by SSA, rather than on the macropores we visualized using ESEM. The materials closely resemble the feedstock at a cellular level, as has been reported previously. The biochars with pore-opening diameters between 26 and 46 µm were ideal. Pores in this size range could play a significant role in protecting pre-established colonies from predation. Overall, pretreatment of chars can change some of their chemical properties but, unless blocked, pore-openings are not easily distorted. Thus, the physical properties and surface features of a potential feedstock should be an important consideration when selecting a biochar-based carrier. The effect of PGPR on native microbial communities will significantly impact their utility as an agricultural soil amendment. To assess the bio-availability of residual phosphorus in biosolid-derived biochars, it is important to consider the mineral phosphate-solubilizing microbial community. In soils, phosphate is frequently complexed by calcium, iron, or aluminum, making it insoluble and unavailable to plants. This is also the case with the residual P in the biosolid-derived biochar studied here, where P is predominantly complexed as Al and possibly Ca phosphates.

Heatmaps were generated using the agglomerative clustering analysis in nSolver software

Wild resources are gathered for ritual purposes or to provide nutrients inadequately supplied by cultigens; other categories of use, such as medicinal plants, are of constant but low-intensity demand. In India today, parts of the acacia species Acacia nilotica are used as medicine, as are the leaves and juice of Chenopodium. Other medicinal plants include Acacia catechu, used as an astringent, Acacia leucophloea, which provides a medicinally useful gum, and Achyranthes aspera. Nineteenth-century documents from the Deccan region show that forest resources and other non-cultigens were also used as fodder, fuel, resins, dyes, and tannins, sources of lac and wax, and timber for crafts and structures. These flora would have been available within a 20-kilometer radius of Kaundinyapura in Early Historic times, when forest resources were more abundant than at present. The presence of Acacia arabica and Acacia cf. nilotica in the archaeobotanical record of Early Historic India confirms that these plants were known, available, and used. Acacia nilotica, also known as “Indian gum arabic,” is a multipurpose source of fuel as well as being suitable for load-bearing components like handles and cart-axles. Ethnographic observations show that the pods are eaten by cattle, goats, and sheep; the gum is used in printing and dyeing of cotton and silk; the bark can be treated to render a substitute for soap; and unripe pods are used for ink. Applying these examples as a model for the premodern era at Kaundinyapura, tannin extracted from Acacia leucophloea, Acacia nilotica, and Anogeissus latifolia could have been used to convert surplus domestic animals into hides suitable for exchange, while cloth from locally produced cotton could have been enhanced by dyes from wild plants such as Carthamus tinctorius as well as several Acacia species.

Agricultural workers are at increased risk for developing various respiratory diseases, including chronic bronchitis, asthma, and COPD, due in part to exposure to respirable organic dusts associated with these environments. Individuals who work in concentrated animal feeding operations, such as those housing swine, have appreciably increased risk for negative lung health outcomes. Therapeutic options for affected individuals are limited, with no current treatments to reverse the lung function decline associated with these ailments. Thus, novel treatment strategies that harness and/or promote reparative processes in the lung are necessary. It is increasingly appreciated that inflammation resolution is an active process regulated by a variety of pathways and mediators, some of which involve omega-3 (ω-3) and omega-6 (ω-6) polyunsaturated fatty acids (PUFA). As ω-3 PUFA are essential fatty acids that cannot be synthesized de novo by humans, dietary consumption of ω-3 PUFA dictates the tissue availability of these fatty acids and the mediators derived from them. In a typical Western diet, ω-3 PUFA intakes are below recommended guidelines, while ω-6 PUFA intakes are high. In contrast to ω-3 PUFA, ω-6 PUFA are metabolized into lipid mediators that are largely involved in the induction of inflammatory processes. Thus, individuals consuming a diet with a high ω-6:ω-3 PUFA ratio may be at increased risk for inadequate control of inflammatory processes, with increased substrate to produce pro-inflammatory lipid mediators and a dearth of substrate for the production of specialized pro-resolving mediators (SPM). We have recently assessed the efficacy of dietary supplementation with the ω-3 PUFA docosahexaenoic acid (DHA) in altering the lung inflammatory response and recovery following acute and repetitive organic dust extract (DE) exposure. Mice were fed a mouse chow supplemented with DHA for four consecutive weeks prior to challenge with a single DE exposure or repetitive DE challenge over 3 weeks. In these investigations, we identified impacts of a high-DHA diet on lung inflammation, including alterations in macrophage activation, that were overall protective against the deleterious impacts of DE exposure. However, these studies were limited in that they only assessed the impacts of one ω-3 PUFA, DHA, in male mice, and over a limited dietary regimen of 4–7 weeks.

Sex-specific differences in respiratory symptoms are observed among asthmatic individuals and agricultural workers: asthma is more common in women than in men, while respiratory symptoms are more prevalent in men than in women among farmers. To better assess the impacts of a high ω-3 PUFA diet on the lung inflammatory response to DE and achieve a total tissue ω-6:ω-3 PUFA ratio of ∼1:1, which is considered ideal, we have now utilized the Fat-1 transgenic mouse model to assess the sex-specific impacts of ω-3 PUFA on DE-induced inflammation. These mice express the Caenorhabditis elegans fatty acid desaturase gene that converts ω-6 PUFA to ω-3 PUFA, thus yielding an overall tissue ratio of ∼1:1. We hypothesized that use of this model would enhance the protective effects identified in initial studies utilizing only DHA supplementation, while also overcoming study limitations that plague fatty acid supplementation investigations, including ambiguous outcomes due to different fatty acid sources, purity, doses, and durations of supplementation. In addition, we have also tested a strategy to further enhance the efficacy of ω-3 PUFAs through the use of a therapeutic inhibitor of soluble epoxide hydrolase (sEH), an enzyme that metabolizes lipid mediators such as SPM into inactive or less active forms. Through these investigations, we have clarified a role for ω-3 PUFA in regulating the initiation of lung inflammation following DE inhalation and identified differentially regulated genes in repair and recovery following these exposures. These studies warrant consideration of ω-3 PUFA supplementation as a complementary therapeutic strategy for protecting against the deleterious lung diseases associated with environmental dust exposures, such as those experienced by agriculture workers. Settled dusts in closed swine confinement facilities were collected one foot above the ground and kept at −20°C. Dust extracts were prepared as previously described. Briefly, 5 g dust was mixed with 50 ml Hank’s Balanced Salt Solution at room temperature for 1 h. The mixture was then centrifuged at 2,500 rpm for 20 min at 4°C, the supernatant was centrifuged once more, and the resultant supernatant was sterile-filtered using a 0.22 μm filter. Extracts were aliquoted, labeled as 100% dust extract, and kept frozen at −20°C. A 12.5% DE solution was prepared for use in mouse intranasal instillations by diluting the 100% extract with sterile saline. Detailed analyses of the DE have been performed previously.

A previous study compared the immune response to agricultural dust administered via intranasal instillation with a 100 µg LPS challenge in mice; this LPS dose has been estimated to be approximately 250× more than the LPS in 12.5% DE. In this same study, the mean endotoxin level was reported to be 0.384 μg/ml. Given this finding, when we administer 50 µL DE via the intranasal route, this corresponds to approximately 20 ng LPS (0.384 μg/ml × 0.05 ml ≈ 19 ng). In addition, other studies report respirable LPS levels to be between 14–129 EU/mL. At the end of the three-week period, mice were euthanized, and the trachea of each mouse was cannulated to obtain bronchoalveolar lavage fluid (BALF). Collection of BALF consisted of three washes with 1 ml PBS each. All washes were centrifuged at 1,200 rpm for 5 min. While the first wash was kept separate, the second and third washes were combined before centrifugation. The supernatant from the first wash was aliquoted and stored at −80°C for cytokine profiling. The pelleted cells obtained from all the washes were combined and counted. Cytospin slides were prepared using 100,000 cells and stained with a Diff-Quik kit, and differential cell counts were obtained as described before. For histopathological assessments, lungs were inflated with 10% buffered formalin at 15 cm pressure. The same mouse lungs that were lavaged with PBS to obtain BALF were used for histology. Fixed lungs were transferred into 70% ethanol and then shipped to the UC Irvine Pathology Research Services Core for paraffin embedding, sectioning, and Hematoxylin and Eosin staining. The observer was blinded to the identity of each slide. A lymphoid aggregate was defined as a close aggregation of ≥20 lymphocytes. Alveolar cellularity was evaluated by the number of cells in the alveolar spaces in the lung parenchyma in a total of five images obtained throughout the whole lung using a 40× objective with 150% optical zoom. The resulting five values were averaged per tissue section. Each histopathological evaluation was expressed as a percentile, and a score between 0 and 4 was assigned for each percentile. A mouse NanoString Immunology panel was purchased for direct counting of 561 RNA transcripts using an nCounter Sprint Profiler. Each mouse lung was immediately put into 1 ml of RNAlater, kept at 4°C overnight, then stored in RNAlater at −80°C until RNA extraction. A total of three male mouse lungs per group, obtained from three independent studies, were thawed for RNA extraction. After lung samples were rinsed in sterile PBS, they were homogenized in 1 ml of Trizol using a 7 cm polypropylene pellet pestle in a microtube, and the extraction was then performed per the manufacturer’s instructions using a PureLink RNA mini kit. The RNA integrity number was obtained for each sample at the UC Riverside Institute for Integrative Genome Biology Core Facility using an Agilent Bioanalyzer 2100. Samples were prepared by a 16-h hybridization step of 50 ng RNA with the codeset probe provided in the Immunology panel. At the end of the hybridization, samples were diluted with nuclease-free water to 35 μL, and 32 µL of each sample was loaded onto an nCounter Sprint Cartridge. Given that each cartridge can hold up to 12 samples, a total of 24 samples were run on two cartridges. All samples passed the QC test without any QC flags. The data resulting from each run were combined and analyzed together using nSolver 4.0 and NanoString Advanced Analysis.
In the nSolver software, gene expression data were normalized using ten housekeeping genes that showed strong correlation with each other: Rpl19, Alas1, Ppia, Oaz1, Sdha, Eef1g, Gusb, Gapdh, Hprt, and Tbp.
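A common way to apply the housekeeping normalization described above is to scale each sample's counts so that the housekeeping geometric means are equalized across samples; the sketch below is a generic illustration of that calculation (not the internals of nSolver), using the ten genes listed and a toy count matrix.

```python
import numpy as np
import pandas as pd

HOUSEKEEPERS = ["Rpl19", "Alas1", "Ppia", "Oaz1", "Sdha",
                "Eef1g", "Gusb", "Gapdh", "Hprt", "Tbp"]

def housekeeping_normalize(counts: pd.DataFrame) -> pd.DataFrame:
    """counts: genes as rows, samples as columns (raw NanoString counts).
    Each sample is scaled so its housekeeping geometric mean matches the average."""
    hk = counts.loc[counts.index.intersection(HOUSEKEEPERS)]
    geo_means = np.exp(np.log(hk).mean(axis=0))   # per-sample geometric mean
    scale = geo_means.mean() / geo_means          # per-sample normalization factor
    return counts.mul(scale, axis=1)

# Toy matrix: two housekeeping genes plus one target gene, three samples (made-up counts).
toy = pd.DataFrame(
    {"s1": [1000, 500, 80], "s2": [2000, 1000, 90], "s3": [800, 400, 85]},
    index=["Rpl19", "Gapdh", "Il6"],
)
print(housekeeping_normalize(toy).round(1))
```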

For the advanced analysis, the raw data were normalized using at least three housekeeping genes whose expression correlated well with each other and were thus ideal for normalization, and a count threshold of 77 transcript counts, two times the highest background-to-noise ratio, was applied. The advanced analysis produced differential expression analysis, gene set analysis, and pathway scores. Differential expression outcomes identified the top 20 most upregulated genes among all the treatments, while gene set analysis displayed which pathways those upregulated genes are related to. To further explore protein-protein interactions among differentially regulated proteins, we queried the STRING database with the genes that were statistically significant based on unadjusted p-values. Genes with low counts were not included in any of the analyses. The lungs are continually exposed to harmful stimuli found in the air, including environmental dusts, diesel exhaust particles, and smoke. The ability of the airways to respond to these stimuli and repair damage caused by the exposures is vital to respiratory health, because unrepaired damage can lead to debilitating airway diseases. Long-term particulate matter exposures have been consistently linked to negative cardiovascular and lung health outcomes and increased mortality. Disease susceptibility caused by chronic inhalation of particulates is clearly evidenced by occupational exposures such as those seen in agriculture workers; exposures to livestock farming operations are consistently linked to increased respiratory symptoms and inflammatory lung disease not only in workers but also in individuals living in the surrounding communities, including children and adults. Approximately two-thirds of agriculture workers report respiratory disease; 50% of agriculture industry workers experience asthma-like symptoms, 25–35% of individuals working in concentrated animal feeding operations experience chronic bronchitis, and the prevalence of chronic obstructive pulmonary disease among agriculture workers is doubled compared to non-farming working control subjects. Curative options are not available for these workers, with current therapeutic options aimed primarily at symptom management and the prevention of lung disease. To improve treatment options for this population, studies investigating therapeutic mechanisms to stimulate endogenous lung tissue repair are warranted. To this end, we have assessed the impacts of a low ω-6:ω-3 PUFA total body tissue ratio on lung inflammation following repetitive exposure to inhaled environmental dusts, using a well-described mouse model of DE inhalation. In addition, we have explored the therapeutic utility of an sEH inhibitor, TPPU, in enhancing the impacts of high ω-3 PUFA tissue levels, including exploring its effects in regulating SPM levels during inflammation resolution.

Leaves inoculated with UV- or chemically treated TMGMV showed no visual signs of infection in any of the three species

The elution profile was consistent with native TMGMV; native and treated TMGMV particles eluted at ~8 mL from the Superose 6 column. As a complementary method, DLS was used to determine the hydrodynamic radius of TMGMV; DLS provides insight into the TMGMV formulation and its possible aggregation state, albeit as an estimated measure given the high-aspect-ratio shape of TMGMV. DLS revealed signs of particle breakage when UV-TMGMV was treated with high doses of UV light. There was a trend in which the average hydrodynamic radius of TMGMV decreased from 125 nm to 112, 102, 99, 91, and 78 nm with increasing UV doses of 0, 1, 5, 7.5, 10, and 15 J cm-2, respectively. DLS also revealed signs of particle aggregation in the βPL-TMGMV formulations; compared to native TMGMV, βPL-TMGMV recorded hydrodynamic radii between 165 and 215 nm in samples treated with 0, 100, 500, 750, 1000, and 1500 mM βPL. In contrast, formalin-treated TMGMV showed no signs of particle breakage or aggregation, with average lengths of 125 to 129 nm in samples treated with 0, 100, 250, 500, 750, and 1000 mM formalin. In TEM images, the polydispersity of TMGMV was previously reported and was attributed to the methods used to produce and purify TMGMV, as well as to prepare the TEM grid samples; during the drying process the particles are likely to break. TEM data concurred with the observations made by DLS. While the native TMGMV averaged a size of 180 ± 76 nm, the UV-TMGMV revealed minor signs of breakage, and Form-TMGMV retained its structural integrity. βPL-TMGMV did not show signs of aggregation but rather formed head-to-tail self-assembled filaments. This phenomenon was previously reported for TMV assisted by aniline polymerization and was attributed to a combination of hydrophobic interactions and electrostatic forces between the dipolar ends of adjacent particles. We hypothesize that the acylation and alkylation of amino acid residues toward the opposite ends of TMGMV promote such interactions. Next, we assessed the RNA state after UV, βPL, and formalin treatment.
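Before turning to the RNA analysis, the dose-dependent shortening reported above can be summarized with a simple linear fit; the sketch below treats the hydrodynamic radii quoted in this section as illustrative point estimates (no replicate or error structure is implied).

```python
import numpy as np

# UV dose (J cm-2) and mean hydrodynamic radius (nm), as quoted in the text.
dose = np.array([0.0, 1.0, 5.0, 7.5, 10.0, 15.0])
radius = np.array([125.0, 112.0, 102.0, 99.0, 91.0, 78.0])

# Ordinary least-squares line: radius ~ slope * dose + intercept.
slope, intercept = np.polyfit(dose, radius, deg=1)
print(f"~{abs(slope):.1f} nm decrease in apparent radius per J cm-2 "
      f"(intercept ~{intercept:.0f} nm)")
```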

TMGMV contains a positive-sense, single-stranded RNA genome of 6355 nucleotides with more than 400 sites of adjacent uracils prone to dimerization. Overall, UV-visible spectroscopy indicated that the RNA-to-protein (260:280) ratio of βPL-TMGMV and Form-TMGMV remained close to 1.2, indicating no degradation or loss of RNA, as expected. UV-TMGMV showed an increase in the 260:280 ratio from 1.2 to 1.3. We attribute this change to coat protein breakage, as observed in the gel electrophoresis experiments. SDS-PAGE gels were imaged following staining for proteins and nucleic acid under white light and UV light. While the coat proteins of TMGMV are ~17 kDa in size, a second protein band was observed in the UV-treated samples, and its intensity increased with UV dosage. It should be noted that free coat protein was not detectable by SEC; therefore, the smaller coat protein may be partially broken yet still assembled in the nucleoprotein complex. We attempted to identify the amino acid sequences of the ~14 kDa and ~17 kDa bands by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry; however, we were unable to clearly resolve the bands and thus could not obtain pure samples for analysis. Denatured βPL-TMGMV coat proteins showed no sign of protein breakage or aggregation regardless of the dose of βPL used during the treatment. In contrast, the higher the dose of formalin, the more inter-CP crosslinking was observed, as indicated by the presence of an additional band of high molecular weight. GelRed staining of the RNA content of TMGMV particles revealed no significant changes in RNA mobility in βPL-TMGMV and Form-TMGMV samples, but signs of RNA breakage in samples treated with UV doses above 1 J.cm-2. The genome content of each formulation was further analyzed on native agarose gels following RNA extraction from the TMGMV formulations. We observed that treatment doses higher than 1 J.cm-2 of UV, 10 mM βPL, or 100 mM formalin led to significant RNA damage and a decrease in total RNA recovery. Based on these biochemical data, we hypothesized that a minimum of 5 J.cm-2 of UV light, 100 mM βPL, or 500 mM formalin would be required to inactivate TMGMV; at these doses, the overall structural integrity of the particles was maintained, but RNA damage was visible.
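The 260:280 ratio referred to above is simply the ratio of absorbance at 260 nm (nucleic acid) to absorbance at 280 nm (protein). The sketch below computes it from hypothetical absorbance readings chosen to mirror the ~1.2 and ~1.3 values reported in the text.

```python
# Minimal sketch: the 260:280 nm absorbance ratio used above to track the
# RNA-to-protein balance of a virus preparation. Absorbance values are hypothetical.
def a260_a280_ratio(a260: float, a280: float) -> float:
    """Return the 260/280 absorbance ratio (dimensionless)."""
    if a280 <= 0:
        raise ValueError("A280 must be positive")
    return a260 / a280

native = a260_a280_ratio(0.60, 0.50)      # ~1.2, intact nucleoprotein
uv_treated = a260_a280_ratio(0.65, 0.50)  # ~1.3, consistent with coat-protein breakage
print(f"native: {native:.2f}, UV-treated: {uv_treated:.2f}")
```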

Tn86, Samsun-NN, and TSA were seeded and maintained in a greenhouse and challenged with native or UV/chemically treated TMGMV when the plants were about 30 days old; fully developed new leaves were mechanically inoculated by gently abrading them with a cotton swab dipped in native or inactivated TMGMV. Five plant replicates were inoculated for each treatment condition, in addition to a negative control. Leaves were imaged and harvested individually ~20 days post-inoculation. In addition to visual inspection for symptoms, RT-PCR was carried out on the total RNA extracted from individual leaves to further confirm the presence or absence of TMGMV infection. Three leaves per treatment condition were selected randomly and analyzed by RT-PCR. This method is a more sensitive assay than visual inspection of the leaves; for example, visual inspection may indicate a lack of apparent infection at 5 J.cm-2 of UV, 500 mM βPL, or 500 mM formalin in any of the plant species tested, yet at these doses the leaves were TMGMV-positive in Tn86 and TSA. Agarose gel electrophoresis confirmed that the inactivating UV dose was consistent among the three plant species tested. While 750 mM βPL was enough to inactivate TMGMV in Tn86 and Samsun-NN, 1500 mM was required to prevent TMGMV infection in the hypersensitive TSA. Therefore, one could inactivate TMGMV using 750 mM βPL and still use it as a bioherbicide with high specificity against TSA, which may be an interesting extension of the current formulation. Formalin was the least consistent treatment modality and required doses of 1000 mM, 250 mM, and 750 mM to achieve inactivation in Tn86, Samsun-NN, and TSA, respectively. Overall, the treatment doses required to prevent infection in all three plant species were 10 J.cm-2 UV, 1.5 M βPL, and 1 M formalin. However, given the variability of the formalin dose needed to achieve inactivation, this may be the least favorable option for commercialization. All three treatment modalities have their own advantages and disadvantages for producing inactivated TMGMV for safe agricultural and environmental applications. UV treatment is the cheapest, fastest, and most reproducible inactivation modality, but it leads to shortening of the particles; 10 J.cm-2 UV-TMGMV particles are on average 30 nm shorter than native TMGMV.

In contrast, βPL maintains particle integrity, although it leads to end-to-end alignment of TMGMV; furthermore, βPL is an expensive and biohazardous chemical, and the treatment requires additional purification steps that reduce yields by 40-60%. Similarly, formalin maintains particle integrity but requires a long treatment incubation; the additional purification steps required to remove the treatment reagents also come at the cost of lower yields. Lastly, formalin treatment gave the least consistent inactivation results among the different plant species and therefore may require careful optimization for each species of interest. Altogether, UV inactivation may be the most suitable; it could easily be integrated into the purification process. As previously mentioned, the inactivation of TMGMV by UV light was reported in the 20th century using the focal lesion quantification method [267,271]. These studies used different sources of UV light with various intensities and power settings, which makes their results difficult to compare. In addition, the time of UV exposure was recorded to assess UV inactivation rather than the more accurate dose in J.cm-2; for example, Ginoza et al. reported full inactivation of TMGMV after 2 min of UV exposure, while Streeter et al. stated that a 6 min exposure was required. Using our system, 2 min and 6 min of UV exposure would correspond to ~1 and ~2.5 J.cm-2, respectively. At these doses, the leaves would appear symptomless but RT-PCR revealed the presence of infectious TMGMV. The plant virus cowpea mosaic virus (CPMV) has been shown to be inactivated at UV doses of 2.5 J.cm-2. CPMV is a bipartite ssRNA virus that forms a 31 nm icosahedron with pseudo T=3 symmetry. The difference in the UV dose required to yield inactivated virus preparations can be explained by differences in virus structure and assembly: CPMV's ssRNA genome is encapsulated within the internal cavity of the capsid, whereas TMGMV's genome is incorporated into the nucleoprotein assembly; the TMGMV RNA is therefore somewhat buried in the coat protein structure, which likely confers enhanced stability. The reported inactivation of mammalian viruses such as influenza, HIV, and hepatitis A required lower doses, most likely due to a higher propensity of the uracils in their genomes to dimerize. βPL and formalin are more commonly used to produce non-virulent mammalian virus vaccines. Compared to plant viruses, many mammalian viruses have a lipid envelope that can be crosslinked by formalin or acylated/alkylated by βPL; thus they generally require lower treatment doses for inactivation. For example, equine herpesvirus type 1 [279], eastern equine encephalitis and poliomyelitis type II [280], HIV [281], and influenza virus were successfully inactivated with 5-60 mM βPL. Hepatitis A, Japanese encephalitis virus, HIV [281], influenza A virus, and rabies virus were also successfully treated with 5-120 mM formalin. It is the structural integrity of TMGMV that makes it attractive for nanoengineering and environmental applications; however, these same features make it harder, yet not impossible, to generate inactivated TMGMV preparations, and the required doses are roughly 10-fold higher than those used in mammalian vaccine development.
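The time-to-dose conversion mentioned above follows from dose (J.cm-2) = irradiance (W.cm-2) × exposure time (s). The sketch below assumes a hypothetical lamp irradiance of ~8 mW.cm-2, chosen only so that a 2 min exposure lands near the ~1 J.cm-2 figure quoted in the text; actual UV sources vary in intensity, which is precisely why the older time-based reports are hard to compare.

```python
# Sketch of the dose conversion: dose (J/cm^2) = irradiance (W/cm^2) x time (s).
# The irradiance below is an assumed value for illustration; real lamps vary.
IRRADIANCE_W_PER_CM2 = 0.008  # 8 mW/cm^2, hypothetical

def uv_dose(minutes: float, irradiance: float = IRRADIANCE_W_PER_CM2) -> float:
    """Convert an exposure time in minutes to a UV dose in J/cm^2."""
    return irradiance * minutes * 60.0

for t in (2, 6):
    print(f"{t} min -> ~{uv_dose(t):.1f} J/cm^2")
```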

Nanoparticle carriers are used to target chemotherapies and immunotherapies to tumors, increasing tissue specificity and effective payload delivery while reducing systemic adverse effects. Most nanoparticle-encapsulated cancer therapeutics are delivered to the tumor site by exploiting the local tumor environment, which combines leaky vasculature with deficient lymphatic clearance, i.e., enhanced permeability and retention. Some strategies also exploit the targeting of disease-specific molecular signatures, although as yet no targeted nanoparticle has been translated into clinical treatment. If a target site can be identified, then carrier diffusion and the distribution of the delivered payload are critical to treatment success. Nanoparticles injected into the systemic circulation target either the vasculature or the periphery of the tumor. Limited nanoparticle-carrier diffusion can prevent drug accumulation to a lethal concentration in the tumor tissue and therefore promote cancer cell survival. Surviving cancer cells often become more aggressive and develop a drug-resistant phenotype. Here, I develop the basis for a quantitative analysis of nanoparticle diffusion and uptake in a solid tumor. Nanoparticle size and shape, as well as surface chemistry, determine the fate of the carrier and its efficacy. A growing body of data shows increased tumor homing and tissue penetration with elongated, rather than spherical, nanomaterials. Elongated, rod-shaped or filamentous nanoparticles have enhanced margination and increased transport across tissue membranes. Geng et al. demonstrated that virus-like filomicelles with higher aspect ratios than spherical particles deliver the chemotherapeutic drug paclitaxel to human-derived tumor xenografts in mice more effectively and with increased efficacy. Chauhan et al. compared the intratumoral diffusion of bio-stable colloidal quantum dots as nanorods and nanospheres with identical charge and surface coating. Nanorods penetrated tumors 4.1 times faster than nanospheres of the same hydrodynamic radius and occupied a tumor volume 1.7 times greater. Correspondingly, we found that filamentous potato virus X, compared to spherical cowpea mosaic virus, has enhanced tumor homing and tissue penetration, particularly in the core of the tumor. Contradictory results were obtained by Reuter et al., who compared sphere-like and rod-shaped nanogels made using PRINT technology. They observed that smaller nanospheres had 5-fold greater tumor accumulation compared to higher-aspect-ratio nanorods. I hypothesize that this difference may be due to the different tumor model used. It has been previously shown that differences in tumor vasculature affect shape-dependent nanoparticle extravasation. In addition, other factors may have influenced the results, such as differences in surface charge and aspect ratio. Therefore, there is a need to investigate the mechanics of diffusion and accumulation of high-aspect-ratio nanoparticles within the tumor microenvironment.
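As a back-of-the-envelope starting point for such an analysis, the sketch below estimates a diffusion coefficient from the Stokes-Einstein relation and a characteristic one-dimensional diffusion length, assuming spherical particles in water at body temperature. This is a generic physics estimate, not the model developed in this work: the tumor interstitium is far more hindering than water, and shape effects on margination and penetration are not captured.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def stokes_einstein_D(radius_m: float, temp_K: float = 310.0,
                      viscosity_Pa_s: float = 1.0e-3) -> float:
    """Diffusion coefficient (m^2/s) of a sphere of given hydrodynamic radius."""
    return K_B * temp_K / (6.0 * math.pi * viscosity_Pa_s * radius_m)

def diffusion_length_um(D: float, seconds: float) -> float:
    """Characteristic 1D diffusion length sqrt(2*D*t), in micrometres."""
    return math.sqrt(2.0 * D * seconds) * 1e6

for r_nm in (10, 50, 150):   # hypothetical hydrodynamic radii
    D = stokes_einstein_D(r_nm * 1e-9)
    print(f"r = {r_nm:3d} nm: D = {D:.2e} m^2/s, "
          f"~{diffusion_length_um(D, 3600):.0f} um in 1 h (in water)")
```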

NK105 demonstrated efficacy in patients with advanced gastric cancer who had failed to respond to chemotherapy

To our knowledge, only one PEGylated drug has been approved for veterinary applications. This is Imrestor, a PEGylated granulocyte colony-stimulating factor, which was approved in 2016 to increase the number of circulating neutrophils in cows and thus prevent mammary tissue inflammation (mastitis). Although PEGylated drugs have been successfully translated to the clinic, a growing body of literature has highlighted the increased presence of PEG-specific antibodies in the general population due to the extensive use of PEG in cosmetic and pharmaceutical products, correlating with declining therapeutic efficacy of PEGylated active ingredients. This issue is being addressed by the development of alternative polymer-drug conjugates. In the agricultural industry, polymeric seed coatings are used to control pests and diseases that would otherwise inhibit germination and growth. Coating seeds increases their viability, reduces the risk of the active ingredient leaching into the environment, and minimizes off-target toxicity to other organisms compared to free pesticides. More than 180 coating formulations have been reported, including chitosan, polyvinyl acetate, polyvinyl alcohol, PEG, ethyl cellulose, and methyl cellulose. On the market, the majority of seed coating technologies have been developed by Bayer Crop Science, BASF, Corteva, Monsanto, Syngenta, Incotec/Croda, and Germains. Micelles are composed of amphiphilic surfactant molecules that spontaneously aggregate into spherical structures in an aqueous environment. This self-assembly occurs only when the surfactant concentration exceeds the critical micelle concentration. The core of the micelle is hydrophobic and can sequester hydrophobic active ingredients. The size of the micelle, and therefore the amount of active ingredient that can be loaded into its core, depends on the molecular size, geometry, and polarity of the surfactant.
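The assembly condition described above reduces to a one-line check. The sketch below is purely illustrative, with a hypothetical CMC value and surfactant concentrations.

```python
# Minimal sketch of the assembly condition described above: micelles only form once
# the surfactant concentration exceeds its critical micelle concentration (CMC).
def forms_micelles(surfactant_mM: float, cmc_mM: float) -> bool:
    """True if the surfactant concentration is above the CMC."""
    return surfactant_mM > cmc_mM

for conc in (0.5, 1.2, 5.0):   # hypothetical concentrations, mM
    print(f"{conc:4.1f} mM with CMC 1.2 mM -> micelles: {forms_micelles(conc, 1.2)}")
```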

The small size of polymeric micelles reduces their recognition by scavenging phagocytic cells and inter-endothelial cells located in the liver and spleen, respectively, and therefore increases the bioavailability of the active ingredient. Most micelles are made of block co-polymers with alternating hydrophilic and hydrophobic segments, and the ratio of drug molecules to block co-polymers determines their properties. Micelles are often composed of PEG, PLA, PCL, polypropylene oxide, poly-L-lysine, or combinations of the above. Estrasorb was approved by the FDA in 2003 as a topical lotion and consists of micelles designed for the transdermal delivery of 17β-estradiol to the blood for the treatment of menopause-related vasomotor symptoms. This administration route evades first-pass metabolism, achieving stable levels of 17β-estradiol in the serum for 14 days. Furthermore, paclitaxel and docetaxel are commercially available as micellar nanocarrier formulations, avoiding the use of Kolliphor EL as a solvent. Various micellar nanocarriers are currently undergoing clinical trials. For example, NK012 is a micellar polyglutamate-PEG formulation covalently bound to the antineoplastic topoisomerase inhibitor SN-38 via an ester bond. SN-38 is slowly released from NK012 by hydrolysis of the ester bond under physiological conditions, which increases the SN-38 half-life to 210 h. NK012 is undergoing clinical trials for the treatment of solid tumors, triple-negative breast cancer, colorectal cancer, and small-cell lung cancer. Similarly, the NK105 micelle is being investigated for the delivery of paclitaxel in breast cancer, gastric cancer, and non-small-cell lung cancer. NK105 polymers consist of PEG as the hydrophilic segment and modified polyaspartate as the hydrophobic segment. Genexol-PM is a micellar nanocarrier consisting of mPEG-block-D,L-PLA for the delivery of paclitaxel in the treatment of non-small-cell lung cancer, hepatocellular carcinoma, urothelial cancer, ovarian cancer, and pancreatic cancer.
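Treating the 210 h figure quoted above as an effective first-order half-life for release of the ester-linked drug (a simplifying assumption; the source describes slow hydrolysis but not the kinetic model), the fraction of SN-38 still carrier-bound after a given time can be sketched as follows.

```python
def fraction_bound(t_hours: float, half_life_hours: float = 210.0) -> float:
    """Fraction of ester-linked drug still attached, assuming first-order hydrolysis."""
    return 0.5 ** (t_hours / half_life_hours)

for t in (24, 72, 210, 420):
    print(f"after {t:4d} h: {fraction_bound(t) * 100:.0f}% still carrier-bound")
```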

Genexol-PM was shown to behave similarly to the FDA/EMA-approved nanocarrier Abraxane and has been approved for the treatment of metastatic breast cancer and advanced non-small-cell lung cancer in South Korea. NC-6004 is being investigated for the delivery of cisplatin in head and neck cancer as well as non-small-cell lung cancer, and it demonstrated a significant reduction in cisplatin-induced neurotoxicity and nephrotoxicity. Micelles are also being investigated for the treatment of cystic fibrosis, metabolic syndrome, psoriasis, and rheumatoid arthritis. In veterinary medicine, a randomized trial was initiated in 2013 to investigate the safety and efficacy of micellar paclitaxel for the treatment of dogs with grade II or III mast cell tumors. The micelle consisted of a surfactant derivative of retinoic acid. Dogs treated with micellar paclitaxel showed a three-fold higher treatment response compared to a control group receiving the standard-of-care drug lomustine. However, the FDA conditional approval of Paccal Vet-CA1 was withdrawn in 2017 by the manufacturer Oasmia Pharmaceutical AB to allow time to study lower doses in order to reduce adverse effects such as neutropenia, hepatopathy, anorexia, and diarrhea. In a different application, a micellar vitamin E formulation was tested as an antioxidant in racehorses undergoing prolonged aerobic exercise to prevent exercise-induced oxidative lesions, and it maintained the general oxidative status at a healthy level for horses undergoing intensive training. Micelles have also been developed as promising nanocarriers for the encapsulation of pesticides, helping to prevent adsorption to soil particles. Examples include the micellar encapsulation of azadirachtin, carbendazim, carbofuran, imidacloprid, rotenone, thiamethoxam, and thiram. These formulations are still undergoing development and have been tested in vitro and in the field. Inorganic nanocarriers include natural and synthetic materials based on silica, clay, and metals such as silver, gold, titanium, iron, copper, and zinc. These nanocarriers are physiologically compatible, resistant to microbial degradation, and environmentally friendly, which makes them suitable for medical, veterinary, and agricultural applications. Even so, their use as nanocarriers has been somewhat overshadowed by their success in other medical applications.

In particular, metallic nanoparticles have been developed as theranostic and photothermal reagents, and for the treatment of iron deficiency. The first formulation approved by the FDA, in 1974, was iron dextran for the treatment of iron deficiency. Eight more formulations have since been approved by the FDA or EMA. We do not consider these formulations as nanocarriers because their treatment modalities rely entirely on the nanoparticle itself without a cargo of active ingredients. However, metallic nanocarriers have recently been proposed in which the active ingredient is attached to the surface by physical adsorption, electrostatic interactions, or conjugation. Gold nanoparticles in particular allow the conjugation of many biological ligands, including DNA and siRNA. Thus far, only one clinical trial has been carried out using metallic nanocarriers, namely spherical nucleic acid gold nanoparticles for the delivery of siRNA to patients with glioblastoma or gliosarcoma. More advanced metallic nanocarriers are under development, including particles that respond to external triggers such as light, magnetic fields, and hyperthermia to release their cargo in a controlled manner. For example, gold and silver nanoparticles have been conjugated to various cancer drugs. Mesoporous silica nanocarriers (MSNs) have been investigated extensively because they are stable particles with a high payload capacity due to their porous structure, they have a tunable pore diameter, and surface modifications can impart new functionalities such as targeted delivery. MSNs have already been tested in the laboratory to deliver cancer drugs such as doxorubicin and camptothecin, antibiotics such as erythromycin and vancomycin, and anti-inflammatories such as ibuprofen and naproxen, with remarkably high loading rates of up to 600 milligrams of cargo per gram of silica. This loading capacity of up to 60% far exceeds that of liposomal and polymeric nanocarriers; for example, the liposomal formulation Doxil and the polymeric formulation Eligard achieve loading capacities of 31% and 27%, respectively. However, some silica nanoparticle formulations have been shown to cause hemolysis due to strong interactions between silanol groups on the carrier and phospholipids in the erythrocyte plasma membrane. Another concern is their persistence in vivo due to the absence of renal clearance. These issues could be addressed by modifying the surface chemistry or applying coatings. In an agricultural context, silica is already highly abundant in soil, and such particles could therefore be engineered for the controlled release of active ingredients without the carrier itself causing environmental harm. For example, MSNs have been used to deliver the insecticide chlorfenapyr over a period of 20 weeks, which doubled the insecticidal activity in field tests. The fungicide metalaxyl was also loaded into MSNs, allowing its slow release in soil and water over a period of 30 days. Similarly, nanocarriers based on naturally occurring aluminum silicates have been formed into phyllosilicate sheets for the intercalation of antibiotics and herbicides, allowing sustained delivery. Several metallic nanoparticles have demonstrated antimicrobial properties, and the EPA has already approved silver nanoparticles for use as an antimicrobial agent in clothing, but not yet for the delivery of active ingredients.
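The loading figures quoted above depend on the convention used, so the small sketch below makes the arithmetic explicit: 600 mg of cargo per gram of silica corresponds to 60% when loading is expressed per unit carrier mass, and about 37.5% when expressed per unit total formulation mass. The values reuse the numbers from the text.

```python
# Sketch of the loading-capacity arithmetic quoted above.
def loading_per_carrier_pct(cargo_mg: float, carrier_mg: float) -> float:
    """Cargo mass as a percentage of carrier mass."""
    return 100.0 * cargo_mg / carrier_mg

def loading_per_total_pct(cargo_mg: float, carrier_mg: float) -> float:
    """Cargo mass as a percentage of total formulation mass (cargo + carrier)."""
    return 100.0 * cargo_mg / (cargo_mg + carrier_mg)

cargo, silica = 600.0, 1000.0  # mg cargo per g of mesoporous silica, as reported
print(f"per carrier: {loading_per_carrier_pct(cargo, silica):.0f}%")   # ~60%
print(f"per total:   {loading_per_total_pct(cargo, silica):.1f}%")     # ~37.5%
```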
Finally, carbon nanotubes are also being investigated for medical and agricultural uses because their shape and surface chemistry confer unique properties, although their toxicity remains a translational barrier. I recommend the following reviews for further information. Over the course of evolution, nature has yielded a variety of biomaterials with a structural complexity that remains difficult to emulate.

The analysis of such complexity requires appropriate molecular methods, and for this reason the development of proteinaceous nanocarriers has lagged behind that of the simpler liposomal, polymeric, and micellar structures. The production of proteinaceous nanocarriers has also required the development of tools for the expression of recombinant proteins and strategies for creating diversity, such as directed evolution, genome editing, and synthetic biology. These tools have allowed the production of hierarchically organized proteinaceous structures, including albumin nanoparticles, heat shock protein cages, vault proteins, and ferritins. These comprise repeated protein sub-units forming highly organized nanostructures that are identical in size and chemical composition. Although synthetic nanoparticles can also be assembled into complex structures, the sophistication and monodispersity that can be achieved with proteins has yet to be replicated. Proteinaceous nanoparticles have been used as biocatalysts for the synthesis of novel materials, but they are also useful for the delivery of active ingredients in medicine and agriculture. The first proteinaceous nanocarriers were developed to mimic the properties of plasma proteins, thus increasing circulation times and reducing systemic side effects. In 2005, the FDA approved the proteinaceous nanoshell Abraxane, consisting of albumin-bound paclitaxel, for the treatment of breast cancer. The conjugation of paclitaxel to albumin stabilized the drug even in the absence of Kolliphor EL and enhanced the uptake of the active ingredient compared to the Kolliphor EL formulation. Given the safety and efficacy of drugs conjugated to albumin, two other albumin nanocarriers are undergoing clinical trials. The first is an albumin conjugate of the protein kinase inhibitor rapamycin, indicated for colorectal cancer, bladder cancer, glioblastoma, sarcoma, and myeloma. The second is an albumin conjugate of docetaxel, indicated for the treatment of prostate cancer. Albumin has a long circulation half-life due to its interaction with the recycling Fc receptor. It is beneficial for the delivery of small molecules that are unstable or have low solubility in blood, as well as proteins and peptides that are rapidly cleared from the circulation. Small molecules can be chemically fused to albumin and administered as a conjugate, and strategies to target small-molecule drug cargoes to albumin in vivo have also been developed. Heat shock protein cages, vault proteins, and ferritins have also been investigated for the delivery of active ingredients, although no clinical trials have been reported thus far. Heat shock proteins are chaperones that promote the folding of newly synthesized proteins and the refolding of denatured ones, which means they are naturally stable and possess channels and cavities for the sequestration of cargo. There are five families of heat shock proteins: Hsp100, Hsp90, Hsp70, Hsp60, and the small heat shock proteins, which range in size from 12 to 43 kDa. Heat shock proteins assemble into large complexes that vary in size and shape, and they can be engineered to carry and deliver active ingredients such as doxorubicin. Vault nanoparticles are barrel-like ribonucleoproteins found in many eukaryotes. They are 41 x 73 nm in size and resemble the vaulted ceiling of a Gothic cathedral.
Their precise biological function remains unknown, although they are thought to play a role in nuclear transport, immunity, and defense against toxins. Several proteins have been encapsulated in vault nanocarriers, including the lymphoid chemokines CCL19 and CCL21, the New York esophageal squamous cell carcinoma 1 antigen, the precursor of adenovirus protein VI, the major outer membrane protein of Chlamydia trachomatis, and the egg storage protein ovalbumin. Vault Pharma is one company specializing in the development of these structures. Finally, ferritin is an iron-storage protein with 24 subunits that self-assemble into a spherical cage structure 12 nm in diameter with a molecular mass of 450 kDa.

There is also geographic variation in lactase persistence phenotypes that complicates the pattern here

Although there is evidence for early and rapid domestication of pigs in the lower Yangtze, the adoption of the domesticated animals commonly used in dairying did not occur until the late Holocene. The prevalence of lactase persistence phenotypes within China remains low today, although frequencies are slightly higher in the North. The evidence for long-term dietary change within South Asia is particularly complex, with considerable spatial and temporal variation. The earliest pottery and domestic rice are present by 9 kya, but evidence for significant sedentary villages and agricultural dependence occurs only after 4 kya, following the mid-Holocene movement of crops from both Western Eurasia and China. There is evidence for the independent domestication of cattle in the Indus region ca. 7 kya and for convergent evolution of lactase persistence in South Asia, with the highest frequencies in the northwestern parts of the region but very low frequencies in the southern and eastern areas of the Indian subcontinent. It is also notable that Indian pastoralists maintain greater stature than higher-caste individuals, which has been attributed to milk consumption. The question of body size variation as a reflection of diet and health in the past has been of long-standing interest to bio-archaeologists. While documented declines in Neolithic estimated statures have been linked to lower predicted statures based on genetics, adult body size also reflects developmental plasticity and life history variation. Stature itself is not really a trait but rather a consequence of growth, which ultimately reflects variation in strategies for energy allocation throughout development. Body mass likewise indicates investment in lean and fat tissue, although unlike stature these can respond to ecological stresses throughout adult life. Improved growth is generally a good marker of health because many aspects of somatic maintenance benefit from better growth in early life, whereas defense against pathogens and early reproduction reduce the energy available for linear growth and lean tissue deposition.

Applying a life history perspective to growth provides insights into the likely role of infectious disease and pathogens in reductions in stature in prehistory, as there are multiple routes to the adult phenotype which extend beyond diet and include the allocation of energy to immune function or reproduction, potentially mediated by fat deposition. A comparison of trends in stature across the past 10 kya in other regions is presented in Fig. 3. Southern Europe is characterized by a general decline between 10 and 6 kya, followed by relative stability through the mid-Holocene. In Central Europe, there is a marked and significant increase in male stature between 8 and 5 kya, and a general increase in female stature across the same time frame. Both males and females in Northern Europe are also characterized by a general increase in stature from 7 kya, with males peaking ca. 3 kya and females ca. 2 kya. Stature trends across the same time frame in the Nile Valley are more variable and show no specific long-term trends, while in China statures are generally consistent throughout the Holocene, except for a decline among females after 3 kya. A contrasting pattern is observed in South Asia, where there is a significant decline in both male and female stature throughout the Holocene. A regional comparison of Holocene body mass trends illustrates a consistent pattern of initial decline in Southern Europe followed by a period of relative stability. In Central Europe, there is a general increase in male body mass between 4.5 and 2 kya, while female mass is relatively stable throughout the Holocene. In Northern Europe, there are early Holocene declines in both male and female body mass that reach their low point approximately 5 kya and are followed by increases by 2 kya. When these patterns are contrasted with other regions, we see relatively little change in the Nile Valley, while in South Asia male body masses increase in the first half of the Holocene during a period when female mass appears to decline. Here, estimated male masses fall considerably after 4 kya.

In China, there appears to be relative stability in body mass in the early part of the Holocene, followed by increases among males from 5 to 2 kya and among females from 3 to 1 kya. Noting that the most significant long-term increases in stature occur in Central and Northern Europe, where there is evidence for strong selection acting upon lactase persistence during the mid-Holocene, we consider specific trends in sub-regions of Northern Europe (Britain, southern Scandinavia, and the eastern Baltics) over the past 8,000 y. In Britain, there is relatively minor and non-significant temporal variation in stature through time, while male body mass generally increases from ca. 5 to 2 kya. In Scandinavia, there are marked and significant increases in male stature between 7 and 4 kya. Body masses in the region are consistent among early Holocene males, but females show a decrease through the mid-Holocene. Both sexes show increases in body mass between 5 and 2 kya, though the trend is more pronounced among females. In the Baltics, increases in stature are expressed in both males and females between 6 and 2 kya, while body masses are relatively consistent throughout the Holocene. To investigate the spatiotemporal patterning of body size variation throughout Europe in greater detail, we generated heat maps of mean statures and body masses. The results demonstrate fairly uniform stature across Europe before 10 kya and a general decline between 10 and 6 kya, followed by increases that are most pronounced in Northern Europe and southern Scandinavia. Body mass trends follow a broadly similar pattern, with much of Europe characterized by estimated body masses above 65 kg before 10 kya, followed by declines in much of Western Europe through to 6 kya. Increases in body mass are observed in Central Europe from 6 to 4 kya and across most of Northern Europe from 4 kya to the present. The period from 10 to 6 kya predates the transition to agriculture in central and northern regions but includes hunter-gatherers, farmers, and others with variable or transitional subsistence strategies, suggesting that further analyses with an expanded dataset are required to contextualize this trend.
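The heat maps described above boil down to grouping individual skeletal estimates by grid cell and time slice and averaging. The sketch below illustrates that binning step with randomly generated, entirely hypothetical records; the real analysis uses the published body-size dataset and finer methodological controls.

```python
import numpy as np
import pandas as pd

# Hypothetical skeletal records: location, date (kya), and estimated stature (cm).
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "lat": rng.uniform(36, 70, n),
    "lon": rng.uniform(-10, 30, n),
    "kya": rng.uniform(0, 12, n),
    "stature_cm": rng.normal(168, 7, n),
})

# Assign each record to a 2-degree grid cell and a 2,000-year time slice,
# then average stature within each (time slice, cell) to build the heat map values.
df["lat_bin"] = (df["lat"] // 2) * 2
df["lon_bin"] = (df["lon"] // 2) * 2
df["period"] = pd.cut(df["kya"], bins=[0, 2, 4, 6, 8, 10, 12])

heat = (df.groupby(["period", "lat_bin", "lon_bin"], observed=True)["stature_cm"]
          .mean()
          .unstack("lon_bin"))
print(heat.head())
```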

In this study, we investigated long-term trends in human stature and body mass relative to late Pleistocene and Holocene cultural change in seven different regions. We analyzed data by chronological and geographical information rather than cultural labels, given the significant spatiotemporal and regional variation in the cultural characteristics attributed to terms such as the Neolithic, opting instead to discuss the broader timescales over which the transition to domesticated plants and animals took place. The results demonstrated that in most regions body size decreased before the earliest manifestations of agriculture, that regional patterns of phenotypic variation over time are variable, and that this spatiotemporal variation in stature and body mass is not directly associated with the onset of the Neolithic. Given their timing, these trends cannot simply be explained by subsistence changes related to the reliance on domesticated plants and animals. We also noted recent phenotypic diversification, most pronounced in the last 2,000 years, which requires further study but may stem from a combination of demographic expansion, genetic diversification, and socio-economic inequality. It is worth noting that the Levant, where the earliest transition to agriculture unfolded as a complex process over millennia, showed relatively stable stature and body mass over time. The Levant is a region characterized by long-term population continuity and the in situ domestication of numerous species of indigenous plants and animals over an extended period of the terminal Pleistocene and early Holocene. The transition to agriculture in this region involved a long period of mixed hunting and gathering and the cultivation of crops and domesticates that were well adapted to local environmental conditions. Similarly, there was no significant change in stature through time in China after plant domestication, and there was an increase in body mass among males during the later Holocene. This is a region that is also characterized by population continuity, local domesticates, a very long period of mixed foraging and farming rather than an abrupt agricultural transition, and high levels of environmental productivity. It is important to note that our approach of comparing population trends by region may confound the local impacts of migrations and gene flow, such as the well-documented increase in steppe ancestry among northern Europeans, which may have influenced north-south gradients in human stature; similar population movements in other regions likely influenced the complexity and timing of cultural and phenotypic changes. In South Asia, for example, we noted long-term reductions in stature and body mass throughout the Holocene. The region, however, exhibits a high degree of ecological diversity and is characterized by the adoption of different domesticates that originated in East Asia, Western Asia, and Africa in different parts of the Indian subcontinent.

Similarly, in the Nile Valley, another region characterized by the adoption of plant and animal domesticates from other regions, the results are highly variable and likely confounded by the complexity of the migration history of the region. At present, there are insufficient data to match aDNA evidence for ancestry with direct phenotypic measures at the broad scale presented in this paper. However, it is likely that underlying genetic variation and changes in the sociocultural environment, including diet, underpin phenotypic change. Further research will be required to clarify long-term spatiotemporal trends in phenotypic and genetic variation. We also aimed to test the LGH by determining whether the geographic and temporal timing of selection for LP phenotypes is associated with increases in stature and body mass. The most significant mid-Holocene increases in stature and body mass occurred in Northern Europe between 7 and 4 kya, and these were preceded by increases in stature in Central Europe between ∼8 and 5 kya. Both regions provide evidence for mid-Holocene selective sweeps in genetic variants associated with LP, lending preliminary support to the LGH. Within Northern Europe, modest increases in body mass were noted among males in Britain, but the most significant trends toward increased stature and body mass were found in the Baltic and southern Scandinavian regions. The heat map results demonstrate how the current patterns of stature and mass variation in Europe were established throughout the mid to late Holocene. While size increases were noted in regions where there is evidence of natural selection in response to dairying, we noted different trends among males and females, with more significant increases in stature generally expressed among men and more significant variation in body mass among women. We suggest this is explained by greater plasticity among men, particularly in stature, in response to environmental and cultural fluctuations, while women's phenotypic variation is better able to buffer environmental stress via sexual dimorphism in body mass that reflects lifelong differences in energetics and somatic investment. There is evidence that males show greater stunting in response to early-life undernutrition, which would lead to greater variation in adult male statures. While skeletal methods of body mass estimation do not generally reflect late-life accrual of body mass, both lean mass and fat mass are components of maternal fitness, and substantial variability in these tissues emerges prior to reproduction, suggesting that body mass variation is more directly linked to female fitness than stature. Phenotypic plasticity may also have been expressed most strongly late in development, where IGF-I factors in dairy milk may have directly fueled growth differences and sexual dimorphism. In general, we note that while the timing of size increases corresponds with selective sweeps in lactase persistence, it is unclear whether phenotypic variation reflects underlying genetic variation or whether phenotypic plasticity precedes later genetic adaptation; there is growing evidence that the latter is an important mechanism of adaptability. Overall, our results provide provisional evidence for greater phenotypic stability in regions of in situ domestication and where the transition to agriculture was gradual over millennia.
The dispersal of farmers into novel environments where foreign domesticates may have struggled to establish appears to have led to greater phenotypic diversity in human populations.

More importantly, the F-statistics demonstrate that the instruments have sufficient power

We find no evidence that the dams included in the sample are more or less likely to be used for irrigation purposes or to supply water to cities. However, the excluded and included dams differ in terms of their height, the size of their reservoir, their capacity, and their average capacity lost to sedimentation. These results suggest that our inclusion criteria are somewhat biased toward larger dams that can retain more water, but it is uncertain whether this is likely to bias our analysis of dam performance and its impact on child nutritional status. In their influential paper, Duflo and Pande use Indian districts as their unit of analysis and proceed to identify which areas are upstream and downstream from each other. However, it is unclear whether one can apply this strategy in Africa. In particular, visual inspection of administrative regions in Africa reveals that the borders of many regions run at least partially along rivers; see, for instance, the case of the Southern African tip in Figure 1.3. As a consequence, many regions contain both the catchment and the command area of a dam. Strobl and Strobl propose an arguably superior spatial breakdown in terms of upstream and downstream relationships that is based on actual river flow data. The U.S. Geological Survey Data Center has developed a geographical database, HYDRO1K, providing a number of derivative products widely used for hydrological analysis. They use the drainage basin boundaries from HYDRO1K, which divide the African continent into 7131 6-digit drainage basins with an average area of 4200 km2. More importantly for our analysis, the database assigns to each basin a code that allows one to determine whether it is upstream of, downstream of, or unrelated to another basin.

Figure 1.4 depicts the spatial breakdown of the African continent according to our 6-digit basins. For comparison, the figure depicts these jointly with the outline of the country borders. Basins vary greatly in shape and size, with a large number crossing national borders. Figure 1.5 depicts the 6-digit basins and the outline of administrative regions in the Southern African tip. The figure confirms that, even at the sub-national level, there is little correspondence between administrative regions and 6-digit basins. A similar picture emerges for the Southern Indian region, where there is no obvious correspondence between administrative regions and 6-digit basins; see Figure 1.6. The main challenge in estimating the effect of dams on child nutrition is that dams are unlikely to be randomly allocated across regions, leading to a serious endogeneity problem. Moreover, with a cross-section of 6-digit river basins, we are unable to control for time-invariant basin characteristics that influence dam location and are correlated with child nutrition, a strategy that would attenuate the endogeneity problem. In their study of Indian dams, Duflo and Pande use the share of dams in a state prior to their period of analysis, interacted with a district's suitability for dam construction based on the district's river gradient, to construct a prediction of the number of dams in each district. They then use this predicted number of dams as an instrument for the actual number of dams in a district. In this paper, we implement an instrumental variable strategy developed by Strobl and Strobl, who modify Duflo and Pande's approach along several dimensions. Strobl and Strobl use the fact that, starting with European colonization, a number of treaties were signed between African states to clarify the management of water resources.

Treaties, especially those signed in the colonial period, focused on the division of water resources or encouraged the construction of dams; for instance, Lautze and Giordano note that about three quarters of the treaties cited as a goal the construction of dams for hydropower purposes and/or the expansion of irrigated land. Strobl and Strobl use the fact that every country on the African continent has territory in at least one treaty basin. In the HYDRO1K data set, treaty basins correspond to the 1-digit and 3-digit Pfaffstetter code classification and cover 60 per cent of Africa's total land area. To construct the relevant geographical delineation of the policies influencing dam construction, Strobl and Strobl use two databases. The first is the International Freshwater Treaties Database, which provides a comprehensive collection of international freshwater-related agreements since 1820, including summaries of each agreement and coding by the year signed and the river basins and countries involved. The second is the database on the historical formation of treaty basin organizations in Africa compiled by Bakker. Combining these two databases reveals a total of 98 treaty basin organizations formed since 1884, involving 53 countries and 59 river basins. Figure 1.7 depicts these treaty basins. As emphasized earlier, the treaty basins are clearly transnational, generally cutting across several countries. Moreover, their size, ranging from the 1-digit to the 3-digit Pfaffstetter level, and their potential extent of coverage are at a substantially larger scale than the individual regions we use as our unit of analysis, the 6-digit Pfaffstetter basins. In this paper, we use this specific policy context and the approach of Duflo and Pande to develop an instrumental variable strategy to estimate the effect of dams on child nutrition.

As in Duflo and Pande, we use the fact that a 6-digit basin's suitability for dams should influence the number of dams built in the basin relative to other 6-digit basins in the same treaty basin. More specifically, we interact a 6-digit basin's river gradient with the proportion of dams in the treaty basin it falls into as an instrument for the number of dams in the 6-digit basin. As such, we rely only on within-treaty-basin differences in suitability for dams to estimate the effect of dams on child nutrition. Moreover, in the African context, an important distinction needs to be made between perennial and ephemeral rivers: the former flow continuously, whereas the latter carry water for only part of the year. Ephemeral rivers tend to be located in the drylands of Africa and are much less suitable for dams (see Seely et al.). For instance, to intercept a large volume of water, a dam on an ephemeral river must be large relative to average inflows, but such dams carry a high risk of failure because of the unpredictability of flash floods. Nevertheless, because of the lack of sufficient perennial water sources, many countries rely at least in part on ephemeral rivers for dam locations as well. For example, in Namibia only 10 per cent of the population rely on perennial rivers for their livelihood, and only 3 of the 19 major dams in the FAO database are located on those rivers. Treaty basin fixed effects ηb control for time-invariant characteristics that affect child nutrition and are correlated with the likelihood of dam construction, allowing us to use only within-treaty-basin, cross-sub-basin variation for identification. However, even in this setting there might be unobservable determinants of child nutritional status that are correlated with the incidence of dam construction, in which case OLS estimates of the effect of dams will be biased. For instance, if sub-basins where households are relatively richer are more likely to receive dams, then the OLS estimate of β1 will be biased upward while the OLS estimate of β2 is likely to be biased downward. As in Duflo and Pande, we use the non-monotonic relationship between river gradient and the incidence of dam construction to implement an instrumental variable strategy. The approach consists of using exogenous variation in the geographic features of different river basins to predict the number of dams in a sub-basin. This predicted number of dams is then used as an instrument for the actual number of dams. We construct measures of sub-basin geography such as elevation and river gradient using topographic information for multiple cells in each river basin. This information is used to compute the fraction of each sub-basin in different elevation categories and the fraction of each river basin falling into four gradient categories. Lastly, to compute river gradient, we restrict attention to cells in a sub-basin through which a river flows and compute the fraction of area in the above four gradient categories. Our panel on dam construction allows us to use all available information to estimate the number of dams in a given sub-basin within a river basin at particular points in time. Three sources of variation are used to predict the number of dams in a sub-basin: differences in dam construction across years in Africa, differences in the contribution of each river basin to the increase in dams built, and differences across sub-basins driven by geographic suitability.
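To make the mechanics of this strategy concrete, the sketch below simulates a stylized version with invented data: the instrument is the interaction of a basin-level gradient share with the share of dams in its treaty basin, treaty-basin dummies stand in for the fixed effects ηb, and the dam count is endogenous because it shares an unobserved confounder with the nutrition outcome. This is a generic two-stage least squares illustration, not the paper's exact specification, which distinguishes perennial and ephemeral rivers and uses FGLS and feasible optimal IV estimators.

```python
import numpy as np

rng = np.random.default_rng(42)
n, n_treaty = 1200, 30                     # 6-digit basins, treaty basins (hypothetical)
treaty = rng.integers(0, n_treaty, n)      # treaty-basin membership
grad_mod = rng.uniform(0, 1, n)            # share of river with moderate gradient
dam_share = rng.uniform(0, 1, n_treaty)[treaty]  # share of dams in the treaty basin

u = rng.normal(0, 1, n)                    # unobserved confounder
z = grad_mod * dam_share                   # instrument: gradient x treaty-basin dam share
dams = 2.0 * z + 0.8 * u + rng.normal(0, 1, n)          # endogenous dam count (stylized)
fe = rng.normal(0, 0.5, n_treaty)[treaty]               # treaty-basin fixed effect
haz = -0.3 * dams + fe - 0.5 * u + rng.normal(0, 1, n)  # height-for-age z-score

def ols(y, X):
    """Ordinary least squares via numpy's least-squares solver."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

D = np.eye(n_treaty)[treaty]               # treaty-basin dummies (fixed effects)
X_first = np.column_stack([z, D])          # first stage: dams ~ instrument + FE
dams_hat = X_first @ ols(dams, X_first)

X_second = np.column_stack([dams_hat, D])  # second stage: haz ~ predicted dams + FE
beta = ols(haz, X_second)
print(f"2SLS estimate of the dam effect: {beta[0]:.2f} (true value -0.30)")

# Naive OLS is biased because dams and the outcome share the confounder u.
beta_ols = ols(haz, np.column_stack([dams, D]))
print(f"OLS estimate: {beta_ols[0]:.2f}")
```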
First, we show that river gradient matters for dam location. As a first step, we regress the number of dams in 2000 on the fraction of the river falling in each gradient category, by type of river, the average gradient in the 6-digit basin, river length by type of river, the total area of the basin, and treaty basin fixed effects. We report only the coefficients on our main variables of interest.

The results of this analysis are reported in Table 1.3, columns and , and are consistent with Duflo and Pande's finding for perennial rivers: moderate gradients on perennial rivers are more likely to be associated with dam construction. We also find that high gradients are less likely to be associated with dam construction. For ephemeral rivers, we find that moderate and high gradients are less likely to receive dams. One possible explanation is that ephemeral rivers tend to require a wider water flow for dam construction and tend to be less steep than perennial rivers. Moreover, many of the dams with a water supply purpose tend, in our data, to be located on low-gradient ephemeral rivers. We also estimated the model on the sample of dams with irrigation as one of their major purposes and found qualitatively similar results. Overall, these results support using river gradients calculated separately for perennial and ephemeral rivers as predictors of dam construction. Next, we report in columns and of Table 1.3 the estimated coefficient of RGrjks×Dbt from the first-step regression in the pooled sample over all years. One column shows the results for all dams, while the other reports the results for dams with some irrigation purpose only. The results for perennial rivers are broadly similar to the cross-sectional results. For ephemeral rivers, we find that as the share of dams in the treaty basin increases, additional dams are less likely to be built in 6-digit river basins with very small river gradients. Table 1.4 presents estimates of the effect of dams on the nutritional status of children. Panel A provides Feasible Generalized Least Squares estimates, and panel B Feasible Optimal IV estimates. The coefficient on "own dam" captures the impact of dams built in that 6-digit river basin, while "upstream dam" measures the effect of dams in upstream 6-digit river basins. In this table each row corresponds to a separate regression; rows 1 and 3 present estimates where the dependent variable is the height-for-age z-score or an indicator equal to one if a child's height-for-age z-score is below -2, while rows 2 and 4 present estimates using the weight-for-age z-score or an indicator equal to one if a child's weight-for-age z-score is below -2. The models in columns 4 to 6 and 10 to 12 are estimated using a linear probability model. In columns 2 to 6, the analysis is restricted to dams with irrigation as one of their main purposes, while in columns 7 to 12 we include all dams.
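For clarity, the binary dependent variables described above are constructed from the z-scores as in the minimal sketch below, using invented values; an indicator equal to one flags children whose z-score falls below -2, the outcome of the linear probability models in columns 4 to 6 and 10 to 12.

```python
import numpy as np
import pandas as pd

# Hypothetical child anthropometry; the indicators used in Table 1.4 are
# derived from the z-scores as shown here.
rng = np.random.default_rng(1)
kids = pd.DataFrame({
    "haz": rng.normal(-1.2, 1.4, 8),   # height-for-age z-score
    "waz": rng.normal(-1.0, 1.2, 8),   # weight-for-age z-score
})
kids["stunted"] = (kids["haz"] < -2).astype(int)      # LPM outcome, columns 4-6
kids["underweight"] = (kids["waz"] < -2).astype(int)  # LPM outcome, columns 10-12
print(kids.round(2))
```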

Survey data show a very close relationship between the value of information received from an organization and trust in that organization

These results corroborate previous studies demonstrating that ecological and moral concerns matter in farmer decision-making and that motivations are not exclusively profit-driven. The latter statement seems intuitive—growers would hope policymakers would incorporate a diverse range of perspectives into their decisions, especially in light of growers' sentiments about a lack of stakeholder participation during the updated waiver. Interestingly, one issue that more farmers agreed with in 2006, yet more respondents disagreed with in 2015, was that "management practice requirements of the Agricultural Waiver are fair to growers." As described in Chapter 3, fairness was a hotly contested issue in the 2012 Agricultural Waiver negotiation process, spanning a number of equity issues from the types of BMPs required to the cost and unequal burdens of tiered mandates. This finding is another testament to farmers' increasing frustration with the Ag Waiver process and mandates, as alluded to by the Farm Bureau. The final series of questions in the survey asked growers about their trust in and communication with other groups and water quality agencies, as well as the value of the information they received from those organizations. In both years, environmental groups were the least trusted and had the lowest contact frequency, whereas other farmers were the most communicated with but not necessarily the most trusted. A Pearson's correlation test between information value and trust found a strong positive relationship between the two variables; the coefficients were close to a perfect positive relationship, varying only between 0.80 and 0.99. While data from this survey are not sufficient to test a causal relationship (for example, whether the quality of information from a given agency influenced feelings of trust), these results do substantiate the institutional rational choice model's expectation that there is a strong relationship between information and trust. There also appeared to be a close positive relationship between the amount of communication, trust, and information value associated with a given organization.
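The correlation reported above is an ordinary Pearson test between two score vectors. The sketch below shows the calculation with hypothetical per-organization scores on the survey's 0-10 scale; it is illustrative only and does not reproduce the survey data.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-organization mean scores (0-10 scale) for the value of
# information received and trust.
info_value = np.array([6.5, 4.2, 7.1, 3.0, 5.8, 6.9, 2.5])
trust      = np.array([6.8, 4.0, 7.4, 2.8, 5.5, 7.2, 3.1])

r, p = pearsonr(info_value, trust)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```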

These results support the body of literature on the connection between trust and contact frequency. Interestingly, the results show a few exceptions to this trend, just as they did in Lubell and Fulton's study. Growers reported a dip in trust despite more communication in their relationships with a few organizations, all of which had regulatory roles, including the Regional Board and Preservation, Inc., and to a lesser extent the County Agricultural Commissioner's office. These cases could be examples of the "institutional distance" phenomenon, whereby regulators might have a higher frequency of contact with growers, but physical distance preventing face-to-face communication and/or centralized decision-making makes the institutional distance greater. Another possible explanation for the dip in trust despite more communication could be differing values and interests between growers and regulatory agencies, as described by the Advocacy Coalition Framework. These differing interests could also help explain the low trust scores for the other group that might be perceived as having very different views and interests than growers—environmental groups, which scored 3.6 out of 10 in 2006 and 2.8 in 2015. Despite these exceptions, a more in-depth look at the association between trust and communication confirms a strong relationship between the two variables for most non-regulatory agencies. The 2015 survey results show a significant improvement in trust when a grower had contact with an organization compared to when they had no contact with that group. The only two exceptions to this trend were farmers' relationships with the Regional Board and with other farmers. In both cases, trust did not significantly improve with contact, perhaps suggesting that the complex historical relationships with these two polarizing groups—the group regulating farms and the group most aligned with growers' values—overshadow factors such as contact frequency when measuring trust. To test the Farm Bureau's observation of trust decreasing between the two Agricultural Waivers, mean trust in each agency was compared side by side for the two surveyed years and significance was tested with a two-tailed t-test. Results show that trust in the Regional Board decreased significantly between 2006 and 2015. Yet despite the significant decline, the mean trust scores for the Regional Board were relatively close between the two surveys.
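The year-to-year comparison described above is a standard two-sample, two-tailed t-test on mean trust scores. The sketch below runs it on simulated grower-level scores with hypothetical sample sizes and means; Welch's variant is used here since equal variances need not hold, which may differ from the exact test applied in the study.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)
# Hypothetical grower-level trust scores (0-10) for one agency in each survey year.
trust_2006 = rng.normal(5.4, 1.8, 120)
trust_2015 = rng.normal(4.8, 1.9, 140)

t, p = ttest_ind(trust_2006, trust_2015, equal_var=False)  # Welch's two-tailed t-test
print(f"t = {t:.2f}, p = {p:.4f}")
```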

Another group that experienced a significant decrease in trust over this time period was environmental groups. While the information from the survey is not comprehensive enough to verify a causal relationship between decreased trust and the two Ag Waivers, the significant decrease in trust over time does give credence to the Farm Bureau's concern about growers' declining relationship with the primary regulatory agency, the Regional Board. Interestingly, one group that might have been expected to gain trust from growers between the two surveys, but did not, was Preservation, Inc. Created in 2004, Preservation, Inc. was still little known during the first survey, but by the second survey the agency was providing valuable services to the vast majority of growers. One possible explanation for the unchanging trust in the primary monitoring agency despite more communication is that their core values differed substantially, heavily swaying growers' perception of the agency. Finally, a subset of responses from the third set of questions (opinions on water quality management practices) and a subset of responses related to trust from the fourth set of questions were assessed for correlation, with particular attention to trust in the Regional Board. Findings suggest that trust in the Regional Board is associated with growers' opinions on water quality management practices. Trust in the Regional Board was greater among growers who agreed or strongly agreed with statements related to the fairness, effectiveness, and success of the water management practices mandated in the Ag Waiver, and lower among growers who disagreed with these statements. This last set of findings is intuitive, given previous research on trust being a function of aligning core beliefs between two groups. As Lubell states, "People will trust actors who they believe have very similar beliefs and interests to their own, and their trust will decline as the difference in policy-core beliefs increases." Growers trusted the Regional Board more when they agreed or strongly agreed with the Regional Board's decisions and opinions on water quality practices, and growers' trust in the Regional Board declined when they disagreed or strongly disagreed with the BMP provisions implemented in the Ag Waiver. Interestingly, there is a stronger correlation among growers who "agreed" with statements than among those who "strongly agreed," perhaps indicating a threshold or range at which growers' trust is correlated with beliefs.

Previous research shows that repeated, face-to-face communication is a promising tool to bolster trust between water quality agencies and growers, as well as to alter attitudes relating to water quality management practices. Prior studies also demonstrate that other factors, such as historical relationships, core values, and institutional distance, can act as equally strong forces in influencing trust, undermining the significance and value of communication between policy stakeholders. Results from this study corroborate this literature. Growers’ trust in the majority of regional agricultural and water quality groups was closely correlated with the amount of communication as well as the value of the information they received from each group. However, growers’ trust in a few agencies, all with regulatory arms, did not correlate with contact frequency or information value. This was true in 2006, but much more so in 2015, and it was particularly true of growers’ trust in the primary regulatory agency, the Regional Board. These findings suggest that growers’ frequency of contact with the Regional Board, which increased between 2006 and 2015, did not relate to trust in the regulatory agency, which decreased between 2006 and 2015. These results do not suggest, however, that communication with regulatory agencies does not matter at all. Rather, communication could play an important role in trust-building relationships, as suggested by the literature, but more research is needed into the types of communication utilized by the Regional Board, how communication has changed over time, and how it might influence relationships with the regulated group. Preliminary research from a document review, discussed below, demonstrates that communication patterns are becoming more institutionally distant, a trend that deserves more research attention. While contact frequency with the Regional Board was not correlated with trust, opinions of water quality practices were. As the last set of findings illustrates, in 2015 there was a positive relationship between growers’ trust in the Regional Board and their opinions on water quality management decisions. These results cannot confirm causation—that trust leads to a convergence of beliefs, or that a convergence of beliefs leads to trust; however, prior studies suggest the latter. Building trust when two rival political actors do not hold the same views is not a simple task, especially because core beliefs can be culturally embedded or shaped by historical events. However, building trust between adversaries is not impossible and should begin by achieving agreement on, at the very least, empirical issues with sound evidence. Leach and Sabatier offer a few ways to undertake this process: a “professional forum” exposing scientific evidence from competing coalitions, mediated by a neutral facilitator; starting negotiations with a period of “joint fact finding” and consensus-building on the basic dimensions of the various problems; and/or pursuing empathy-building exercises such as field trips. Another aim of this study was to examine anecdotes from the Farm Bureau regarding declining trust and collaboration between farmers and the Regional Board over the course of the two Ag Waivers.

While encouraging accounts of a working, collaborative relationship between growers and the Regional Board during the first Agricultural Waiver are difficult to substantiate from the survey responses, results from this longitudinal study, as well as further evidence from agricultural testimonies, do confirm that what rapport remained after 2004 was markedly soured during the next round of negotiations. There was a significant drop in trust between the two Agricultural Waivers, and growers reported being more frustrated by the policy process during the second Ag Waiver—the majority agreeing that regulations were “unfair” and “too tough” despite their perceived efforts in adopting water quality management practices and their desire to be involved in the policy process. These results are somewhat contrary to literature assuming that “trust ought to be correlated with the length, depth, and recency of past collaboration”; only eight years prior to the follow-up study, farmers and the Regional Board had joined efforts to pen the first-ever regulatory program for agricultural water quality in the Central Coast. Why did trust degrade over this time period? And what lessons might be learned for future Agricultural Waiver negotiations? One somewhat fatalistic explanation for the waning relationship between farmers and the Regional Board is that the decline was inevitable. Comfortable with the 2004 provisions that they had collaboratively designed, growers were frustrated by the idea of increasing mandates. Unavoidably, the 2004 Ag Waiver was going to be made tougher—scientists, the State, and the public demanded that the Regional Board act on the growing evidence that water quality was not improving. This first explanation has dismal implications for future Ag Waivers, since it assumes that little could have been done to save a relationship that was fleeting and inevitably going to decline. A second, more plausible theory is that the approach the Regional Board staff took during the drafting of the second Ag Waiver, beyond simply increasing mandates, tainted relations. The first Agricultural Waiver took a softer, collaborative, and educational approach, slowly easing the agricultural industry into water quality regulations. Negotiations for the second Agricultural Waiver, by contrast, came out of the gates strong, proposing a very tough 2010 Draft Order that took a more centralized approach, categorizing farms into set tiers with coupled mandates, bringing individual monitoring into the fold for the first time, and requiring certain blanket provisions for all farms. Several agricultural interests claimed the new regulatory program was “the most rigorous in the state”. Although the new waiver was significantly watered down by the time it passed in 2012 and was ratified by the State Board in 2013, the policy process leading up to the 2010 proposal greatly strained rapport, opening a rift between growers and the Regional Board that would be difficult to close during that round of negotiations.

Theory and experience suggest that the most successful pollution prevention tools are performance-based

In the U.S. and Canada, point source dischargers must obtain permits to release emissions, whereas non-point source dischargers largely remain uninhibited by federal mandates. In these WQT programs, point sources trade with other point sources to avoid costly discharge reductions at their industrial facilities, and only a handful of non-point sources are involved on a voluntary basis. On the limited occasions that the agricultural industry does engage in trading, farm non-point sources almost always assume the role of “sellers” in the program, rather than “buyers”. Under such circumstances, point source dischargers pay non-point sources to comply with water quality standards, creating a profit-making opportunity for agricultural polluters. This lopsided relationship between point and non-point sources highlights another related problem: the absence of a fully capped trading system. Though trading schemes show promise in transitioning the regulatory framework from individual discharge limits to river basin management based on group controls, for the system to realize its full potential, all dischargers—point and non-point—must participate. A further complication, in both partially and fully capped WQT systems, is accounting for differences in emission loads between point and non-point sources. WQT programs use a trading ratio to calculate how many units of estimated non-point source loadings should be traded for a unit of point source loadings. Because of the uncertainty of non-point source loadings, trading ratios are almost always set at 2:1 or greater to create a margin of safety. In this scenario, point sources must purchase two units of estimated non-point reductions for every unit of excess emissions, as sketched below. Interestingly, a study on trading ratios found that political acceptability, rather than scientific information, determined ratio calculations. Despite these challenges, several notable successes have demonstrated that group caps and emission allocations can be enforced and water quality standards met.
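A minimal sketch of the trading-ratio arithmetic just described follows; the function name and figures are illustrative assumptions rather than values from any cited program.

```python
def credits_required(excess_emissions_kg: float, trading_ratio: float = 2.0) -> float:
    """Units of estimated non-point source reductions a point source must purchase
    for a given amount of excess emissions, under a chosen trading ratio."""
    return excess_emissions_kg * trading_ratio

# A facility exceeding its allocation by 100 kg of nutrient load would need
# 200 kg of estimated non-point reductions under a 2:1 ratio, or 300 kg under 3:1.
print(credits_required(100.0))        # 200.0
print(credits_required(100.0, 3.0))   # 300.0
```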

For example, in 1995, farmers in California’s San Joaquin Valley implemented a tradable discharge permit system to enforce a regional cap on selenium discharges. The selenium program set a schedule of monthly and annual load limits and imposed a penalty on violations of those limits. In Canada’s Ontario basin, a phosphorus trading program was established in which point sources purchase agricultural offsets rather than update their facilities. A third party, South Nation Conservation, acts as a facilitator, collecting funds from point sources and financing phosphorus-reducing agricultural projects. It is estimated that the program has prevented 11,843 kg of phosphorus from reaching waterways. Numerous other pilot trading projects show promise, but need a serious overhaul if they are to realize their full potential. One prominent example worth mentioning is the U.S.’s Chesapeake Bay Nutrient Trading program. In response to President Obama’s executive order to clean up the Chesapeake Bay, the largest estuary in the United States, the six states contributing pollution to the Bay are in the national spotlight as they figure out how to achieve pollutant allocations. Currently, their plans to meet water quality requirements are falling short. Economic scholars contend that a nutrient trading plan could offer the most cost-effective means of complying with the looming TMDL, but uncertainty about agricultural sources’ willingness to participate, questions about the most appropriate trading ratio, and high transaction costs remain issues. The most traditional form of command-and-control regulation is the performance standard. Though often presented as an alternative to market-based approaches, performance standards can complement a tax or emissions-trading system and can also be used alongside positive incentive schemes. In an incentive approach, if pollution exceeds a standard, a financial penalty or charge might be triggered, whereas if a farmer is well within compliance, the farmer might receive a positive payoff for their efforts. Standards can also be used in trading through pollution allowances with enforceable requirements.

And in a mandate scenario, standards are compulsory and may or may not be accompanied by other motivating devices. Performance standards have successfully reduced point source water pollution (for example, under the E.U.’s IPPC Directive and the U.S.’s NPDES program) as well as pollution of other media. Unfortunately, the same suite of challenges—the use of proxies, the costs of monitoring and modeling, and the uncertainty of environmental outcomes—faces performance standards within the context of non-point source abatement. These perceived obstacles have largely precluded the use of performance tools for agricultural NPS control. However, a growing body of literature expounds the benefits of using performance approaches for this industrial sector. Performance measures are used to encourage Best Management Practices. Using models to predict the level of BMP performance can provide powerful decision-making data to farmers, helping them make appropriate management decisions. Performance modeling is most effective when conducted at the field scale. For example, the Performance-Based Environmental Policies for Agriculture initiative found that the implementation of BMPs, such as changing row directions or installing buffer strips, reduces the risk of pollution to varying degrees depending on several on-farm factors. Allowing farmers to exercise site-specific knowledge in an individualized context highlights an important, laudable feature of performance-based approaches: flexibility. Some suggest that practice-based tools, ones that mandate or incentivize the installation of certain BMPs, are not as cost-effective as their performance-based counterparts, largely because performance-based instruments provide the flexibility to choose the practices that will achieve water quality improvements at the lowest cost. In the case of agricultural water pollution, farmers are the predominant actors targeted for compliance. This is logical, since farmers’ management practices influence the amount of pollution that reaches nearby water bodies; however, it is worth noting that other actors involved in the pollution process could also be targeted for regulation.

For example, the control of pesticides has been managed by regulating the chemical manufacturer, imposing mandates or taxes on chemicals sold on the market. This type of tool could be highly effective in reducing the amount of pesticides or fertilizers produced, sold, bought, applied, and discharged into water bodies, creating a ripple effect through the whole production stream. Targeting actors further “upstream” is illustrative of what Driesen and Sinden call the “dirty input limit,” or “DIL.” Manufacturing companies are only one of several points along the production stream where the DIL approach could be effective; alternatively, pollutants could be controlled at the point of application. As the authors suggest, the DIL approach is useful beyond the tool-choice framework in that it provokes a new way of thinking about environmental regulation. Among the least invasive, but most important, instruments for successful NPS management, capacity tools provide information and/or other resources to help farmers make decisions that achieve societal and environmental goals. Capacity tools are typically associated with voluntary initiatives rather than mandates. Because it can be difficult for farmers to visually detect the water quality impacts of their practices, learning and capacity tools become an invaluable means of conveying information to farmers. Farmers’ perceptions of the water quality problem and of their own role in contributing to pollution are among the most influential factors in changing farm management practices. In California, the Resource Conservation Districts, University of California Extension, and the University of California’s Division of Agriculture and Natural Resources are examples of local government agencies providing capacity-building services, including knowledge, skills, training, and information intended to change on-farm behavior. In summary, each policy tool possesses strengths and weaknesses that need to be taken into consideration when developing more effective ways to control agricultural pollution. An integrated approach, one that utilizes a diversity of policy instruments to address water quality issues in agriculture, is required. River basin management plans, or the “watershed approach” as it is often called in the U.S., can more appropriately tailor the choice of policy tools to local conditions. Authority has been granted to achieve water quality objectives at the regional jurisdictional level, and the success of these programs will largely depend on the wisdom and will of regional governmental leaders, as discussed below.

What are the major similarities and distinctions between the different approaches to agricultural non-point source pollution regulation available in the U.S. and Europe? And which are most effective? This chapter examined the defining characteristics and application of six policy tools, each of which has been proposed for agricultural pollution abatement. As noted in the introduction, the task of comparing tools is complicated by the multiple facets and dimensions embedded in each tool. While research suggests that a mix of policy tools will outperform any single instrument, clear strengths, weaknesses, and unique traits distinguish the tools from one another and should be taken into consideration when regulators choose the means to meet environmental goals. Table 2-1 lists several categories by which to compare a select group of policy tools.
As the table illustrates, a number of relationships are particularly important. Emphasis is placed on the difference between tools tied to emissions and those not tied to emissions. The clear benefit of tools tied to emissions is their ability to track and measure environmental improvements. However, therein lies these tools’ biggest weakness: reliance on proxies to predict the extent of environmental improvements.

The information burdens of constructing models that adequately predict the impact of a farm’s discharges are so great that many practitioners and scholars have shrugged off the task as impossible. Encouragingly, a growing body of literature and scholarly discussion shows promise for improved computer simulation efforts. Until more robust models are designed with better information, policymakers will continue to rely on the second category of tools—those not tied to emissions. Tools untethered to specific pollution targets work by encouraging water quality improvements through incentives, contracts, and/or information. These tools tend to be more politically favorable, but less effective by themselves, save one—the dirty input limit. While capacity tools can provide important information to farmers and best management practices may improve water quality, the DIL can prevent pollutants from ever reaching rivers and lakes, or even farms. With the U.S. pesticide and stormwater regulatory programs as models, regulating inputs has the potential to achieve more than regulating emissions. But the DIL is not without obstacles, including a heavy reliance on scarce information to set appropriate limits and the political will needed to restrict chemical or fertilizer production and/or use.

Non-point source pollution, or pollution that comes from many diffuse sources, continues to contaminate California’s waters. Agricultural non-point source pollution is the primary source of pollution in the state: agriculture has impaired approximately 9,493 miles of streams and rivers and 513,130 acres of lakes on the 303(d) list of waterbodies statewide. The 303(d) list refers to a section of the Clean Water Act mandating that states and regions review and report waterbodies and pollutants that exceed protective water quality standards. Agricultural pollution in California’s Central Coast has detrimentally affected aquatic life, including endemic fish populations and sea otters, the health of streams, and human sources of drinking water. Despite the growing evidence of agriculture’s considerable contribution to water pollution, the agricultural industry has, in effect, been exempt from paying for its pollution and, more importantly, has failed to meet water quality standards. How best to manage and regulate non-point source agricultural water pollution remains a primary concern for policymakers and agricultural operators alike. This case study focuses on the Conditional Agricultural Waiver in California’s Central Coast, the primary water pollution control policy in one of the highest-valued agricultural areas in the U.S. The Central Coast Regional Water Quality Control Board is under increasing pressure to improve water quality within its jurisdiction, especially with the added onus of a 2015 Superior Court ruling that directed the Regional Board to implement more stringent control measures for agricultural water pollution. Pressure on the Regional Board is exacerbated by regulatory budget constraints, interest groups, and unanticipated events. Given these pressures, choosing appropriate criteria by which to evaluate the success of California’s primary agricultural water quality policies is complicated, but of critical importance. This policy analysis explores the complex process of negotiations, agendas, and conditions at the heart of policy-making, highlighting areas where the 2004 and 2012 Ag Waivers have succeeded in achieving their goals, as well as where they have fallen short. The analysis is divided into two parts.