
The RCT approach therefore enables tailored design of oxidation and hydrogenation catalysts

It should be noted that results of experiments on young plants, which may be highly susceptible to drought and drought-related mortality due to limited carbon reserves, may not scale directly to large, mature individuals in the field. This study showed high mortality in 2-year-old A. glauca exposed to a fungal pathogen with and without drought, in contrast with field observations of diseased, large adults that exhibit severe canopy dieback and are riddled with fungal cankers, yet still survive. Previous studies have yielded similar results: for instance, in drought years photosynthesis was shown to be reduced far more in oak seedlings than in adults relative to wet years, and He et al. reported that responses of red maple and paper birch saplings to a 1995 drought were significantly different from those of mature adults. Similarly, since hosts are often able to allocate carbon reserves to compartmentalize canker-causing agents like N. australe within carbon-rich barriers, larger individuals with more biomass and greater carbon stores can direct more resources to defense than younger, smaller individuals. Thus, mature plants can better persist through biotic attack during environmental stress than their younger counterparts, experiencing varying levels of canopy dieback rather than full mortality. Arctostaphylos glauca is an obligate seeder, meaning individuals are killed by fire and populations must be maintained by recruitment from seed rather than by resprouting from the base. Therefore, young, small individuals may be of greatest concern for future populations of this species. Because current research predicts more frequent and extreme drought events, more exotic pathogens, and more frequent fire in these southern California shrublands, populations of A. glauca could decline because small individuals may be highly susceptible to disease and mortality.

A valuable next step for understanding these risks and predicting future shifts in vulnerable chaparral communities would be to monitor young recruiting populations of A. glauca in the wild for signs of stress, N. australe infection, and mortality. In the face of rapid climate change, it is increasingly important to understand the abiotic and biotic mechanisms driving ecological landscape change. Large plant dieback events can produce major ecological consequences, including changes in vegetation cover, increased fire risk, and changes in hydrology, all of which affect ecosystem structure and functioning. Furthermore, the loss of even a few species can trigger effects on local food web structure and increase the risk of invasion. The results of this study suggest that small individuals of A. glauca, one of the most common and widespread species in the southern California chaparral community, are at high risk of disease and dieback due to opportunistic pathogens and extreme drought. The potential for dieback of Arctostaphylos spp., which provide food for animals such as mice, rabbits, and coyotes and are an important component of post-fire woody regeneration in chaparral, raises concerns regarding changes to ecosystem structure and functioning in the coming decades. Many ecosystems today are facing unprecedented drought; yet the interactions of drought and pathogens in wildland settings are difficult to study because of the multitude of confounding variables and the challenges of manipulating both the pathogens themselves and the climate. Thus, greenhouse studies such as this one are increasingly essential for understanding the influences of drought and pathogens as they relate to dieback events, as well as the relationship between stress and shrub/tree ontogeny. Critical questions remain regarding the relative tipping points for large-scale dieback among historically drought-tolerant species such as A. glauca that today face the combination of extreme drought and novel pathogens.

These pathogens may not express themselves until there is drought, highlighting the need for broader field surveys and long-term monitoring of wildland ecosystems. An important step toward understanding the role of disease in contributing to vegetation change is to isolate pathogens and test their pathogenicity under varying controlled conditions. This study provides one such step for what appears to now be a widespread, opportunistic introduced pathogen in an important native California chaparral shrub. Extreme drought events from climate change have produced immediate and dramatic effects in recent years, with costs often exceeding $1 billion due to their widespread economic and ecological impacts. Among the ecological consequences is widespread tree mortality, even within plant systems that have historically been considered drought-tolerant. While seasonal droughts are known to be a natural and regular occurrence in arid and semi-arid regions, the increased frequency, duration, and intensity with which they have occurred in recent years is highly unusual. Such extreme droughts, referred to as “global-change type drought”, are predicted to continue, and even become the norm, as a result of human-induced climate change. Consequently, species that are typically capable of withstanding regular drought stress may be susceptible to canopy dieback and mortality as a result of shifts in drought regimes. One such plant community that may be vulnerable to extreme climatic change is chaparral. Chaparral shrublands, which occupy approximately 7 million acres throughout California, are a dominant vegetation community in southern California, composed primarily of evergreen, drought-tolerant shrub and subshrub species including manzanita, ceanothus, and chamise. These species are well adapted to the seasonal variations in temperature and precipitation typical of mediterranean climates, where hot, rainless summers are the norm.
However, mediterranean-type regions like southern California are predicted to experience rapid increases in temperature, and increased drought occurrence and severity, resulting from human-caused climate change. These regions have thus been designated as worldwide global change “hot spots”.

Indeed, recent studies have reported extensive mortality of chaparral shrub species resulting from global-change type drought throughout southern California. Thus, climate change represents a significant threat to native plant community persistence in this region. A critical topic for ecological research is understanding where, how, and to what extent plant communities will change as a result of increased drought. Studies aimed at understanding the physiological mechanisms behind drought-related plant mortality – and why some plants suffer mortality from drought while others survive – have elucidated a variety of complex mechanisms of plant mortality. These include loss of hydraulic conductance, exhausted carbon reserves, and susceptibility to pests and pathogens due to a weakened state from drought. Measuring xylem pressure potential can be a useful index of soil water availability, and dark-adapted fluorescence can be a quick and accurate indicator of plant stress, as values drop significantly in water-stressed plants. Together, these may be useful tools for predicting plant vulnerabilities to drought and biotic invasion. Landscape variables such as elevation, slope, and aspect have also been shown to correlate with plant water stress and mortality, and can be useful for predicting vulnerabilities during drought. However, major knowledge gaps remain, and studies combining field mortality patterns with physiological data on plant water stress are rare. Plants employ a variety of complex strategies to cope with drought stress, but generally fall along a continuum from “drought avoiders” to “drought tolerators”. Drought avoidance, also known as “isohydry”, refers to plants that regulate stomatal conductance to maintain high minimum water potentials as soil dries out.
While this strategy reduces the risk of xylem cavitation and subsequent hydraulic failure, it may increase the likelihood of carbon starvation, as C assimilation is greatly reduced. Conversely, drought-tolerant (anisohydric) plants maintain higher stomatal conductance, even at very low water potentials, which allows for continued C assimilation but with greater risk of xylem cavitation. These different strategies can have significant implications for ecosystem-level consequences of severe drought; indeed, recent studies have linked anisohydry with greater levels of mortality in chaparral systems. A historic drought in southern California provided an opportunity to simultaneously measure physiological stress and dieback severity along an elevational gradient in a classically drought-tolerant evergreen chaparral shrub, big berry manzanita. A. glauca is one of the largest and most widespread members of a genus consisting of nearly 100 species. Its range extends as far north as the Cascade mountains and south into Baja California, though it is most dominant in southern California shrublands. It frequently occurs on exposed ridges and rocky outcroppings, and in the chaparral shrublands of Santa Barbara County it grows at elevations of about 500–1,200 m. A. glauca is an obligate seeder, and must recruit from the seedbank following fire.

Compared to resprouters, which regenerate from a carbohydrate-rich burl at their base following fire, seeders tend to be fairly shallow-rooted, and are thus less able to access deep water sources. Seeders are generally considered to be more tolerant of seasonal drought than resprouters, possibly a mechanism for shallow-rooted seedlings to survive summer drought in an open post-fire environment following germination. However, this strategy has also been linked to higher mortality during extreme drought. A. glauca is also known to exhibit anisohydric mechanisms of drought tolerance, and can exhibit extremely low water potentials and high resistance to cavitation during seasonal drought. In 2014, we observed sudden and dramatic dieback of A. glauca in the Santa Ynez mountain range of Santa Barbara, California during a historic drought. The drought that lasted from 2012 to 2018 in southern California was the most severe to hit the region in 1,200 years, with 2014 being the driest year on record. Preliminary field observations indicated greater levels of canopy dieback in lower-elevation stands compared to higher elevations. Dieback also seemed to be more prevalent on exposed and southwest-facing slopes, which in this region experience direct sunlight for most of the day. Other studies have reported significant Arctostaphylos spp. dieback and even mortality during periods of extreme drought stress, further suggesting that species in this genus are vulnerable to drought-related mortality. Additionally, we observed widespread symptoms of fungal infection – including branch cankers and brown/black leaf discoloration – later identified as members of the opportunistic Botryosphaeriaceae family, suggesting multiple factors may be driving canopy dieback in this species. Drought-related mortality has previously been associated with opportunistic fungal pathogens in A.
glauca and other chaparral shrubs, yet few studies have sought to understand the relative levels of drought stress incurred by plants infected with these pathogens, or how stress is related to canopy dieback and/or mortality. A. glauca shrubs are important members of the chaparral ecosystem, providing habitat and food for wildlife through their nectar and berries. Their structure and fire-induced germination strategies also make them significant components of the chaparral fire regime and post-fire successional trajectories. Large-scale mortality of this species could reduce resource availability for wildlife, as well as alter fuel composition and structure in the region, resulting in an increased risk of more intense, faster-burning fires. Therefore, the potential continued dieback of A. glauca is of great concern for ecosystem functioning and human populations alike. Yet because of the heterogeneity of landscapes in this rugged region, it is possible that portions of the landscape will act as refugia for drought-susceptible species. We hypothesized that A. glauca dieback severity is associated with areas of increased water stress across the landscape. To better understand the patterns and trajectory of A. glauca stress and dieback across a topographically diverse region of coastal California, we asked the following specific questions: How severe is drought-related stress and dieback in this region? How do plant stress and dieback severity vary with elevation and aspect across the landscape? How does dieback change across the landscape as a multi-year drought progresses? We chose xylem pressure potential (XPP) as an indicator of plant water availability, and measured dark-adapted fluorescence and net photosynthesis as proxies for drought-related plant stress and physiological function. To address Question 1, we conducted an initial survey measuring general levels of canopy dieback, shrub water availability, and stress in the region.
To address Questions 2 and 3, we conducted a more in-depth study of how shrub water relations and dieback vary with aspect and along an elevational gradient, and tracked changes in dieback severity over the final four years of the seven-year drought. We expected to find areas of low XPP correlated with greater physiological stress responses, and more severe dieback at lower-elevation sites and on southwest aspects.

Overcoming potential limitations regarding conveyance from source to recharge areas is essential

The area selected should be readily accessible to farm equipment for site preparation and maintenance. The site should not disrupt normal farming operations or be in an area that could be easily overlooked and accidentally disked or sprayed. In addition, the site should be well drained to prevent ponding of water or plant dieback. To prepare the area selected for a vegetated ditch, disk and shape the land to carry water and prepare a normal seedbed. Grasses should be planted in the fall, when establishment is favored by cool weather and subsequent winter rains. After the seedbed is prepared, allow the winter rains to bring up the first flush of winter weeds. These should be either sprayed with Roundup or disked. The grasses should then be direct-seeded with a grain drill at 15 pounds per acre by late fall. They can also be broadcast at 20 to 25 pounds per acre and incorporated with a chain harrow followed by rolling. Buctril, MCPA, or 2,4-D can be used to control broadleaf weeds once the grasses have established and have been allowed to grow at least 3 to 4 inches tall, to avoid injury to newly emerged seedlings. Be sure to contact the local agricultural commissioner for restrictions on the use of herbicides. For example, the phenoxys MCPA and 2,4-D cannot be used after March 1 in many counties. Once the grasses are established, they will compete well with weeds, requiring only occasional use of herbicides, hand weeding, or mowing. Since most of the sedimentation or particle retention occurs at the beginning of the filter strip, this area should be closely monitored, and excess sediment should be removed to keep water from diverting to new and easier drainage routes or channels. This may involve reestablishing the grasses by overseeding the area to ensure that a sheetlike flow is maintained as the water comes off farm fields.
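The seeding rates above translate directly into seed quantities for a given ditch. As a minimal sketch (the ditch dimensions here are hypothetical, chosen only to illustrate the arithmetic):

```python
# Rough seed-quantity estimate for a vegetated ditch, using the rates from
# the text: 15 lb/acre drilled, 20-25 lb/acre broadcast.
# The ditch dimensions below are hypothetical, for illustration only.
SQFT_PER_ACRE = 43560

ditch_length_ft = 1500    # hypothetical ditch length
ditch_width_ft = 20       # hypothetical vegetated width

acres = ditch_length_ft * ditch_width_ft / SQFT_PER_ACRE
drilled_lb = acres * 15       # grain-drill rate from the text
broadcast_lb = acres * 25     # upper end of the broadcast range

print(round(acres, 2), round(drilled_lb, 1), round(broadcast_lb, 1))
# → 0.69 10.3 17.2
```

Broadcasting at the higher rate costs more seed but needs no drill; either way, the quantities for a single ditch are modest.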

Gophers and ground squirrels should be controlled and repairs made where channelization of water occurs. Irrigation runoff should supply the water needs of the vegetation in the ditches. Grasses may need to be mowed occasionally to prevent thatch from building up and to deter weeds. If the vegetated drain is grazed, the animals should be watched to prevent overgrazing and stand loss, especially on wet soils. Plant tissue testing may also be needed to ensure that nutrients concentrated in the filter strips have not built up to unhealthy levels for the animals. One method promoted for improving the quality of surface water runoff from furrow-irrigated agricultural fields is to apply polyacrylamide (PAM) to the irrigation water. PAM stabilizes the soil to minimize erosion and promotes the settling of suspended particles. PAM comes in tablet, granular, and liquid formulations. By itself, PAM is not toxic to aquatic life; however, the carriers in oil-based PAM can be toxic to aquatic life at recommended field application rates. For this reason, water-based formulations are recommended. In research trials conducted by the authors at the University of California, Davis, liquid PAM in a loam soil significantly reduced suspended sediment concentrations in surface runoff compared with a control of untreated water, at PAM concentrations of 2.1 ppm and 7 ppm in the source irrigation water. Similar behavior occurred in a clay loam soil at a second field site at California State University, Chico, with a PAM concentration of 1.1 ppm. Terminating the liquid PAM injection once the water reaches the end of the furrows can be as effective as continuous PAM dosing, but this effect may depend on soil texture. Studies on tablet and granular PAM at Davis and Chico showed a similar response to the liquid PAM, with significant reductions in suspended sediment concentrations compared to untreated water. However, proper placement of dry PAM in the furrows was critical for efficacy.
In studies in Idaho conducted by the USDA, dry PAM placed at the head of the furrow was effective.

However, at Davis and Chico similar placement of dry PAM at the furrow head resulted in the material being quickly covered by eroded sediment during irrigation, and the PAM lost its efficacy. In contrast, dry PAM material placed 100 to 300 feet down the field was not covered by sediment and was effective in reducing sediment concentrations. Proper placement of dry PAM is particularly important for gated pipe systems, where water discharged from gates may cause considerable erosion at the head of the furrow. One way to lessen this erosion is to place irrigation socks over the gates. PAM applications had no effect on irrigation water infiltration rates for the soil types evaluated in the Sacramento Valley, whereas infiltration increased with the addition of PAM in an Idaho study. The cost of applying PAM depends on how it is applied to the field. The cost of dry PAM formulations placed in the furrows depends on the material cost, the furrow spacing, and the number of tablets per furrow. PAM application rates are based on recommended rates for each type of PAM material. The smaller the row spacing, the larger the cost will be for a given acreage. Whether to apply dry PAM directly into the irrigation water or use liquid PAM depends on the target PAM concentration in the irrigation water, the material cost, the flow rate of water into the field, and the injection time. Table 4 shows cost comparisons using different rates and formulations of PAM on an 80-acre furrow-irrigated row crop planted on 5-foot beds, using data provided by a grower. Costs per acre are based on the total field acreage. In this field example, the time for the water to reach the end of the 1,200-foot furrows is 12 hours; there are four irrigation sets; the flow rate into the field is 1,320 gallons per minute; and the furrow flow rate is 11 gallons per minute.
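The liquid-PAM side of this field example can be sketched numerically: an injection rate in ppm is just milligrams of PAM per litre of irrigation water, so the material required follows from the field inflow and injection time. The material price per kilogram below is a hypothetical placeholder (the text's actual costs are in Table 4), and the 2 ppm target reflects the effective range reported for Davis and Chico:

```python
# Liquid PAM requirement for the field example in the text:
# 80 ac, 1,320 gal/min field inflow, 12 h advance time, 4 irrigation sets.
# ppm in water = mg PAM per litre.
GAL_TO_L = 3.785

flow_gpm = 1320          # field inflow, gallons per minute (from the text)
advance_h = 12           # time for water to reach furrow ends (from the text)
n_sets = 4               # irrigation sets (from the text)
target_ppm = 2.0         # 1-2 ppm was effective at Davis and Chico

flow_l_per_min = flow_gpm * GAL_TO_L
pam_g_per_min = flow_l_per_min * target_ppm / 1000.0   # mg/min -> g/min
pam_kg_per_set = pam_g_per_min * advance_h * 60 / 1000.0
pam_kg_total = pam_kg_per_set * n_sets

price_per_kg = 6.0       # hypothetical material cost, $/kg (not from the text)
cost_per_acre = pam_kg_total * price_per_kg / 80.0
print(round(pam_kg_per_set, 1), round(pam_kg_total, 1), round(cost_per_acre, 2))
# → 7.2 28.8 2.16
```

Doubling the target concentration or injecting through the full set rather than terminating at furrow advance scales the material cost proportionally, which is why injection time dominates the liquid-PAM cost in the text's comparison.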

The lowest cost occurred for granules placed in the furrow, while the highest cost was for liquid PAM. The high cost of liquid PAM reflected the cost of the material and the long injection time. Terminating the injection before complete advance to the end of the furrow would reduce the cost per acre but may increase sediment levels. While the cost per acre of applying liquid PAM in irrigation water is higher than the cost of dry PAM formulations, especially at a concentration of 5 ppm, our studies at Chico and Davis showed PAM concentrations of 1 to 2 ppm in the irrigation water to be effective in reducing the sediment load on loam and clay loam soils. As a result, growers should experiment with liquid PAM application rates to determine what works best on their farms, since the efficacy depends on sediment loads as affected by factors such as soil type and irrigation flow rates. The differences in field responses to PAM may be why the NRCS recommends a higher concentration of 10 ppm in irrigation water to reduce sediment loads in surface irrigation runoff; this rate should cover most sediment loads, but it would not be economical. Groundwater is an important water supply for more than two billion people around the world. It also provides more than 40% of the irrigation supply for global agricultural production on approximately 500 million ha of cropland. Given such intense use, it is not surprising that depletion of the resource is occurring in many parts of the world, including the United States and California. Excessive groundwater extraction can decrease water levels, reduce surface-water flows, cause seawater intrusion, spread contaminants, and cause land subsidence. Sustainable resource management requires a combination of reduced extraction and increased recharge.
Some reduced extraction may occur by increasing water use efficiency; however, pronounced rates of extraction in many areas will likely necessitate modifying cropping patterns and fallowing cropland to address problems from over-pumping. Such changes will cause economic distress and likely bring political resistance. While avoiding strong measures to correct groundwater budget imbalances may not be possible, disruption might be reduced by increasing recharge where possible. Elements for successful artificial recharge projects have been reviewed in detail and may be programmatic or site-specific. Programmatic elements include sourcing, conveyance, and placement of recharge water. Sources of recharge water may include urban storm water runoff and recycled water as well as, notwithstanding water rights and permitting considerations, stormflows from streams and releases from reoperated surface-water reservoirs. Considerations include access to either existing canals and ditches, or the land required to construct these structures, as well as routing and capacity specifications. Options for placing water in recharge facilities range from constructing dedicated basins to repurposing existing gravel pits.

The recharge water could also be released to lands primarily used for other purposes but available on a seasonal basis, such as sandy-bottomed drainage features, unlined canals and ditches, or croplands. Site-specific details include: location relative to conveyance and favorable hydrogeology; topography of the ground surface and presence of existing berms; type of irrigation technology present; timing of site availability relative to water available for recharge; and cost to use the land under purchase, rent, or option arrangements. Site-specific details regarding favorable hydrogeology directly relate to characteristics of the groundwater basin under consideration. Spatial variability of infiltration capacity is heavily influenced by the hydraulic conductivities of the soil and shallow geology, as well as the interconnectedness of higher-hydraulic-conductivity deposits at depth. Groundwater storage space is determined by the unsaturated zone thickness and its variations across the basin. The fate of recharged water over time relative to the recharge location can also be important. Recharge at some locations may offset local pumping and increase groundwater storage. At other locations, water entering the subsurface can quickly discharge from the groundwater system to surface water or flow across basin boundaries that are based on governance rather than physical characteristics. Data on the performance of managed aquifer recharge (MAR) on croplands are limited and largely focused on California and the western USA. Dokoozlian et al. conducted a four-year pilot study flooding vineyards in the San Joaquin Valley of California during seasonal grapevine dormancy, observed no impact on crop yield, and concluded that the approach was viable for MAR. Bachand et al. performed a single-season pilot study of on-farm flood flow capture and recharge, also in the San Joaquin Valley, with both perennial and annual crops.
They observed no impacts on crop yield and estimated the unit cost of on-farm recharge to be roughly 3–30 times cheaper than surface-water storage or dedicated recharge basins. Dahlke et al. investigated effects of winter flooding on established alfalfa fields at two locations in the Sacramento Valley of California and found that significant amounts of water could be applied without decreasing crop yield. Additional unpublished studies indicate that almonds may tolerate at least 2 ft of cumulative applied recharge water in a season without detrimental effects, and some grapes have shown little to no productivity decline after more than 20 ft of recharge in one season. Some analysis of scaling up on-farm recharge for larger-scale groundwater management has also occurred. Harter and Dahlke discussed the potential for on-farm recharge projects to improve conditions in California, where groundwater has been stressed by overuse and drought. O’Geen et al. considered requirements for successful projects and presented a spatially explicit soil-agricultural-groundwater banking index for recharge project suitability on agricultural lands in California. Niswonger et al. examined potential benefits from on-farm MAR for a hypothetical groundwater sub-basin in the semi-arid western USA. They developed an integrated surface-water diversion and subsurface flow model to simulate recharge operations and benefits to the groundwater system over a 24-year period. Scenarios considered recharge water from snowmelt in excess of water rights during wet years, applied to croplands during two winter months each year. Among other points, the work concluded that increases in groundwater storage from Ag-MAR operations were spatially related to variations in groundwater depth and withdrawals across a basin, as well as proximity to natural discharge areas, and supported greater pumping supplies for agriculture.
This work addresses planning-level analysis of Ag-MAR using water from reservoir reoperation for periodic flooding of croplands during winter months.
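The applied-water depths tolerated by crops convert directly into recharge volumes at the field scale, since one foot of water on one acre is one acre-foot. A minimal sketch, assuming a hypothetical 80-acre block (the block size is illustrative, not from the studies cited):

```python
# Planning-level conversion of the tolerated recharge depths quoted in the
# text (at least 2 ft on almonds, >20 ft on some grapes) into volumes.
# The 80-acre block size is hypothetical, for illustration only.
block_acres = 80           # hypothetical orchard/vineyard block

almond_depth_ft = 2        # tolerated cumulative depth, from the text
grape_depth_ft = 20        # tolerated cumulative depth, from the text

# 1 ft of water applied over 1 acre = 1 acre-foot of recharge
almond_af = block_acres * almond_depth_ft
grape_af = block_acres * grape_depth_ft
print(almond_af, grape_af)  # → 160 1600
```

Even the conservative almond case recharges 160 acre-feet per 80-acre block per season, which is why cropland spreading can compete with dedicated basins when conveyance is available.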

Product-dependent costs and pricing are common to all products regardless of platform

An additional advantage of this strategy is that exogenous ACE2 would compensate for lower ACE2 levels in the lungs during infection, thereby contributing to the treatment of acute respiratory distress. Several companies in the United States and the EU have developed recombinant ACE2 and ACE2-Fc fusion proteins for preclinical and clinical testing, although all these products are currently produced in mammalian cell lines. The impact of plant-specific complex glycans on the ability of ACE2-Fc to bind the RBD has been studied using molecular dynamics simulations, illustrating the important role that glycosylation may play in the interaction between the S protein and ACE2. Griffithsin is a lectin that binds high-mannose glycans, and is currently undergoing clinical development as an antiviral against HIV-1. However, it also binds many other viruses that are pathogenic in humans, including HSV, HCV, Nipah virus, Ebola virus, and coronaviruses including SARS-CoV and MERS-CoV, and, as recently determined, also SARS-CoV-2. A clinical product in development by the University of Louisville is currently manufactured in N. benthamiana by Kentucky Bioprocessing using a TMV vector. The API is also undergoing preclinical development as a nasal spray for use as a non-vaccine prophylactic against coronaviruses, with clinical evaluation planned for 2020. This candidate PMP antiviral could be deployed under the EUA pathway if found effective in controlled clinical studies. Griffithsin is an interesting example of a product that is ideally matched to plant-based manufacturing because it is naturally produced by a marine alga. Griffithsin has been expressed with limited success in E. coli and tobacco chloroplasts, but better results have been achieved by transient expression in N. benthamiana using A. tumefaciens infiltration or TMV vectors, with expression levels of up to 1 g kg−1 fresh mass and recoveries of up to 90%.

A TEA model of griffithsin manufactured in plants at initial commercial launch volumes for use in HIV microbicides revealed that the process was readily scalable and could provide the needed market volumes of the lectin within an acceptable range of costs, even for cost-constrained markets. The manufacturing process was also assessed for environmental, health, and safety impact and found to have a highly favorable environmental output index, with negligible risks to health and safety. In addition to COVID-19 PCR tests, which detect the presence of SARS-CoV-2 RNA, there is a critical need for protein-based diagnostic reagents that test for the presence of viral proteins and thus report a current infection, as well as serological testing for SARS-CoV-2 antibodies that would indicate prior exposure, recovery, and possibly protection from subsequent infection. The most common formats for these tests are the ELISA and the lateral flow assay. The design and quality of the binding reagents, along with other test conditions such as sample quality, play a key role in establishing the test specificity and selectivity, which determine the proportion of false positive and false negative results. Although the recombinant protein mass needed for diagnostic testing is relatively small, the number of tests needed for the global population is massive, given that many individuals will need multiple and/or frequent tests. For example, 8 billion tests would require a total of ~2.5 kg purified recombinant protein, which is not an insurmountable target. However, although the production of soluble trimeric full-length S protein by transient transfection in HEK293 cells has been improved by process optimization, current titers are only ~5 mg L−1 after 92 h. Given a theoretical recovery of 50% during purification, a fermentation volume of 1,000 m3 would be required to meet the demand for 2.5 kg of this product.
Furthermore, to our knowledge, the transient transfection of mammalian cells has only been scaled up to ~0.1 m3.
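The 1,000 m3 figure follows directly from the quoted titer and recovery, and a quick back-of-envelope check confirms it:

```python
# Back-of-envelope check of the fermentation volume quoted in the text:
# 2.5 kg purified protein needed, titer ~5 mg/L in HEK293 transient
# transfection, 50% recovery during purification.
protein_needed_mg = 2.5 * 1e6                 # 2.5 kg expressed in mg
titer_mg_per_l = 5.0                          # culture titer, mg/L
recovery = 0.5                                # purification yield

effective_mg_per_l = titer_mg_per_l * recovery   # 2.5 mg purified per litre
volume_l = protein_needed_mg / effective_mg_per_l
volume_m3 = volume_l / 1000.0
print(volume_m3)  # → 1000.0
```

Against the ~0.1 m3 scale demonstrated for transient transfection, this is a 10,000-fold gap, which is the quantitative core of the argument for plant-based production of these reagents.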

The transient expression of such protein-based diagnostic reagents in plants could increase productivity while offering lower costs and more flexibility to meet fluctuating demands or the need for variant products. Furthermore, diagnostic reagents can include purification tags with no safety restrictions, and quality criteria are less stringent compared to an injectable vaccine or therapeutic. Several companies have risen to the challenge of producing such reagents in plants, including Diamante, Leaf Expression Systems, and a collaborative venture between PlantForm, Cape Bio Pharms, Inno-3B, and Microbix. Resilience is the state of preparedness of a system, defining its ability to withstand unexpected, disastrous events and to preserve critical functionality while responding quickly so that normal functionality can be restored. The concept was popularized by the 2011 Fukushima nuclear accident but received little attention in the pharmaceutical sector until COVID-19. Of the 277 publications retrieved from the National Library of Medicine on July 9th, 2020 using the search terms “resilience” and “pandemic,” 82 were evenly distributed between 2002 and 2019 and 195 were published between January and July 2020. Resilience can be analyzed by defining up to five stages of a resilient system under stress, namely prevent, prepare, protect, respond, and recover. Here, prevent includes all measures to avoid the problem altogether. In the context of COVID-19, this may have involved the banning of bush meat from markets in densely populated areas. The prepare stage summarizes activities that build capacities to protect a system and pre-empt a disruptive event. In a pandemic scenario, this can include stockpiling personal protective equipment but also ensuring the availability of rapid-response biopharmaceutical manufacturing capacity.
The protect and respond stages involve measures that limit the loss of system functionality and minimize the time until the system starts to recover, respectively. In terms of a disease outbreak, the former can consist of quarantining infected persons, especially in the healthcare sector, to avoid super-spreaders and maintain healthcare system operability.

The response measures may include passive strategies such as the adjustment of legislation, including social distancing and public testing regimes, or active steps such as the development of vaccines and therapeutics. Finally, the recover phase is characterized by regained functionality, for example by reducing the protect and respond measures that limit system functionality, such as production lockdown. Ultimately, this can result in an increased overall system functionality at the end of a resilience cycle and before the start of the next “iteration.” For example, a system such as society can be better prepared for a pandemic situation due to increased pharmaceutical production capacity or platforms like plants. From our perspective, the production of recombinant proteins in plants could support the engineering of increased resilience primarily during the prepare and respond stages and, to a lesser extent, during the prevent and recover stages. During the prepare stage, it is important to build sufficient global production capacity for recombinant proteins to mount a rapid and scalable response to a pandemic. These capacities can then be used during the respond stage to produce appropriate quantities of recombinant protein for diagnostic, prophylactic, or therapeutic purposes as discussed above. The speed of the plant system will reduce the time taken to launch the respond and recover stages, and the higher the production capacity, the more system functionality can be maintained. The same capacities can also be used for the large-scale production of vaccines in transgenic plants if the corresponding pathogen has conserved antigens. This would support the prevent stage by ensuring a large portion of the global population can be supplied with safe and low-cost vaccines, for example, to avoid recurrent outbreaks of the disease.
Similarly, existing agricultural capacities may be re-directed to pharmaceutical production as recently discussed. There will be indirect benefits during the recover phase because the speed of plant-based production systems will allow the earlier implementation of measures that bring system functionality back to normal, or at least to a “new or next normal.” Therefore, we conclude that plant-based production systems can contribute substantially to the resilience of public healthcare systems in the context of an emergency pandemic.

The cost of pharmaceuticals is increasing in the United States at the global rate of inflation, and a large part of the world’s population cannot afford the cost of medicines produced in developed nations. Technical advances that reduce the costs of production and help to ensure that medicines remain accessible, especially to developing nations, are therefore welcome. Healthcare in the developing world is tied directly to social and political will, or the extent of government engagement in the execution of healthcare agendas and policies. Specifically, community-based bodies are the primary enforcers of government programs and policies to improve the health of the local population. Planning for the expansion of a bio-pharmaceutical manufacturing program to ensure that sufficient product will be available to satisfy the projected market demand should ideally begin during the early stages of product development.

Efficient planning facilitates reductions in the cost and time of the overall development process to shorten the time to market, enabling faster recouping of the R&D investment and subsequent profitability. In addition to the cost of the API, the final product form, the length and complexity of the clinical program for any given indication, and the course of therapy have a major impact on cost. The cost of a pharmaceutical product, therefore, depends on multiple economic factors that ultimately shape how a product’s sales price is determined. Plant-based systems offer several options in terms of equipment and the scheduling of upstream production and DSP, including their integration and synchronization. Early process analysis is necessary to translate R&D methods into manufacturing processes. The efficiency of this translation has a substantial impact on costs, particularly if processes are frozen during early clinical development and must be changed at a subsequent stage. Process-dependent costs begin with production of the API. The manufacturing costs for PMPs are determined by upstream production and downstream recovery and purification costs. The cost of bio-pharmaceutical manufacturing depends mostly on protein accumulation levels, the overall process yield, and the production scale. Techno-economic assessment models for the manufacture of bio-pharmaceuticals are rarely presented in detail, but analysis of the small number of available PMP studies has shown that the production of bio-pharmaceuticals in plants can be economically more attractive than in other platforms.
A simplified TEA model was recently proposed for the manufacture of mAbs using different systems, and this can be applied to any production platform, at least in principle, by focusing on the universal factors that determine the cost and efficiency of bulk drug manufacturing.

Minimal processing may be sufficient for oral vaccines and some environmental detection applications and can thus help to limit process development time and production costs. However, most APIs produced in plants are subject to the same stringent regulation as other biologics, even in an emergency pandemic scenario. It is, therefore, important to balance production costs with potential delays in approval that can result from the use of certain process steps or techniques. For example, flocculants can reduce consumables costs during clarification by 50%, but the flocculants that have been tested are not yet approved for use in pharmaceutical manufacturing. Similarly, elastin-like peptides and other fusion tags can reduce the number of unit operations in a purification process, streamlining development and production, but only a few are approved for clinical applications. At an early pandemic response stage, speed is likely to be more important than cost, and production will, therefore, rely on well-characterized unit operations that avoid the need for process additives such as flocculants. Single-use equipment is also likely to be favored under these circumstances because, although more expensive than permanent stainless-steel equipment, it is more flexible and there is no need for cleaning or cleaning validation between batches or campaigns, allowing rapid switching to new product variants if required. As the situation matures, a shift toward cost-saving operations and multi-use equipment would be more beneficial.

An important question is whether current countermeasure production capacity is sufficient to meet the needs for COVID-19 therapeutics, vaccines, and diagnostics.
For example, a recent report from the Duke Margolis Center for Health Policy estimated that ~22 million doses of therapeutic mAbs would be required to meet demand in the United States alone, assuming one dose per patient and using rates of infection estimated in June 2020. The current demand for non-COVID-19 mAbs in the United States is >50 million doses per year, so COVID-19 has triggered a 44% increase in demand in terms of doses.
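The 44% figure is simply the ratio of the two dose estimates; a one-line check using the numbers from the report cited above:

```python
covid_doses = 22_000_000      # estimated US demand for COVID-19 therapeutic mAbs
baseline_doses = 50_000_000   # current annual US demand for non-COVID-19 mAbs

increase = covid_doses / baseline_doses
print(f"Incremental demand: {increase:.0%}")  # 44%
```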

Several of the farmers characterized their role as a responsibility

Nearly half of the farmers expressed that they were at a big turning point in their personal lives when they decided to farm full time. For example, these farmers had either moved across the country to an unfamiliar place, had quit their office job, and/or had lost an important family member or their childhood home.

Farmers interviewed possess embedded knowledge, which is knowledge that comes from living on the land and observing natural processes. To situate this type of knowledge in this particular place, the farmers described their relationship to the land they farmed. Not surprisingly, many of the farmers initially responded with personifications of their land. Initial responses also spoke to farmer perception of their role within the land as well as an expression of romanticism for their land. Among farmers who owned most of the land that they farmed, there was a distinct lack of reference to land ownership; these farmers described their relationship both as a responsibility and as part of a larger human inheritance.

All farmers interviewed mentioned direct experience as being one of the most important modes for understanding their landscape, their farming system, and the management practices essential to their farm operation. The farmers described this accumulation of experience as “learning by doing,” being “self-taught,” or learning by “trial and error.” These farmers added that in learning by experience, they made “a lot of mistakes” and/or faced “many failures” but also learned from these mistakes and failures – and, importantly, that this cycle was crucial to their chosen learning process.

More than half of the farmers interviewed maintained that no guidebook or manual for farming exists; while reading books was viewed as valuable and worked to enhance learning for individual farmers, to farm required knowledge that could only be gained through experience. Moreover, nearly all the farmers also explicitly commented on the fact that they have never stopped learning to farm. Overall, farmers in this study learned primarily through personal experience and over time, making connections and drawing larger conclusions from these experiences. On-farm experimentation was a critical component of knowledge building as well. Experimentation consisted of methodical trials that farmers implemented at small scales on their farms, most often directly on a small portion of their fields. Experimentation was often incited by observation, a desire to learn or to increase alignment with their own values, or a need to pivot in order to adapt to external changes. The farmers experimented to test the feasibility of implementing specific incremental changes to their current farming practices before applying these changes across their entire farm. For example, one farmer relied exclusively on trucking in urban green waste compost as part of the farm’s fertility program when she first started farming. However, one year, she decided to allow chickens to roam in a few of the fields; within a few years, those fields were outproducing every other field on her farm in terms of crop yield. She quickly transitioned the entire farm away from importing green waste compost to rotating chickens on a systematic schedule throughout all fields on her farm. This form of experimentation allowed this farmer to move from relying on external inputs for fertility to cycling existing resources within the farm and creating an internally regulated farming system.
For this farmer, this small experiment was monumental and shifted her entire farm toward a management system that was more in alignment with her personal farming values.

As she described, “When you look at everything on the farm from a communal perspective and apply that concept of community to everything on the farm . . . it literally applies to every aspect of your life too.”

Though this farmer had initially used direct observation to implement raised beds on his farm, as he learned the purpose of raised beds through his own direct experience, he slowly realized – over the course of decades – that raised beds served no purpose for his application. One year, he decided not to shape some of his beds. At the end of the season, he found no real impact on his ability to cultivate or irrigate the row crops on flat ground, and no impact on yield or crop health. In fact, he observed less soil compaction and more aeration due to fewer passes with heavy machinery, and he saved time and fuel. The transition to farming on flat ground took several seasons for this farmer, but over time, his entire farm operation no longer used raised beds to grow row crops. This breakthrough for this particular farmer was informed by personal experience and guided by careful experimentation.

Second to experience, observation also influenced the farmer learning process. Whereas direct experience is usually immersive and embedded within a larger social context, observation is a detached, mechanical form of knowledge production, where a farmer registers what they perceive to transpire. For example, farmers cited observing other farmers in a multitude of ways: “By watching other farmers, I really mean I’d just drive around and look. I’d see what tools they were using;” or “If I saw someone working in the field, I would stop my car on the side of the road to see what people are doing;” or “I really would just observe my father farm,” as well as making observations about the status of their land. Several of the farmers summed up their cycle of learning as a cycle of observation, trial, feedback, observation, trial, feedback, and so on.
The farmers frequently mentioned fellow farmers as a source of learning as well. However, several of the farmers clarified that this type of learning did not necessarily involve talking to fellow farmers. One farmer shared that he learned certain farming practices from a neighbor farmer through distant observation and then borrowed ideas he subsequently applied on his farm; to achieve this, he admitted that he had never really talked to the other farmer directly.

Another farmer noted that he would “go back at night if they [another farmer] left their equipment in the field and just study how it was set up, so I [he] could see what was going on.” Based on interviews with other farmers, farmer-to-farmer knowledge exchange often consisted of detached observation rather than personal conversation or direct contact with another farmer.

During the initial field visit, the farmers shared their definitions of soil health. Across all farmers interviewed, responses appeared mechanical and resembled language disseminated by government entities such as the Natural Resources Conservation Service. As such, most responses emphasized building soil organic matter, promoting biological activity, maximizing diversity, and minimizing soil disturbance. During the in-depth interview, farmers shared specific indicators used to evaluate soil health on their farms. These responses were varied compared to the definitions of soil health and were generally based on observation and personal experience. Generally speaking, the farmers relied heavily on their crops and on the health of their crops to inform them about the basic health of their soil. In fact, the farmers cited using their crop as their foremost indicator for gauging optimum soil health. One farmer shared, “Mostly, I’m looking at the plants, if the color of green on a particular leaf goes from shiny to matte, or slightly gray undertone to it. These subtle cues, I pick up from just looking at my crops.” The growth habit of weeds within and around fields was also cited as an indicator of soil health. For example, one farmer explained, “I’m looking at how the weeds are growing at the edges of the field; in the middle of the field. Is there a difference between what’s happening around the edges and what’s happening in the field?” Some farmers also frequently relied on cover crops as indicators for determining soil health and soil behavior.
When acquiring new fields, for example, the farmers tended to first grow cover crops to establish a baseline for soil health and also to understand soil behavior and/or soil type. The farmers also used cover crop growth habits to gauge the status of soil health and soil fertility for a particular field before planting the next iteration of crops. As one farmer elaborated, “I’m judging a field based on how a cover crop grows. It’s one thing if you’re planting a nutrient-intensive crop in a field, but if you have a cover crop in the field and there’s a swath that’s this tall and another swath that’s only this short, then you know there’s something seriously different about that section of field and the soil there.”

In addition to crop health and cover crop growth patterns, the farmers used other biological and physical indicators to determine the health of their soils. The presence of “soil life,” including earthworms, arthropods, and fungi, was used as a key biological indicator of soil health by most farmers. For most of the farmers, this was often both a visual and tactile experience, as one farmer described, “Being able to pick up a bunch of soil and see the life in it.

If I can see earthworms, if I can see arthropods, if I can see lots of fungus, then I know that’s pretty good soil, that that’s working well.” Soil structure and soil crumble were also flagged as good physical indicators of soil health by more than half of the farmers. Farmers interviewed determined soil structure in a variety of ways, which included: 1) observing soil behavior while on the tractor; 2) touching soil directly, by hand; 3) digging a small hole to observe its vertical profile; or 4) observing how water drains in a field following rain or irrigation. A majority of the farmers explicitly stated that they did not rely on soil tests to provide information regarding the health or status of their soils; only a handful of the farmers communicated that they actively used soil tests. The farmers who did not use soil tests noted that commercially available soil tests were often inaccurate, not calibrated to their scale and/or type of operation, lacked enough data points to be useful, and/or did not provide any additional information that they were not already able to observe readily day-to-day or long term on the farm.

The organic farmers in Yolo County who were interviewed for this study demonstrated wide and deep knowledge of their soil and farming systems. Results show that white, first- and second-generation farmers who farm alternatively accumulate substantive local knowledge of their farming systems – even within a decade or two of farming. These particular organic farmers demonstrated a complex understanding of their physical environments, soil ecosystems, and local contexts that expands and complements other knowledge bases that inform farming systems. While the content and application of farmer knowledge may be locally specific, below we consider aspects of this case study that may be more broadly applicable.
First, we discuss emergent mechanisms for farmer knowledge formation using existing frameworks in the social-ecological systems literature, and we also summarize key features of farmer knowledge that coalesced from the results of this study.

To further examine how farmers in this study acquire and incorporate their knowledge within their farm operation, we first explore emergent mechanisms that underpin farmer knowledge formation. Because farmer knowledge encompasses knowledge of both social and ecological systems – and the interactions thereof – it is useful to draw upon existing frameworks from the social-ecological systems (SES) literature in order to trace the process of farmer knowledge formation among farmers in our case study. Briefly, social-ecological systems recognize the importance of linking social and ecological processes to capture interactions between humans and the environment; importantly, existing literature within SES studies also emphasizes the interactive and adaptive feedback among social and ecological processes that link social and ecological system dynamics. Boons offers a conceptual guide for identifying social-ecological mechanisms, which, adapted to our case study, provides a starting point for tracing aspects of farmer knowledge formation. Here, social-ecological mechanisms for farmer knowledge formation refer to – on the one hand – social and cultural phenomena that influence farmer knowledge and their personal values, and – on the other – farmers’ observations of and experiences with environmental conditions and ecological processes on their farms that influence their knowledge and their values, and the interactions thereof.

The time taken and success of the experimental chick to reach the confined companion birds was recorded

Later in the essay, I model the evolution of crop-based knowledge and its application to other crops explicitly. New ideas generated from growing one crop benefit farm operators in producing other crops as well. The more crops have in common, the more benefit farmers obtain from applying knowledge across crops. If knowledge evolves independently across crops, producers are less likely to master the production of a large number of crops. For example, if learning about almond production is independent of learning about strawberry production, the probability that a farmer is knowledgeable about both is small. So, learning will lead to specialization. Specialization can also be manifested as focusing on a subset of crops that are similar in agronomic characteristics, because farmers can apply knowledge across these crops. Following the same reasoning, this model has implications for the number of farms. Assuming there is a minimum acreage required for each crop to establish production, farmers will exit production if their optimal land demand is smaller than the crop-specific threshold. A faster learning process results in a larger variation in productivity because farmers have a greater probability of increasing their knowledge. For the number of farms that produce a specific crop, a larger variation in productivity means that more farms exit production for lack of knowledge. If demand for a crop is fixed or increases more slowly than the evolution of knowledge, more farms will exit and the number of farms will decrease. The model and implications are presented in section 2.
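The specialization mechanism described verbally above can be illustrated with a toy simulation. This is not the model from section 2; all parameters (learning probability, spillover fraction, competence threshold) are illustrative assumptions:

```python
import random

def simulate(n_farmers=1000, n_seasons=50, p_learn=0.1, spillover=0.5, seed=1):
    """Toy two-crop learning model. Each season a farmer may gain an insight
    in one crop (probability p_learn); a fraction `spillover` of that gain
    carries over to the other crop, reflecting how similar the crops are."""
    random.seed(seed)
    knowledge = [[0.0, 0.0] for _ in range(n_farmers)]
    for _ in range(n_seasons):
        for k in knowledge:
            for crop in (0, 1):
                if random.random() < p_learn:
                    k[crop] += 1.0
                    k[1 - crop] += spillover  # cross-crop knowledge spillover
    return knowledge

# With high spillover, knowledge in the two crops moves together, so many
# farmers become competent in both; with no spillover, mastery of both is
# rarer and farmers effectively specialize.
for s in (0.0, 0.9):
    k = simulate(spillover=s)
    both = sum(1 for a, b in k if a >= 5 and b >= 5)
    print(f"spillover={s}: farmers competent in both crops = {both}")
```

Under the same random draws, raising the spillover fraction strictly increases the count of farmers competent in both crops, which is the independence-versus-spillover contrast the text draws with almonds and strawberries.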
Numerical simulations illustrating the effect of demand- and supply-side factors on the equilibrium path of farm structure are included in section 3, and section 4 concludes.

Hens housed in conventional cage systems produce the majority of eggs worldwide; however, in recent years many countries have shifted to alternative production systems, such as cage-free aviaries.

Conventional cages house small groups of about 6-7 hens in cages with about 67-86 square inches of space per bird. Cage-free aviaries, on the other hand, house hens in large flocks with approximately 144 square inches of space per hen. The shift away from conventional cages is due to increasing consumer preference for cage-free eggs, resulting from consumers’ concerns about the welfare of laying hens housed in conventional cages. Consumers perceive hens from cage-free systems to have enhanced welfare when compared to hens in conventional cages, despite many consumers being unaware of the meaning behind egg labels or the differences between production systems. This public perception has also coincided with several states passing legislation that aims to phase out the use of conventional cages. California’s Proposition 12 mandates that all eggs produced and sold in California be cage-free by 2022. Similar legislation has followed, including bills passed in Colorado, Michigan, Oregon, and Washington. Cage-free systems have numerous welfare benefits when compared to conventional cages, including the opportunity to perform natural behaviors such as wing flapping, dust bathing, perching, and flying. However, cage-free systems also come with drawbacks. Adult hens housed in commercial aviaries are prone to injuries including keel bone fractures, which may occur during collisions with tiers, perches, and other features of the aviary. There is a great need to determine why collisions are so common in cage-free aviary systems and what management solutions can be implemented to reduce injuries in cage-free flocks. Young hens, or pullets, are housed in a rearing system for the first 15 to 18 weeks of life before being moved to their adult laying system. The complexity of the pullet rearing environment and early access to vertical space has been shown to play a role in reducing keel bone fracture prevalence in adult hens.
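For comparison, the per-hen space allowances quoted above work out as follows (simple arithmetic on the figures in the text):

```python
# Space per hen: 67-86 in^2 in conventional cages vs ~144 in^2 in aviaries.
aviary_in2 = 144
for conventional_in2 in (67, 86):
    ratio = aviary_in2 / conventional_in2
    print(f"vs {conventional_in2} in^2: aviary provides {ratio:.2f}x the space")
```

That is, cage-free aviaries give each hen roughly 1.7 to 2.1 times the floor space of a conventional cage.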

Gunnarsson et al. proposed that rearing chicks on the floor, without access to perches or platforms, impairs the development of spatial cognition. One possibility is that this impairment to spatial cognition could contribute to failed landings and poor navigation of commercial aviaries, increasing the incidence of fractures. Many studies since Gunnarsson et al. have continued to provide evidence that the complexity of the rearing environment influences performance on spatial cognition tasks. However, none of these studies has specifically looked at the development of depth perception and its potential relation to early exposure to vertical space. A deficit in the precision of depth perception could explain the occurrence of failed landings and falls, due to an inability to properly gauge the distance to fly or jump.

The visual cliff has been used for the evaluation of depth perception and the differential visual depth threshold since its invention by Walk et al. in 1957. It utilizes two depths, a shallow side and a deep side, and can be adapted for a variety of species. The subject is placed between the shallow and deep sides, in the center of the table. The shallow side is typically level with or a few centimeters below the starting point of the subject; the bottom of the deep side, however, is far below the subject. The deep side is covered with a sheet of plexiglass so the subject can perceive this depth but, unbeknownst to the subject, is not in danger of falling. Many designs employ checkerboard patterns to provide visual perspective, allowing for an easy determination of depth. The behavior of the subject is recorded to determine its ability to differentiate the depths and avoid “falling down” the perceptual precipice created by the deep side of the visual cliff.
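Choice data from a visual cliff test are typically compared against the 50:50 chance expectation. A minimal sketch using an exact one-sided binomial test; the counts below are hypothetical, for illustration only, and are not data from the studies discussed:

```python
from math import comb

def binom_p_one_sided(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the probability of seeing k or
    more shallow-side choices if subjects chose sides at random."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical example: 17 of 20 chicks step toward the shallow side.
n_chicks, shallow = 20, 17
p_value = binom_p_one_sided(shallow, n_chicks)
print(f"one-sided p = {p_value:.4f}")  # 0.0013 -> clear shallow-side preference
```

The exact test is appropriate here because per-animal trials are few and each subject contributes a single binary choice.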

The visual cliff is a widely used test of depth perception that has the advantage of involving a clear, straightforward choice: does the animal move to the shallow or the deep side? The test is not physically challenging for the subject and no training is required. Therefore, many animals can be tested and there are no confounding variables of learning or physical ability. Despite these advantages, the visual cliff is not free of flaws. The plexiglass over the “deep side” of the cliff can provide tactile information about the presence of a barrier if the subject comes in contact with the surface. There is also a potential for reflection on the plexiglass, giving a visual indicator of an additional surface over the “deep side.” If the subject can detect the plexiglass barrier through either visual or tactile information, the illusion of a cliff ceases and the test no longer compares the subject’s reaction to differential depths. Using the visual cliff paradigm, chickens have been found to have excellent depth perception, preferring the shallow side of a visual cliff significantly more than the deep side from as early as one day of age. Additionally, four-day-old chicks readily jump down a drop-off of less than 10 inches to join their companions but hesitate if the drop is more than 16 inches. Chicks demonstrate a 2-inch threshold for differential visual depth, meaning they perceive a difference in depth only when the discrepancy is 2 or more inches. Unlike humans, chickens do not require binocular vision, or stereopsis, to perceive depth. Chicks with monocular vision are able to perceive depth as well as their binocular counterparts, with both groups choosing the shallow over the deep side of the visual cliff significantly more often. Although all birds have binocular vision, there is no evidence that birds other than certain birds of prey use stereopsis to acquire information on relative depth.
Instead, birds use monocular cues such as motion parallax and interposition to judge relative depth. Walk and Gibson provided evidence that the ability to perceive depth is innate in multiple species. However, certain factors during development, such as light and monocular deprivation, can cause impairments in depth perception. This raises an interesting question: can other differences in visual experience, such as reduced experience with height and depth, alter depth perception abilities?

The relationship between spatial cognition and rearing environment in laying hens has been evaluated in previous studies using a variety of tests, including the jump test, hole board task, radial maze, and detour paradigm. In order to better understand these tests, the definition of spatial cognition must be addressed. Spatial cognition is a multifaceted topic entailing the perception, processing, and interpretation of objects, space, and movement. Spatial cognition encompasses many different aspects of visual perception, such as spatial memory and navigation, determining the orientation of objects, and perceiving depth. Despite the diverse array of topics under this broad term, many researchers outside the field of cognition discuss spatial cognition without specifying which aspect they are investigating. By failing to properly define terms or control for a precise aspect of spatial cognition, researchers can inadvertently measure unintended variables and draw inaccurate conclusions. Over the last two decades there has been a great deal of interest in how the complexity of rearing environments may affect the development of the spatial cognition skills and abilities of laying hens. Multiple tests have been used to evaluate different aspects of spatial cognition, including the jump test, hole board test, radial maze, and detour task.

Each test has targeted objectives aimed at specific cognitive processes, and comes with advantages and disadvantages.

Gunnarsson et al. developed a jump test to evaluate spatial cognition in laying hens; however, the specific aspect of spatial cognition being explored was not addressed. The jump test involves placing a feeder on an elevated platform, with or without a second, lower platform to assist in access. Pullets’ latency to reach the food reward is measured to assess their ability to navigate elevated structures. Chicks were raised with either early access or late access to perches of varying heights. At 9 weeks old, all birds were placed on perches 2 to 4 times daily to encourage perch use. At 15 weeks, all food in the home pen was placed on a 60 cm high tier, which the pullets could access from the ground or via nearby perches. At 16 weeks old, food-deprived birds were presented with a feeder on an elevated tier in a testing pen. The height of this tier increased by 40 cm each trial, up to 160 cm, and an intermediate tier was provided to aid access for two of these trials. The time it took for the birds to reach the food after entering the testing pen was recorded and used to measure their success and ease in reaching the elevated food reward. There was no difference in the time that it took birds from different rearing treatments to successfully reach the food on the 40 cm tier. However, there was a significant difference between the treatments when the difficulty of the task increased, with more successes and shorter latencies to access the food in the early-access-to-perch group. The number of birds that successfully reached the tier decreased with increasing difficulty for both rearing treatments. Norman et al. also conducted a jump test with chicks reared from hatching in either a control or an enriched treatment.
The control treatment had no elevated structures while the enriched treatment had eight wooden perches arranged in an A-frame structure and a ramp leading up to a platform with additional perches. Like the Gunnarsson et al. study, this test also utilized staggered tiers that could be used to reach a reward; however, the tiers were opaque and companion chicks were used as a reward instead of food. Chicks were tested at 14–15 and 28–29 days of age by placing them in a compartment with familiar chicks from their home pen in a mesh holding container. For the first test the companion chicks were on a 20 cm high tier and there were no other tiers present. For the second test, the 20 cm tier remained; however, the companion chicks were placed on an additional 40 cm high tier. Norman et al. found that there was a significant difference between age groups in success at reaching the reward, but there was no effect of rearing treatment. In addition, there were no significant differences in the latency to complete the jump test between treatments or ages.

The estimated impacts on all five environmental dimensions are positively correlated with farm acreage

It established national standards for organic certification and took enforcement actions if there were violations of the standards. Organic growers are prohibited from using certain production practices that have significant negative environmental impacts. However, the regulation of organic agriculture is process-based, not outcome-based, and the regulatory agency does not monitor or enforce standards on environmental outcomes such as biodiversity and soil fertility . Another source of concern comes from the way organic farming practices may change as the sector grows. As pointed out by Läpple and Van Rensburg , late adopters of organic agriculture are more likely to be profit driven and care less about the environment than early adopters. Moreover, the prices of organic products remained at least 20% higher than their conventional counterparts in 2010 , which could encourage additional entry. Therefore, unintended consequences might emerge and organic agriculture could be less environmentally friendly than commonly perceived. There is some evidence of this in the scientific literature. Organic agriculture has been reported to have higher nitrogen leaching and larger nitrous oxide emissions per unit of output than conventional agriculture . Certain pesticide active ingredients used in organic agriculture have been found to be more toxic than conventional AIs in laboratory environments and field experiments . For example, Racke reviewed the discovery and development of spinosad, a natural substance used to control a wide variety of pests, and observed that spinosad was approved based on its low mammalian toxicity. However, Biondi et al. found that spinosad is more harmful to natural predators than pesticides used commonly in conventional agriculture. As the case of spinosad demonstrates, pesticide use in organic agriculture could impose greater environmental impacts than conventional agriculture in one or more dimensions.

Therefore more evidence is needed to evaluate the environmental impact of organic farming practices and its determinants. In this essay, I provide novel evidence regarding the impact of pesticide use in organic and conventional agriculture on different dimensions of environmental quality, and quantify the difference between the environmental impacts of pesticide use in the two production systems in California. In addition, I examine the relationships between farmers’ pesticide-use decisions and their experience and farm size. California is the leading state for organic agriculture in the U.S., accounting for 12% of certified organic cropland and 51% of certified organic crop value nationally in 2016 . The number of certified operations and cropland acreage in California doubled between 2002 and 2016. State organic crop sales increased almost tenfold at the farm level, in real terms, during the same time period . This essay uses field-level pesticide application records and a fixed-effects model to analyze changes in the environmental impacts of pesticide use for both organic and conventional fields over 21 years. The database covers all registered agricultural pesticide applications in California, and contains over 48 million pesticide application records for over 64,000 growers and 781,000 fields from 1995 to 2015. In total, data from more than 55,000 organic fields and 11,000 growers who operated organic fields are analyzed in this essay. The Pesticide Use Risk Evaluation model is used to assess the environmental impacts of pesticide use . The results show that the environmental impact of pesticide use per acre is lower in organic fields across all of the environmental dimensions for which PURE indexes are defined: surface water, groundwater, soil, air, and pollinators. The difference in the impact on air is the smallest because natural pesticides are not systematically different from synthetic pesticides in terms of volatile organic compound emissions.
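The fixed-effects strategy described above can be sketched as a within transformation: demean the outcome (a per-acre impact index) and the regressor inside each grower, then run ordinary least squares on the demeaned data, which absorbs time-invariant grower heterogeneity. Below is a minimal sketch with synthetic data; the grower counts, the organic indicator, and the -3.0 "organic effect" are invented for illustration and are not estimates from this essay.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic panel: growers (the fixed effects) each operating several fields,
# some organic and some conventional. All magnitudes here are invented.
n_growers, fields_per_grower = 200, 5
grower = np.repeat(np.arange(n_growers), fields_per_grower)
organic = rng.integers(0, 2, size=grower.size).astype(float)
grower_effect = rng.normal(0, 2, n_growers)[grower]  # unobserved heterogeneity
beta_true = -3.0                                     # organic lowers the index
y = 10 + beta_true * organic + grower_effect + rng.normal(0, 1, grower.size)

def within_demean(v, groups):
    """Subtract each group's mean (the 'within' transformation)."""
    means = np.bincount(groups, weights=v) / np.bincount(groups)
    return v - means[groups]

y_dm = within_demean(y, grower)
x_dm = within_demean(organic, grower)
beta_hat = (x_dm @ y_dm) / (x_dm @ x_dm)  # OLS slope on demeaned data
print(round(beta_hat, 2))                 # close to beta_true
```

The demeaning step removes anything constant within a grower, so the estimate of the organic effect is not confounded by grower-level differences in skill or land quality.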

The measure of farmer experience is positively correlated with estimated impacts per acre on surface water and groundwater, and negatively correlated with estimated impacts on soil, air, and pollinators, but the differences associated with variation in experience are smaller, by orders of magnitude, than the estimated effect of whether the field is organic. Environmental impacts and the difference between organic and conventional production vary by crop. Four major California crops, lettuce, strawberries, processing tomatoes, and wine grapes, are examined in detail. The benefit from organic agriculture is partially paid for by consumers through a price premium for organic products . Whether organic production is the most cost-effective way to reduce the environmental impacts of agriculture is not the focus of this essay. However, readers can gain some insight into the performance of organic agriculture by comparing the cost of alternative tools and their effects on environmental quality. The contribution of this essay is threefold. First, it links the environmental impacts of organic crop production directly to pesticide applications. To the best of my knowledge, no other studies have examined this relationship. Previous literature provided abundant evidence on the environmental impact of organic agriculture as a system but failed to quantify the impact of specific farming practices . Here, AIs and their contributions to environmental impacts are identified individually, which enhances the understanding of the differences in pesticide use between organic and conventional agriculture and how they vary across crops. Second, this essay uses the PURE model to assess the environmental impacts of pesticide use .
Compared to the risk quotient approach, which is another common method in the literature , the PURE model provides a more salient measure of environmental impacts by incorporating additional environmental information, such as the distance from the pesticide application to the nearest surface water. The PURE model calculates risk indices for five environmental dimensions: surface water, groundwater, soil, air, and pollinators. Third, by using the Pesticide Use Report database, this essay’s findings are based on the population of pesticide application records.

Prior works include meta-analyses that cover numerous field experiments and commercial operations examined for a crop or a small geographic area over a limited period of time. California’s agriculture is characterized by many crops and diverse climate and soil conditions. The comprehensive coverage of the PUR database eliminates any sample selection issue. The rest of the essay is organized as follows: section 2 introduces the PUR database and PURE model and presents summary statistics of historical pesticide use, section 3 provides the identification strategy to tackle grower heterogeneity, section 4 presents industry-level and crop-specific estimation results, and section 5 concludes. The Pesticide Use Reports database, created and maintained by the California Department of Pesticide Regulation, is the largest and most complete database on pesticide and herbicide use in the world. Growers in California have reported information about every pesticide application since 1990. In this essay, pesticide uses prior to 1995 are not evaluated due to data quality issues identified previously . More than 3 million applications are reported annually. Reports include information on time, location, grower ID, crop, pesticide product, AIs, quantity of product applied, treated acreage, and other information for every agricultural pesticide application. A “field” is defined as a combination of grower_id and site_location_id, which is a value assigned to each parcel by its grower. To obtain the USDA organic certification, growers must meet requirements on several aspects of production: pesticide use, fertilizer use, and seed treatment. The requirement on pesticide use is burdensome because pesticides approved in organic agriculture are more expensive and less effective. Pesticide and fertilizer AIs used in organic agriculture undergo a sunset review by the National Organic Standards Board every five years, and the main criterion is whether the ingredient is synthetic or not.
In general, it is not reasonable for growers to use those pesticides exclusively but not apply for the organic certification, given the higher price and lower efficacy of those pesticides. Therefore, growers who comply with the NOP’s requirement on pesticide use can be viewed as equivalent to certified organic growers for the data-sorting purpose. In Wei et al. , the authors located individual organic fields using this approach. Namely, any field without a prohibited pesticide applied for the past three years is considered organic. Their paper compared organic crop acreage from PUR to other data sources and showed that pesticide use records alone can be used to identify organic crop production. Environmental conditions for each field and toxicity values for each chemical are used to calculate the value of the PURE index developed by Zhan and Zhang . The PURE index has been used in previous studies to represent environmental impacts of pesticide use . The PURE index quantifies environmental impacts of pesticide use in five dimensions: surface water, groundwater, soil, air, and pollinators. For each dimension, the PURE index is calculated on a per acre basis and varies from 0 to 100, where 0 indicates trivial impact and 100 represents the maximum impact. For all dimensions except air, the PURE index is based on the ratio of the predicted environmental concentration to the toxicity for the end organisms. The PEC estimates the effect of the pesticide application on the concentration of the chemical in the environmental sample. The toxicity values cover both acute measures, such as LD50, and long-term measures, such as No Observed Effect Concentration and acceptable daily intake for humans. End organisms are fish, algae, and water fleas for surface water, humans for groundwater, earthworms for soil, and honeybees for pollinators.
The PURE index for air is calculated based on potential VOC emissions, which is a common measure of airborne pollutants emitted from agriculture production .
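The ratio-based logic behind the non-air dimensions can be illustrated with a toy function. This is not the actual PURE formula of Zhan and Zhang, only the idea it is built on: a higher predicted environmental concentration relative to the toxicity endpoint yields a higher index, capped at 100. The example numbers are hypothetical.

```python
def risk_index(pec, toxicity_endpoint, max_ratio=1.0):
    """Toy risk index in the spirit of a PEC/toxicity ratio, scaled to 0-100.

    NOT the actual PURE model: just the core idea that a higher predicted
    environmental concentration (PEC) relative to the end organism's
    toxicity endpoint means a higher index, capped at the maximum of 100.
    """
    ratio = pec / toxicity_endpoint
    return min(100.0, 100.0 * ratio / max_ratio)

# Hypothetical numbers: PEC of 0.02 mg/L against a 0.5 mg/L fish endpoint
print(risk_index(0.02, 0.5))  # 4.0
print(risk_index(5.0, 0.5))   # capped at 100.0
```

The real PURE model additionally incorporates field-specific environmental information (for example, distance to the nearest surface water), which this sketch omits.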

The emission of VOCs is defined as the percentage of mass loss of the pesticide sample when heated. Unlike toxicity, VOC emissions do not have a strong link to whether the AIs are synthetic or natural. For example, the herbicide Roundup®, which contains glyphosate, has zero VOC emissions because there is no evaporation or sublimation. Meanwhile, sulfur products, which are widely used in organic agriculture, also have zero VOC emissions. The PURE index only captures impacts from active ingredients in pesticides. Inert ingredients, which are not covered in this essay, have also been found to have negative impacts on the environment and on pollinators in particular . Conventional and organic growers adopt different pest management practices. As specified by the NOP, organic growers shall use pesticides only when biological, cultural, and mechanical/physical practices are insufficient. Chemical options remain essential for organic pest management programs. Currently over 7,500 pesticide products are allowed for use in organic crop and livestock production, processing, and handling. In Figure 1.1, the acreage treated with different types of pesticides is shown on the left y-axis for both conventional and organic fields. Treated acreage is divided evenly among types for AIs that belong to multiple pesticide types, such as sulfur, which is both a fungicide and an insecticide. The average number of pesticide applications per acre, which is defined as the total treated acreage divided by the total planted acreage, is plotted against the right y-axis in both panels. This is a common measure of pesticide use that controls for differences in application rate among pesticide products . If multiple AIs are used in a single application, the treated acreage is counted separately for each AI. Planted acreage remained stable for conventional agriculture over the study period, so changes in the average number of applications per acre were due to changes in treated acreage.
Organic planted acreage grew dramatically, but treated acreage increased even more. The number of applications per organic acre rose from 2 to 7. Figure 1.1 provides a highly aggregated view of pesticide use, as different pesticide products with different AIs and application rates are used in conventional and organic fields. As Figure 1.1 shows, insecticide is the most used pesticide type, accounting for 36% and 44% of total treated acreage in conventional and organic agriculture, respectively, in 2015. Herbicide is the second most used type of pesticide in conventional fields. In contrast, organic growers’ use of herbicides is limited. Fungicide is another major pesticide type, and sulfur is the most used fungicide AI in both conventional and organic fields. Sulfur is an important plant nutrient, fungicide, and acaricide in agriculture. The pesticide group “others” primarily includes plant growth regulators and pheromones. Disaggregating insecticide use provides more detailed insight into the nature of the difference between conventional and organic production. Figure 1.2 plots the insecticide-treated acreage by physiological functions affected . Only three groups of insecticides are available to organic growers, while six are available to conventional growers. In conventional agriculture, 67% of treated acreage in 2015 was treated with insecticides that targeted nerves or muscles, which include organophosphates, pyrethroids, and neonicotinoids.
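The two bookkeeping rules used for Figure 1.1 — splitting treated acreage evenly among pesticide types for multi-type AIs such as sulfur, and dividing total treated acreage by planted acreage to get applications per acre — can be sketched as follows. The AI-to-type mapping and the acreage figures are invented for the example.

```python
from collections import defaultdict

# Hypothetical application records: (active ingredient, treated acres).
# Sulfur is both a fungicide and an insecticide, so its acreage is split.
AI_TYPES = {"sulfur": ["fungicide", "insecticide"], "glyphosate": ["herbicide"]}
records = [("sulfur", 120.0), ("glyphosate", 40.0), ("sulfur", 60.0)]

treated_by_type = defaultdict(float)
for ai, acres in records:
    types = AI_TYPES[ai]
    for t in types:                    # split evenly among the AI's types
        treated_by_type[t] += acres / len(types)

planted_acres = 100.0
total_treated = sum(acres for _, acres in records)
apps_per_acre = total_treated / planted_acres  # treated / planted acreage
print(dict(treated_by_type), apps_per_acre)    # fungicide 90, insecticide 90,
                                               # herbicide 40; 2.2 apps/acre
```

Counting each AI's treated acreage separately when several AIs are applied together, as the text describes, follows the same pattern: each (AI, acres) record contributes in full before any type splitting.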

Our results demonstrate that this measurement is reproducible and provides a useful metric of shoot growth

The second two chapters describe a novel high-precision O2 analyzer that was initially developed to measure AQ and a related general-purpose data acquisition system that was developed alongside the O2 analyzer. Automated image analysis techniques enable the non-destructive phenotyping of large plant diversity panels. The 1001 Genomes Project is one example of such a panel; it comprises 1135 sequenced natural accessions of Arabidopsis thaliana Heynh. sampled from a wide range of environments . Combining these high-quality genetic resources with high-throughput phenotyping methods enables powerful genome-wide association studies. One technique for evaluating the developmental traits of such large diversity panels is growing the accessions in agar-filled culture dishes. This allows root traits to be quantified quickly using high-throughput image analysis methods. The plants are not destroyed or contaminated in the process and can therefore be photographed at different stages of growth. One disadvantage of this approach is that the rosettes are askew, so rosette area is usually not assessed even when the leaves are visible in the photographs. Quantifying both root and shoot characteristics is usually preferable because many plant processes involve both organs; for example, nitrogen acquisition and allocation involves root uptake from the rhizosphere, assimilation into organic forms in both the roots and shoots, and translocation throughout the plant . Studying this process requires precise measurements of both the roots and shoots, which has previously been technically difficult. Here, we show that leaf area measured from plate images is accurate even when the rosettes are somewhat askew and can therefore be used for rapidly phenotyping large image sets of Arabidopsis seedlings.
As part of a larger study to examine the genetic basis of plant adaptation to different nitrogen forms and concentrations in the rhizosphere , we measured leaf area from more than 2000 images of Arabidopsis seedlings on agar plates.

To determine whether rosette area measurements taken from plate images are sufficient for shoot phenotyping, we compared them to both measurements from images of the rosettes photographed from directly overhead and seedling mass. To compare the overhead and plate image rosette area measurements, six different natural Arabidopsis accessions were planted on agar plates containing a base nutrient solution consisting of 2 mM CaCl2, 2 mM KH2PO4, 2 mM MgSO4, 1 mM KCl, 0.75 mM MES, 0.5 μM CuSO4, 2 μM MnSO4, 25 μM H3BO3, 42 μM FeNaDTPA, 2 μM ZnSO4, 0.5 μM H2MoO4, and 0.8% agar. Different concentrations of sucrose were added to the base media to ensure that there would be a variety of different-sized seedlings. After planting, the plates were kept at 4°C for four days and then placed into a growth chamber with a 14-h day/10-h night cycle. After 12 days of growth, rosette area of the plants was measured in two ways: first from photographs of the seedlings in the plates, and second from a photograph of the rosettes placed upright on paper. All photographs from this image set were taken with a Pixel 3A cellphone camera . A total of 58 seedlings were grown and measured this way. As part of a larger study investigating plant responses to different nitrogen forms and concentrations in the rhizosphere, we quantified the rosette area from plate images and compared it with seedling mass. A total of 148 Col-0 seedlings were grown under 10 different nitrogen conditions with either nitrate or ammonium as the sole nitrogen source at concentrations ranging from 0.05 mM to 5 mM. After 12 days of growth, the plates were photographed and the seedlings, including both roots and shoots, were excised and weighed. As another part of the aforementioned study, more than 2000 images of Arabidopsis seedlings on agar plates were collected.
This image set was generated from an experiment in which the 1135 natural accessions of the 1001 Genomes Project were grown under four different nitrogen conditions: 0.1 mM and 1 mM nitrate using KNO3 as the sole nitrogen source and 0.1 mM and 1 mM ammonium using NH4HCO3 as the sole nitrogen source.

The seedlings were grown under long-day conditions . The closed plates were photographed 12 days after planting using an EOS Rebel digital camera fitted with an 18–55 mm EF-S lens . The root traits, including primary root length and number of lateral roots, were estimated from the images using RootNav, image analysis software that allows the semiautomated quantification of complex root system architectures . Some of the image sets did not have a red two-dimensional scale present, making them unsuitable for rosette area measurement using existing methods such as Easy Leaf Area . We developed our own image processing workflows in Python, which were able to use a scale if it was present or, alternatively, to detect the area of the agar plate to serve as a scale. These workflows use the PlantCV package for most of the image-processing functions. The general steps in the workflow are cropping the image to the plate region, leaf identification and pixel counting, and scale identification. Cropping the image to the region of interest was done to save processing time and eliminate background features that could be mistaken for objects of interest. This was done using binary thresholding or edge detection to separate the agar-filled culture dish from the background . The choice between edge detection and binary thresholding to identify the plate depended on the image set used. The detection of the agar plate also allows for the rotation of the image if the plate is not correctly aligned within the image. Leaf identification was performed using binary thresholding and object detection . The specific color channel and threshold value used to identify the leaves varied between the different image sets due to different background and lighting conditions, but as long as the images within a set are taken against the same background and with the same lighting conditions, these values should remain consistent for processing the entire set.
For the validation images, the “C” channel of CMYK color space was used to identify the leaves, whereas in the diversity panel image set the “B” channel of L*a*b* color space was used.
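The core of the workflow — threshold one color channel, count leaf pixels inside an ROI, then convert pixels to area using the plate as a scale — can be sketched with plain numpy. This is an illustrative stand-in, not the authors' PlantCV pipeline: the synthetic channel, the 128 cutoff, and the 12 cm plate width are all invented for the example.

```python
import numpy as np

# Toy stand-in for one color channel of a cropped plate image: in this
# invented example the leaves are darker than the agar background.
channel = np.full((100, 100), 200, dtype=np.uint8)  # background
channel[10:20, 30:40] = 50                          # a 10x10-pixel "rosette"

# Binary threshold (the channel and cutoff are image-set specific, as noted)
mask = channel < 128

# Restrict to an ROI covering the top of the plate, where rosettes sit
roi = np.zeros_like(mask)
roi[:50, :] = True
leaf_pixels = int(np.count_nonzero(mask & roi))

# Convert pixels to area using the plate itself as the scale: if a plate
# 12 cm wide spans 100 pixels, each pixel is (12/100) cm on a side.
cm_per_px = 12.0 / channel.shape[1]
rosette_area_cm2 = leaf_pixels * cm_per_px ** 2
print(leaf_pixels, round(rosette_area_cm2, 2))  # 100 pixels, 1.44 cm^2
```

In the real pipeline the mask would come from a PlantCV threshold on the chosen channel (CMYK "C" or L*a*b* "B", as above), and the detected objects would additionally be clustered into the six shoots per plate.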

To determine the appropriate threshold values for an image set, we used the plot histogram function in PlantCV. This function is used to visualize the range of pixel intensity in the color channel of interest. For image sets with lower contrast, the histogram equalization function was used to make thresholding easier. To simplify leaf identification, an ROI was defined for the top section of the cropped image where the leaves are found. Objects detected within the ROI were grouped into six shoots using clustering. The image moment of each shoot in the binary image was used to calculate the number of pixels that made up the leaves in each seedling. Scale identification was performed either by using a reference scale that was placed within the image or using the plate itself as a scale. For dedicated reference scales, the same general process that was used to identify and count pixels of the leaves was used for the scale. Once the number of pixels in the scale or the number of pixels making up the plate were measured, the rosette area could be calculated. For the large image set of the diversity panel, we automated the workflow in a Python script, which took approximately four hours to process all 2000 images. We were able to estimate rosette area for over 90% of the seedlings that successfully germinated, resulting in 8964 individual measurements. Many of the seedlings that were not measurable had fallen below the middle of the plate and were not within the defined ROI. It is important to note that the parameters used for the various transformations, such as thresholding, grayscale conversion, and scale calculation, are specific to the image set.
These parameters would need to be modified when using a different image set, but the general steps would still apply. The strong positive linear relationship between the rosette area measurements taken from the plate images and those taken from photographs of excised rosettes demonstrates that using plate images for shoot trait analyses can yield meaningful phenotype data with minimal effort. While the correlation between rosette area measured from plate images and seedling mass was not as strong, it was still sufficient to indicate that this is a viable method for estimating plant growth. A lower correlation between these measurements is also to be expected because the seedling mass includes both shoot and root mass and is therefore not as specific to shoot growth as is the rosette area. We were also able to apply this analysis to an image set generated for the purpose of root phenotyping, allowing us to obtain additional valuable phenotypic information. The rosette area measured using this technique across a large Arabidopsis diversity panel was found to be heritable and showed a significant response to rhizosphere nitrogen form and concentration. These results were in line with other developmental traits measured using established techniques, such as primary root length measured using RootNav . Agar plate images are widely used for the non-destructive measurement of Arabidopsis root traits. Here, we showed that useful shoot trait information can also be collected from these same images, enabling simultaneous root and shoot phenotyping. This can be done quickly and is easily automated, making it suitable for large image sets. The images can be captured and analyzed without the need for specialized imaging equipment or dedicated phenotyping facilities. The agar plate itself can be used as a scale, enabling the analysis of image sets without dedicated two-dimensional scales.
With the procedures described here, image sets generated for root phenotyping in other studies might also provide data about shoot phenotypes without much additional effort.

The field of robot guidance has seen great advancement thanks to advances in Machine Vision and Machine Learning. Palletizer systems, composed of vision-guided pick-and-place robots along a conveyor, have become commonplace in manufacturing and logistics, reducing labor costs and handling heavier loads than humans are capable of handling . In the field of robotic surgery, neural networks have been developed to automate repetitive tasks based on input from cameras, reducing surgeon fatigue during long procedures. Permanent magnets could be a useful positioning aid in cases where clear line of sight is not available. For example, surgical robots have been incorporated in the insertion of pedicle screws during spinal fusion surgery, but only as far as aligning a surgical tool to the spine. The actual insertion of screws is highly dependent on the feel and experience of the surgeon. If some part of the screw could be magnetized, magnetometers could provide useful information about its position in the body. There has been some work on magnetic object tracking. Wahlstrom used an array of 4 magnetometers to track magnets from the opposite side of a piece of plywood using an Extended Kalman Filter, with RMS position error of 4.95 mm and orientation error of 1.85 degrees. This work will attempt to calculate magnet positions with greater accuracy using a larger array of magnetic field readings. To avoid the increased cost of using a large number of magnetometers simultaneously, one magnetometer is positioned at different locations in 3D space. With readings taken from a large grid of points, existing nonlinear optimization algorithms can be used to compute the position and orientation of the magnets. In order to carry out this task, a system had to be designed and built to position a magnetometer in 3 dimensions.
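The estimation problem described above — fit a magnet's position to a grid of field readings — can be sketched with a point-dipole field model. The magnet position, dipole moment, and grid geometry below are invented for the demo, and a coarse grid search stands in for the nonlinear least-squares optimizers the text refers to; a real implementation would use something like Levenberg-Marquardt and would also fit the moment orientation.

```python
import numpy as np

MU0_4PI = 1e-7  # mu_0 / (4*pi) in T*m/A

def dipole_field(sensor_pos, magnet_pos, moment):
    """Magnetic flux density of a point dipole, positions in metres."""
    r = sensor_pos - magnet_pos
    d = np.linalg.norm(r, axis=-1, keepdims=True)
    rhat = r / d
    return MU0_4PI * (3 * rhat * (rhat @ moment)[..., None] - moment) / d**3

# Simulated readings from one magnetometer moved over a 5x5 grid (z = 0),
# as in the single-sensor scheme above; all values here are invented.
xs = np.linspace(-0.05, 0.05, 5)
grid = np.array([[x, y, 0.0] for x in xs for y in xs])
true_pos = np.array([0.01, -0.02, 0.03])
moment = np.array([0.0, 0.0, 0.5])  # A*m^2, assumed known for this sketch
readings = dipole_field(grid, true_pos, moment)

# Coarse grid search minimising squared residuals between predicted and
# measured fields (a stand-in for a proper nonlinear least-squares solver)
cands = np.linspace(-0.03, 0.03, 13)
best, best_err = None, np.inf
for px in cands:
    for py in cands:
        for pz in np.linspace(0.01, 0.05, 9):
            pred = dipole_field(grid, np.array([px, py, pz]), moment)
            err = np.sum((pred - readings) ** 2)
            if err < best_err:
                best, best_err = np.array([px, py, pz]), err
print(best)  # recovers the true position
```

With noiseless synthetic readings and the true position on the candidate grid, the search recovers it exactly; with real sensor noise, the residual surface is what an Extended Kalman Filter or batch optimizer would work against.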
An alternate use that this system was created for was the characterization of magnetic devices fabricated by other members of the Magnetic Microsystems and Microrobotics lab. Measuring fields surrounding MMM lab devices will help in calculating magnetic forces and experimentally validating simulations. Agriculture is a key human activity in terms of food production, economic importance, and impact on the global carbon cycle. As the human population heads toward 9 billion or beyond by 2050, there is an acute need to balance agricultural output with its impact on the environment, especially in terms of greenhouse gas production. An evolving set of tools, approaches, and metrics is being employed under the term “climate smart agriculture” to help stakeholders at all levels, from small and industrial-scale growers to local and national policy setters, develop techniques and find solutions that strike that production-environment balance and promote various ecosystem services.

Temperature related genes were differentially expressed at the two locations in our study

The amino acid metabolism functional GO category is highly enriched in the group of DEGs between BOD and RNO and more specifically in the top 400 BOD DEGs . Some examples of genes involved in amino acid metabolism that have a higher transcript abundance in BOD berries are phenylalanine ammonia lyase 1 , which catalyzes the first step in phenylpropanoid biosynthesis, branched-chain amino acid aminotransferase 5 , which is involved in isoleucine, leucine, and valine biosynthesis, 3-deoxy-D-arabino-heptulosonate 7-phosphate synthase 1 , which catalyzes the first committed step in aromatic amino acid biosynthesis, and tyrosine aminotransferase 7 , which is involved in tyrosine and phenylalanine metabolism. Included in this group were 44 stilbene synthases , which are part of the phenylpropanoid pathway; these STSs had a higher transcript abundance in BOD berries as compared to RNO berries, with very similar transcript abundance profiles to PAL1 . In a previous analysis, WGCNA defined a circadian clock subnetwork that was highly connected to transcript abundance profiles in late-ripening grapevine berries. To compare the response of the circadian clock in the two different locations, we plotted all of the genes of the model made earlier. Most core clock genes and light-sensing and peripheral clock genes had significantly different transcript abundance in BOD berries than in RNO berries at the same sugar level . All but one of these had higher transcript abundance in BOD berries relative to RNO berries. The transcript abundance of other genes had nearly identical profiles .

These data are summarized in a simplified clock model , which integrates PHYB as a key photoreceptor and temperature sensor that can regulate the entrainment and rhythmicity of the core circadian clock, although, to be clear, it is the protein activity of PHYB, not the transcript abundance, that regulates the clock. The common gene set for both locations represented approximately 25% of the genes differentially expressed with sugar level or location. Presumably these gene sets represent genes that were not influenced by location but were influenced by berry development or sugar level. This study is limited in that only two locations in one season were investigated. As more locations are compared in the future, these gene sets will likely be reduced in size even further. The processes involved in these gene sets or modules included the increase of catabolism and the decline of translation and photosynthesis. It is clear that these processes play important roles in berry ripening. Most of the genes in the genome varied in transcript abundance with increasing sugar levels and berry maturation, and most of these varied with the vineyard site. Many of the DEGs were enriched with gene ontologies associated with environmental or hormonal stimuli. Plants are exposed to a multitude of factors that influence their physiology even in controlled agricultural fields such as vineyards. The vineyards in BOD and RNO are exposed to very different environments ; these environmental influences were reflected in some of the DEG sets with enriched gene ontologies. The results from this study are consistent with the hypothesis that the transcript abundance of berry skins in the late stages of berry ripening was sensitive to local environmental influences on the grapevine. While most transcript abundances in berries are largely influenced by genetics or genotype, environment also plays a large role.

It is impossible with the experimental design of this study to determine the amount that each of the environmental factors contributed to the differential expression in these two locations. There were too many variables and too many potential interactions to determine anything conclusively. Replication in other seasons will not aid this analysis, as climate is highly variable and will produce different results. All we can say is that these genes were differentially expressed between the two locations, likely due to known and unknown factors . As additional studies are conducted in different locations and seasons in the future, meta-analyses can be employed to provide firmer conclusions. It is possible that some of the DEGs identified in this study resulted from genetic differences between the different Cabernet Sauvignon clones and rootstock used in the two locations. Without previous studies identifying which genes these might be, we cannot draw any conclusions. These and other factors most certainly affected the berries to some degree. The data in this study indicated that the grape berry skins responded to multiple potential environmental factors in the two vineyard locations, in addition to potential signals coming from the maturing seed. We say potential environmental factors because we did not control for these factors; we associated transcript abundance with the factors that were different in the two locations. The transcript abundance profiles, along with functional annotation of the genes, gave us clues to factors that were influencing the berries, and then associations were made with the known environmental variables. Further experiments are required to follow up on these observations. We were able to associate differences in transcript abundance with differences between the two locations. These DEGs could be associated with temperature, light, moisture, and biotic stress.

Additional factors were associated with transcript abundance involved with physiological responses and berry traits such as seed and embryo development, hormone signaling, phenylpropanoid metabolism, and the circadian clock. In the following sections we discuss in more detail some of the possible environmental factors that were reflected in the enriched gene ontologies found in the gene sets from this study. Light regulates the transcript abundance of many genes in plants. It has been estimated that 20% of the plant transcriptome is regulated by white light, and this includes genes from most metabolic pathways. Light is sensed by a variety of photoreceptors in plants; there are red/far-red, blue and UV light receptors. PHYB is a key light sensor, regulating most of the light-sensitive genes and sensing the environment through red light to far-red light ratios and temperature. PHYB entrains the circadian clock, affecting the rate of the daily cycle and the expression of many of the circadian clock genes; PHYB induces morning-phase genes and represses evening-phase genes. Other photoreceptors can entrain the circadian clock as well. PHYB and the circadian clock are central regulators of many aspects of plant development including seed germination, seedling growth, and flowering. The circadian clock influences the daily transcript abundance of genes involved in photosynthesis, sugar transport and metabolism, biotic and abiotic stress, and even iron homeostasis. Light signaling was very dynamic in the berry skin transcriptome in the late stages of berry ripening, with a higher transcript abundance of many light signaling genes in BOD berries. Many photoreceptors that interact with the circadian clock had a higher gene expression in BOD berries. In the circadian clock model, Circadian Clock Associated 1 is an early morning gene and has its highest expression at the beginning of the day.
It is at the start of the circadian core clock progression through the day, whereas the transcript abundance of Timing Of CAB Expression 1 is highest at the end of the day and finishes the core clock progression. In both of these cases, there is a higher transcript abundance of these genes in BOD than in RNO. The evening complex is a multi-protein complex composed of Early Flowering 3, Early Flowering 4 and Phytoclock 1 that peaks at dusk. None of these proteins had significant differences in transcript abundance between the two locations. The transcript abundance of ELF3 increased with sugar level and shortening of the day length. ELF3, as part of the evening complex, has direct physical interactions with PHYB, COP1 and TOC1, linking light and temperature signaling pathways directly with the circadian clock. It is interesting that most of the components of the clock showed significant differences in transcript abundance between BOD and RNO, except for the three proteins that make up the evening complex. The transcript abundance profile of PHYB was similar in both BOD and RNO berries; however, the changes in transcript abundance with sugar level occurred in BOD berries at a lower sugar level. There was a gradual decline of PHYB transcript abundance with increasing sugar level until the last measurement at the fully mature stage, where there was a large increase in transcript abundance. A very similar profile is observed for Reveille 1. RVE1 promotes seed dormancy in Arabidopsis, and PHYB interacts with RVE1 by inhibiting its expression. PIF7 interacts directly with PHYB to suppress PHYB protein levels.

Likewise, PIF7 activity is regulated by the circadian clock. PIF7 had higher transcript abundance in BOD than in RNO berries and generally increased with increasing sugar level. The transcript abundance of two of the other grape phytochromes did not vary significantly between the two locations or at different sugar levels. PHYC had a higher transcript abundance in RNO berries and did not change much with different sugar levels. Many other light receptors, such as FAR1 and FRS5, had higher transcript abundance in BOD berries. Thus, light sensing through the circadian clock is a complicated process with multiple inputs. RVE1 follows a circadian rhythm. It behaves like a morning-phased transcription factor and binds to the EE element, but it is not clear if it is affected directly by the core clock, through effects of PHYB, or both. PHYB down-regulates RVE1; RVE1 promotes auxin concentrations and decreases gibberellin concentrations. Warmer night temperatures cause more rapid reversion of the active form of PHYB to the inactive form and thus may promote a higher expression/activity of RVE1. Pr appears to accelerate the pace of the clock. It is unclear what role phytochromes might have in seed and fruit development in grapes. Very little is known about the effect of PHY on fruit development in general. In one tomato study, the fruit development of phy mutants was accelerated, suggesting that PHYB, as a temperature/light sensor and a regulator of the circadian clock, may influence fruit development. Carotenoid concentrations, but not sugar concentrations, were also affected in these mutants. Photoperiod affects the transcript abundance of PHYA and PHYB in grape leaves. In the present study, the majority of the photoreceptor genes in berry skins, including red, blue and UV light photoreceptors, had a higher transcript abundance in BOD berries.
It is unclear what effect PHYB and the circadian clock have on grape berry development. However, there were clear differences between the two locations; it seems likely that PHYB and the circadian clock are key grape berry sensors of the environment, affecting fruit development and composition. The grape berry transcriptome is sensitive to temperature. The RNO berries were exposed to a much larger temperature differential between day and night than BOD berries and were also exposed to chilling temperatures in the early morning hours during the late stages of berry ripening. The transcript abundance of some cold-responsive genes was higher in RNO berry skins than in BOD berry skins, including CBF1. CBF1 transcript abundance is very sensitive to chilling temperatures; it is a master regulator of the cold regulon and improves plant cold tolerance. PIF7 binds to the promoter of CBF1, inhibiting CBF1 transcript abundance, linking phytochrome, the circadian clock and CBF1 expression. Our data are consistent with this model; transcript abundance of PIF7 was higher and CBF1 transcript abundance was lower in BOD berry skins than in RNO berry skins. ABA concentrations in plants increase in response to dehydration, and ABA triggers a major signaling pathway involved in osmotic stress responses and seed development. ABA concentrations only increase in the seed embryo near the end of seed development, when the embryo dehydrates and goes into dormancy. ABA concentrations remain high to inhibit seed germination. The transcript abundance of ABA signaling genes such as ABF2 and SnRK2 kinases increases after application of ABA to cell culture and in response to dehydration in leaves of Cabernet Sauvignon. The data in this study are consistent with the hypothesis that BOD berries are riper at lower sugar levels.
The ABA signaling genes in the berry skins had higher transcript abundance in BOD berries, indicating that ABA concentrations were higher in BOD than in RNO berries even though RNO berries were exposed to drier conditions.

You will likely need to have a helium atmosphere inside the microscope to pursue thermal navigation

The SQUID interference pattern looks reasonably healthy and corresponds to a diameter that is close to the SEM diameter. It is important to remember that it is possible for the Josephson junctions producing nanoSQUIDs to end up higher on the sensor. These might produce healthy SQUIDs but will not be useful for scanning, and discovery of this failure mode comes dangerously late in the campaign, so SQUIDs high up on the pipette are very destructive failure modes. This failure mode is uncommon but worth remembering. If you have access to a vector magnet, such SQUIDs also usually have large cross sections to in-plane magnetic flux, and this can be useful for identifying them and filtering them out.
- The capacitances of the Attocube fine positioners are = µF. These scanners have a range of µm. They creep significantly more than the piezoelectric scanners used in most commercial STM systems, but their large range is quite useful. Damage to the scanners or the associated wiring will appear as deviations from these capacitances. Small variations around these values are fine. After you are done testing these capacitances, reconnect them. Make sure you're testing the scanner/cryostat side of the wiring, not the outputs of the box- this is a common silly mistake that can lead to unwarranted panic. If you're working in Andrea Young's lab, make sure the Z piezo is ungrounded. If for whatever reason current can flow through the circuit while you're probing the capacitance, you will see the capacitance rise and then saturate above the range of the multimeter.
- Because the nanoSQUID is a sharp piece of metal that will be in close contact with other pieces of metal, it sometimes makes sense to ground the nanoSQUID circuit to the top gate of a device, or metallic contacts to a crystal, to prevent electrostatic discharge while scanning or upon touchdown.
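A quick way to make the capacitance checks above systematic is a small sanity-check routine. The nominal values and tolerance below are placeholders (substitute your own system's numbers); this is a sketch, not lab software:

```python
# Hedged sketch of a wiring sanity check: compare measured scanner
# capacitances against nominal values. NOMINAL_UF and TOLERANCE are
# placeholders, not the real numbers for any particular system.
NOMINAL_UF = {"x": 1.0, "y": 1.0, "z": 1.0}  # placeholder nominals, in uF
TOLERANCE = 0.2  # fractional deviation still considered a "small variation"

def check_scanner_wiring(measured_uf):
    """Flag axes whose capacitance reads near zero (open circuit /
    disconnected) or deviates substantially from nominal (damage)."""
    problems = []
    for axis, nominal in NOMINAL_UF.items():
        c = measured_uf.get(axis, 0.0)
        if c < 0.05 * nominal:
            problems.append(f"{axis}: open circuit? ({c} uF)")
        elif abs(c - nominal) / nominal > TOLERANCE:
            problems.append(f"{axis}: deviates from nominal ({c} uF)")
    return problems

flags = check_scanner_wiring({"x": 1.02, "y": 0.97, "z": 0.01})
print(flags)  # only z is flagged
```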

If you have decided to set up such a circuit, make sure that the sample, the gates, and the nanoSQUID circuit are all simultaneously grounded. If you forget to float one of these circuits and bias the SQUID or gate the device, you can accidentally pump destructive amounts of current through the nanoSQUID or device. However, you must make sure that the z piezoelectric scanner is not grounded. You can now begin your approach to the surface. You should ground the nanoSQUID and the device. Connect the coarse positioner control cable to the cryostat. If you are in Andrea's lab, verify that the three high-current DB-9 cables going from the coarse positioner controller box to the box-to-cable adapter are plugged in in the correct positions. The cables for each channel all have the same connectors, so it is possible to mix up the x, y, and z axes of the coarse positioners. This is a very destructive mistake, because you will not be advancing toward the surface and will likely crash the nanoSQUID into a wirebond, or some other feature away from the device. The remaining instructions assume you are using the nanoSQUID control software developed in Andrea's lab, primarily by Marec Serlin and Trevor Arp. The software is a complete and self-contained scanning probe microscopy control system and user interface based on Python 3 and PyQt. Open the coarse positioner control module. Click the small capacitor symbol. You should hear a little click and see 200 nF next to the symbol. The system has sent a pulse of AC voltage to the coarse positioners; the click comes from the piezoelectric crystal moving in response. Check that you see a number around 1000 µm in the resistive encoder window for axis 3. Note whether you see a number around 2000-3000 µm in the windows for axis 1 and axis 2. If you are in Andrea's lab, it is possible that you will not for axis 2. Axis 2 has had problems with its resistive encoder calibration curve at low temperature.

The issue seems to be an inaccurate LUT file in the firmware; new firmware can be uploaded using Attocube's Daisy software. It is not a significant issue if you cannot use the axis 1 and 2 resistive encoders; however, it is critical that there be an accurate number for axis 3. Set the output voltage frequency to be somewhere in the range 5-25 Hz. Set the output voltage to 50 V to start. Make sure that the check box next to Output is checked. Move 10 µm toward the sample. If axis 3 doesn't move, don't panic! It is usually the case that the coarse positioners are sticky after cooling down the probe, before they've been used. Try moving backwards and forwards, then increase the voltage to 55 V, then 60 V. Once they're moving, decrease the voltage back to 50 V. Note the PLL behavior- if there's a software issue and pulses aren't being sent, you won't see activity in the PLL associated with the coarse positioners. Under normal circumstances you should see considerable crosstalk between the PLL and the coarse positioners while the coarse positioners are firing. There are significant transients in the resistive encoder readings after firing the coarse positioners; this is likely a result of heating, but could also have a contribution from mechanical settling and creep. We have observed that the decay times of these transients are significantly longer in the 300 mK system than in the 1.5 K or 4 K systems, likely indicating that the transients are largely limited by heat dissipation, at least at very low temperatures. Go into the General Approach Settings of the Approach Control window. There is a setting in there for coarse positioner step size- set that to 4 µm or so. This is the amount the coarse positioners will attempt to move between fine scanner extensions. They always overshoot this number. Overshooting is of course dangerous because it can produce crashes if it is too egregious.
In the Approach Control window, click Set PLL Threshold and verify that the standard deviation of the frequency is 0.25 Hz. Enter 5 µm into the height window.
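The step-and-extend logic described above can be sketched as follows; this is a conceptual toy model of the approach loop, not the actual control software:

```python
# Conceptual sketch of the approach loop: alternate full fine-scanner
# extensions with small coarse steps, watching the tuning-fork PLL for a
# frequency shift that signals surface contact. Numbers mirror the text.
COARSE_STEP_UM = 4.0       # requested coarse step; real positioners overshoot
PLL_THRESHOLD_HZ = 0.25    # frequency shift treated as surface contact

def approach(extend_fine_scanner, coarse_step, max_steps=1000):
    """Return the number of coarse steps taken before the surface is seen."""
    for n in range(max_steps):
        # Extend the fine scanner through its range, monitoring the PLL shift.
        if extend_fine_scanner() >= PLL_THRESHOLD_HZ:
            return n
        coarse_step(COARSE_STEP_UM)  # no surface within fine range: step closer
    raise RuntimeError("no surface found -- check wiring and axis mapping")

# Toy run: surface 20 um away, 5 um of fine-scanner reach, 50% overshoot.
state = {"distance_um": 20.0}
def extend(): return 1.0 if state["distance_um"] <= 5.0 else 0.0
def step(um): state["distance_um"] -= 1.5 * um   # overshoot the request
n_steps = approach(extend, step)
print(n_steps)  # -> 3
```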

Verify that Z is ungrounded. Click Constant Height. Check that the PID is producing an approach speed of 100 nm/s. It is important that you sit and watch the first few rounds of coarse positioner approach. This is boring, but the first few coarse positioning steps often cause the tuning fork to settle and change, which can cause the approach to accelerate or fail. Also, by observing this part of the process you can often find simple, obvious issues that you've overlooked while setting up the approach. Getting to the surface will take several hours. Typically you'll want to leave during this time. When you return, the tip should be at constant height. I'd recommend clicking Constant Height again and approaching to contact again to verify that you're at the surface. You should be between 10 µm and 20 µm from the surface. It may be necessary to withdraw, approach with the coarse positioners a few µm, and then approach again to ensure you have enough scanner range in the z direction. Click withdraw until you're fully withdrawn. Click Frustrate Feedback to enable scanning with the tip withdrawn. I will present instructions as if you are attempting to navigate to a device through which you can flow current. This will generate gradients in temperature from dissipation, and ambient magnetic fields through the Biot-Savart law, both of which the nanoSQUID sensor can detect. I strongly recommend that you navigate with thermal gradients if at all possible. The magnetic field is a signed quantity, so you need to have a pretty strong model and a clear picture of your starting location to successfully use it to navigate. Thermal gradients can be handled with simple gradient ascent; this will almost always lead you to the region of your circuit with the greatest resistance, which is typically an exfoliated heterostructure if that is what you're studying.
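The gradient-ascent navigation mentioned above can be sketched as follows, where `measure(x, y)` is a stand-in for a nanoSQUID thermal reading at a scanner position; a toy model, not the lab software:

```python
# Minimal gradient-ascent sketch of thermal navigation: sample the thermal
# signal on a small cross of neighboring points and step toward the hottest
# one until no neighbor is warmer (a local maximum of the thermal signal).
def thermal_gradient_ascent(measure, x, y, step=1.0, max_iters=200):
    for _ in range(max_iters):
        here = measure(x, y)
        neighbors = [(x + step, y), (x - step, y), (x, y + step), (x, y - step)]
        best = max(neighbors, key=lambda p: measure(*p))
        if measure(*best) <= here:
            return x, y      # no warmer neighbor: hottest point reached
        x, y = best
    return x, y

# Toy thermal signal peaked at (3, -2); start at the origin.
peak = thermal_gradient_ascent(lambda x, y: -(x - 3) ** 2 - (y + 2) ** 2, 0, 0)
print(peak)  # -> (3.0, -2.0)
```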
A pressure of a few mbar is plenty, but be advised that this may require that you operate at elevated temperatures. Helium-4 has plenty of vapor pressure at 1.5 K, but this is not really an option at 300 mK, and many 300 mK systems struggle with stable operation at any temperature between 300 mK and 4 K. You should run an AC current through your device at finite frequency. Higher frequencies will generally improve the sensitivity of the nanoSQUID, but if the heterostructure has finite resistance the impedance of the device might prevent operation at very high frequency. It's worth mentioning that the 'circuit' you have made has some extremely nonstandard 'circuit elements' in it, because it relies on heat conduction and convection from the device through the helium atmosphere to the nanoSQUID. If you don't know how to compute the frequency-dependent impedance of heat flow through gaseous helium at 1.5 K, then that's fine, because I don't either! I only mention it because it's important to keep in mind that just because your electrical circuit isn't encountering large phase shifts and high impedance doesn't mean the thermal signal is getting to your nanoSQUID without significant impedance.

I recommend operating at a relatively low frequency for these reasons, as long as the noise floor is tolerable. In practice this generally means a few kHz. I'd also like to point out that if you are applying a current to your device at a frequency ω, then generally the dominant component of the thermal signal detected by the nanoSQUID will be at 2·ω, because dissipation is symmetric in current direction. Next you will perform your first thermal scan, 10-20 µm above the surface near your first touchdown point. If you have performed a thermal characterization, then pick a region with high thermal sensitivity, but generally this is unnecessary- I usually simply attempt to thermally navigate with a point that has good magnetic sensitivity. Bias the SQUID to a region with good sensitivity. Check the transfer function. Set the second oscillator on the Zurich to a frequency that is low noise. Connect the second output of the Zurich to the trigger of one of the transport lock-ins and trigger the transport lock-in off of it. Trigger the second transport lock-in off of the first one. Attach the output of one of the lock-ins to the 1/10 voltage divider, then to a contact of the sample. Attach the current input of one of the lock-ins to another contact as the drain. You can attach the voltage contacts somewhere if you want to; this is not particularly important, though. It may be necessary to apply a voltage to the gates, especially if you are working with semiconducting materials like the transition metal dichalcogenides. There are a lot of issues that can affect scanning, and it isn't really possible to cover all of them in this document, so you will have to rely on accumulated experience. Some problems will become obvious if you just sit and think about them- for example, if the thermal gradient is precisely along the x-axis and coarse positioner navigation is failing to find a strong local maximum, it likely means that the y-axis scanner is disconnected or damaged.
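The 2·ω point is easy to verify numerically: dissipation goes as I², and sin² contains only DC and 2ω components. A quick self-contained check:

```python
# Why the thermal signal appears at 2*omega: Joule heating ~ I(t)^2, and
# sin^2(wt) = (1 - cos(2wt))/2 has spectral weight only at DC and 2w.
import numpy as np

fs, f_drive = 10_000, 100          # sample rate (Hz), drive frequency (Hz)
t = np.arange(fs) / fs             # one second of samples -> 1 Hz FFT bins
current = np.sin(2 * np.pi * f_drive * t)
power = current ** 2               # dissipation ~ I^2

spectrum = np.abs(np.fft.rfft(power))
spectrum[0] = 0                    # drop the DC term
peak_hz = np.argmax(spectrum)      # rfft bins are 1 Hz apart here
print(peak_hz)  # -> 200, i.e. twice the drive frequency
```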
In Andrea's lab, the basic circuits on the 1.5 K and 300 mK systems as currently set up should be pretty close to working, so if there's a problem I'd recommend observing the relevant circuits and thinking about the situation for at least a few minutes before making big changes. The scanners as currently installed on the 1.5 K system do not constitute a healthy right-handed coordinate system, so to navigate you will need a lookup table translating scanner axes into coarse positioner axes. I think this issue is resolved on the 300 mK system, but this is the kind of thing that can get scrambled by upgrades and repair campaigns. In all of our note-taking PowerPoints and EndNotes, we have a little blue matrix that relates the scan axes to the coarse positioner axes. Use this to determine and write down, in your notes, the direction you need to move in the coarse positioner axes. You now have an initial direction in which you can start travelling.
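The axis bookkeeping described above amounts to a fixed matrix applied to scanner-frame displacements; the matrix below is a made-up example (a swapped axis pair), not the calibration of any actual system:

```python
# Sketch of the scanner-to-coarse-positioner axis lookup. The matrix here
# is purely illustrative -- record and use your own system's matrix.
import numpy as np

SCAN_TO_COARSE = np.array([
    [0, 1, 0],   # coarse x follows scanner y
    [1, 0, 0],   # coarse y follows scanner x
    [0, 0, 1],   # z axes agree
])

def to_coarse_axes(scan_displacement):
    """Convert a displacement in scanner axes to coarse positioner axes."""
    return SCAN_TO_COARSE @ np.asarray(scan_displacement)

print(to_coarse_axes([2.0, -1.0, 0.0]))  # -> [-1.  2.  0.]
```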

Chern insulators are characterized by a single integer known as the Chern number

The type of magnet proposed here does not invoke spin-orbit coupling; in fact, it does not even invoke spin. Instead, the two symmetry-broken states are themselves electronic bands that live on the crystal, and they differ from each other in both momentum space and real space. For this reason, orbital magnetism does not need spin-orbit coupling to support hysteresis, and it can couple to a much wider variety of physical phenomena than spin magnetism can- indeed, anything that affects the electronic band structure or real space wave function is fair game. For this reason we can expect to encounter many of the phenomena we normally associate with spin-orbit coupling in orbital magnets that do not possess it. I would also like to talk briefly about magnetic moments. It has already been said that magnetic moments in orbital magnets come from the center-of-mass angular momentum of electrons, which makes them in some ways simpler and less mysterious than magnetic moments derived from electron spin. However, I didn't tell you how to compute the angular momentum of an electronic band, only that it can be done. It is a somewhat more involved process to do at any level of generality than I'm willing to attempt here- it is described briefly in a later chapter- but suffice to say that it depends on details of band structure and interaction effects, which themselves depend on electron density and, in two dimensional materials, ambient conditions like displacement field. For this reason we can expect the magnitude of the magnetic moment of the valley degree of freedom to be much more sensitive to variables we can control than the magnetic moment of the electron spin, which is almost always close to 1 µB. In particular, the magnetization of an orbital magnet can be vanishingly small, or it can increase far above the maximum possible magnetization of a spin ferromagnet of 1 µB per electron.

Under a very limited and specific set of conditions we can precisely calculate the contribution of the orbital magnetic moment to the magnetization, and that will be discussed in detail later as well. Finally, I want to talk briefly about coercive fields. The more perceptive readers may have already noticed that we have broken the argument we used to understand magnetic inversion in spin magnets. The valley degree of freedom is a pair of electronic bands, and is thus bound to the two dimensional crystalline lattice- there is no sense in which we can continuously cant it into the plane while performing magnetic inversion. But of course, we have to expect that it is possible to apply a large magnetic field, couple to the magnetic moment of the valley µ, and eventually reach an energy µ · B_C = E_I at which magnetic inversion occurs. But what can we use for the Ising anisotropy energy E_I? It turns out that this model survives in the sense that we can make up a constant for E_I and use it to understand some basic features of the coercive fields of orbital magnets, but where E_I comes from in these systems remains somewhat mysterious. It is likely that it represents the difference in energy between the valley polarized ground state and some minimal-energy path through the spin and valley degenerate subspace, involving hybridized or intervalley coherent states in the intermediate regime. But we don't need to understand this aspect of the model to draw some useful insights from it, as we will see later.

Real magnets are composed of constituent magnetic moments that can be modelled as infinitesimal circulating currents, or charges with finite angular momentum.
It can be shown that the magnetic field generated by the sum total of a uniform two dimensional distribution of these circulating currents- i.e., by a region of uniform magnetization- is precisely equivalent to the magnetic field generated, through the Biot-Savart law, by a current travelling around the edge of that two dimensional uniformly magnetized region. This analogy is complete: a two dimensional region of uniform magnetization also experiences the same forces and torques in a magnetic field as the equivalent circulating current.
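This equivalence can be checked numerically. The sketch below (made-up geometry, units with µ0/4π = 1) compares the summed dipole field of a uniformly magnetized square sheet against the Biot-Savart field of a current I = M circulating around its edge:

```python
# Numerical illustration: the field of a uniformly magnetized square sheet
# (sum of point dipoles) matches the Biot-Savart field of its bounding edge
# current with I = M. Geometry and units (mu0/4pi = 1) are made up.
import numpy as np

a, M, N = 1.0, 1.0, 60                 # square side, 2D magnetization, grid size
r_obs = np.array([0.1, 0.2, 0.8])      # observation point above the sheet

def dipole_field(r, m):
    """Field at r of a point dipole m sitting at the origin (mu0/4pi = 1)."""
    d = np.linalg.norm(r)
    return 3 * np.dot(m, r) * r / d**5 - m / d**3

# Sum dipoles m = M * dA * z_hat over cell centers of the sheet.
xs = (np.arange(N) + 0.5) * a / N - a / 2
dA = (a / N) ** 2
B_mag = sum(dipole_field(r_obs - np.array([x, y, 0.0]), np.array([0, 0, M * dA]))
            for x in xs for y in xs)

# Biot-Savart around the edge: counterclockwise square loop carrying I = M.
corners = np.array([[-a/2, -a/2, 0], [a/2, -a/2, 0], [a/2, a/2, 0],
                    [-a/2, a/2, 0], [-a/2, -a/2, 0]])
B_edge = np.zeros(3)
for start, end in zip(corners[:-1], corners[1:]):
    dl = (end - start) / 400
    for s in np.linspace(0, 1, 400, endpoint=False):
        r = r_obs - (start + s * (end - start) + dl / 2)
        B_edge += M * np.cross(dl, r) / np.linalg.norm(r) ** 3

print(np.allclose(B_mag, B_edge, rtol=0.01))  # the two fields agree
```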

The converse is also true- circulating currents can be modelled as two dimensional regions of uniform magnetization. The two pictures are in fact precisely equivalent. This is illustrated in Fig. 2.9. It is possible to prove this rigorously, but I will not do so here. One can say that in general, every phenomenon that produces a chiral current can be equivalently understood as a magnetization. All of the physical phenomena are preserved, although they need to be relabeled: chiral edge currents are uniform magnetizations, and bulk gradients in magnetization are variations in bulk current density. In the same way that the Berry phase impacts the kinematics of free electrons moving through a two-slit interferometer, Berry curvature impacts the kinematics of electrons moving through a crystal. You'll often hear people describe Berry curvature as a 'magnetic field in momentum space.' You already know how electrons with finite velocity in an ambient magnetic field acquire momentum transverse to their current momentum vector. We call this the Lorentz force. Well, electrons with finite momentum in 'ambient Berry curvature' also acquire momentum transverse to their current momentum vector. The difference is that magnetic fields vary in real space, and we like to look at maps of their real space distribution. Magnetic fields do not 'vary in momentum space'; at nonrelativistic velocities they are strictly functions of position, not of momentum. Berry curvature does not vary in real space within a crystal. It does, however, vary in momentum space; it is strictly a function of momentum within a band. And of course Berry curvature impacts the kinematics of electrons in crystals. Condensed matter physicists love to say that particular phenomena are 'quantum mechanical' in nature. Of course this is a rather poorly-defined description of a phenomenon; all phenomena in condensed matter depend on quantum mechanics at some level.
Sometimes this means that a phenomenon relies on the existence of a discrete spectrum of energy eigenstates.

At other times it means that the phenomenon relies on the existence of the mysterious internal degree of freedom wave functions are known to have: the quantum phase. I hope it is clear that Berry curvature and all its associated phenomena are the latter kind of quantum mechanical effect. Berry curvature comes from the evolution of an electron's quantum phase through the Brillouin zone of a crystal in momentum space. It impacts the kinematics of electrons for the same reason it impacts interferometry experiments on free electrons: the quantum phase has gauge freedom and is thus usually safely neglected, but the relative quantum phase does not, so whenever coherent wave functions are being interfered with each other, scattered off each other, or made to match boundary conditions in a 'standing wave,' as in a crystal, we can expect the kinematics of electrons to be affected. We will shortly encounter a variety of surprising and fascinating consequences of the presence of this new property of a crystal. Berry curvature is not present in every crystal- in some crystals there exist symmetries that prevent it from arising- but it is very common, and many materials with which the reader is likely familiar have substantial Berry curvature, including transition metal magnets, many III-V semiconductors, and many elemental heavy metals. It is a property of bands in every number of dimensions, although the consequences of finite Berry curvature vary dramatically for systems with different numbers of dimensions. A plot of the Berry curvature in face-centered cubic iron is presented in the following references: [84, 90]. We will not be discussing this material in any amount of detail; the only point I'd like you to take away from it is that Berry curvature is really quite common.
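For reference, the 'Lorentz force in momentum space' picture described above corresponds to the standard semiclassical equations of motion for a Bloch electron in band n, in which the Berry curvature Ω_n(k) contributes an anomalous velocity term dual to the Lorentz force:

```latex
\dot{\mathbf{r}} = \frac{1}{\hbar}\,
  \frac{\partial \varepsilon_n(\mathbf{k})}{\partial \mathbf{k}}
  \;-\; \dot{\mathbf{k}} \times \boldsymbol{\Omega}_n(\mathbf{k}),
\qquad
\hbar\,\dot{\mathbf{k}} = -e\left(\mathbf{E} + \dot{\mathbf{r}} \times \mathbf{B}\right).
```

Note the symmetry between the two equations: the real-space magnetic field B deflects electrons through their velocity, while the momentum-space Berry curvature Ω_n deflects them through their rate of change of crystal momentum.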
For reasons that have already been extensively discussed, we will focus on Berry curvature in two dimensional systems.

Several chapters of this thesis focus on the properties of a particular class of magnetic insulator that can exist in two dimensional crystals. These materials share many properties with the magnetic insulators described in Chapter 2. They can have finite magnetization at zero field, and this property is often accompanied by magnetic hysteresis. The spectrum of quantum states available in the bulk of the crystal is gapped, and as a result they are bulk electrical and thermal insulators. They have magnetic domain walls that can move around in response to the application of an external magnetic field, or alternatively be pinned to structural disorder. And of course they emit magnetic fields which can be detected by magnetometers. Unlike all trivial insulators and, in particular, trivial magnetic insulators, these magnetic insulators support a continuous spectrum of quantum states within the gap, with the significant caveat that these states are highly localized to the edges of the two dimensional crystalline magnet.

This is the primary consequence of a non-zero Chern number. These quantum states are often referred to as 'edge states' or 'chiral edge states,' and they have a set of properties that are reasonably easy to demonstrate theoretically. I will describe the origin of these basic properties only qualitatively here; a deep theoretical understanding of their origin is not important for understanding this work, so long as the reader is willing to accept that the presence of these quantum states is a simple consequence of the quantized total Berry curvature of the set of filled bands. Many more details are available in [84]. These materials are known collectively as Chern insulators, magnetic Chern insulators, or Chern magnets. They are, as mentioned, restricted to two dimensional crystals; three dimensional analogues exist but have significantly different properties. The vast majority of this thesis will be spent exploring deeper consequences and subtle but significant implications of the presence of these states. We will start, however, with a discussion of the most basic properties of chiral edge states. Astute readers may have already noticed that all real materials have many electronic bands, and every band has its own Berry curvature Ωn, so the definition provided in equation 3.4 seems to assign a Chern number to each of the bands in a material, not to the material itself. The properties of a particular two dimensional crystal are determined by the total Chern number of the set of filled bands within that crystal, obtained by adding up the Chern numbers of each of its filled bands. The total Chern number determines the number of edge states available at the Fermi level within the gap. In the absence of spin-orbit coupling, every band comes with a twofold degeneracy generated by the spin degree of freedom. Every band can be populated either by a spin up or a spin down electron, and as a result every Bloch state is really a twofold degenerate Bloch state.
Adding spin-orbit coupling may mix these states but does not break this twofold degeneracy. An important property of the Chern number is that Kramers' pairs must have opposite-signed Chern numbers equal in magnitude. This is a direct consequence of similar restrictions on Berry curvature within bands. For a magnetic insulator the set of filled bands is a spontaneously broken symmetry, with the system's conduction and valence bands hysteretically swapping two members of a Kramers' pair in response to excursions in magnetic field. These two facts together imply that magnetic hysteresis loops of Chern magnets generally produce hysteresis in the total Chern number of the filled bands, precisely following hysteresis in the magnetization of the two dimensional crystal. This hysteresis loop switches the total Chern number of the filled bands between positive and negative integers of equal magnitude. These facts also imply that finite Chern numbers cannot exist in these kinds of systems without magnetism- if both members of a Kramers' pair are occupied, the system will have a total Chern number of zero. As discussed previously, additional symmetries of the crystalline lattice itself can produce additional degeneracies that can support spontaneous symmetry breaking and magnetism. In most cases similar rules apply to the Chern numbers of these magnets. We will have a lot more to say about the Chern numbers associated with the valley degree of freedom in graphene.
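The Kramers'-pair bookkeeping above is simple enough to spell out explicitly; a toy illustration (the band labels are hypothetical, not a real band structure):

```python
# Toy bookkeeping of the rule above: members of a filled Kramers' pair
# contribute canceling Chern numbers, so a nonzero total Chern number
# requires magnetism (filling only one member of a pair).
def total_chern(filled_bands):
    """Total Chern number: sum of the Chern numbers of the filled bands."""
    return sum(chern for _, chern in filled_bands)

kramers_pair = [("valence_up", +1), ("valence_down", -1)]

print(total_chern(kramers_pair))       # both members filled -> 0 (trivial)
print(total_chern(kramers_pair[:1]))   # magnetic: one member filled -> +1
# Magnetic inversion swaps which member is filled, flipping the sign:
print(total_chern(kramers_pair[1:]))   # -> -1
```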