Tag Archives: agriculture

The chirality reversal field can be almost halved when a short-pulsed field is applied

The analysis indicates that such a large reservoir acts as a potential evaporating surface that decreases the local surface temperature and cools the entire atmospheric column, decreasing upward motion and resulting in sinking air. This sinking air mass causes low-level moisture divergence, decreases cloudiness, and increases net downward radiation, which tends to increase the surface temperature. However, the evaporative cooling dominates the radiative heating, resulting in a net decrease in surface and 2 m air temperature. The strong evaporation pumps moisture into the atmosphere, which suggests an increase in precipitation, but the moisture divergence moves this away from the TGD region with no net change in precipitation. The two processes, increased latent heating with surface cooling, and decreased cloudiness with increased downward solar radiation, are opposing feedbacks that are dominated here by the area-mean surface cooling effect. It is not clear whether this holds true for other times of the year, when the mean Tmax is lower and cloudiness may be higher. Furthermore, the impacts on the local monsoon flow and on precipitation intensity and frequency have not been studied in this initial investigation. However, these relative changes are significant and will likely have an impact on local ecosystems, agriculture, energy, and the population. Simulations at 10 km are not sufficiently fine to determine the full extent of this sensitivity; hence, 1 km multi-year simulations will be needed.

A magnetic vortex state [1,2] is a ground state of a magnetic nanostructure that consists of a perpendicularly magnetized core and in-plane curling magnetizations around the core. Because of its importance in fundamental physics, research on the vortex state is an important emerging topic in magnetism studies, and it has high potential for application in high-density data storage devices.
A magnetic vortex state is energetically fourfold degenerate, as determined by its polarity and chirality, where the polarity, p, refers to the perpendicular direction of the core magnetization and the chirality, c, refers to the curling direction of the in-plane magnetization.

Obviously, the success of a magnetic vortex device will critically depend on how to control the vortex polarity and chirality effectively. Much effort has been invested recently in developing methods for reversing the vortex polarity and chirality with a low magnetic field. While the chirality can be reversed easily with a weak field of ~50 mT, the magnetic field required to reverse the vortex core is on the order of 500 mT, which is too large for practical use in device applications. To reduce the vortex core-reversal field, an alternative approach used a dynamic field. A promising result has also been reported for an AC oscillating magnetic field set at the vortex resonance frequency, so that the vortex excitation could assist its polarity reversal. A representative example of such an approach is the vortex gyration excitation, in which the vortex core exhibits a spiral motion as an AC magnetic field is turned on at the gyration eigenfrequency. Core switching occurs subsequently through vortex–antivortex creation and annihilation [6] as the core's moving speed exceeds a critical value. The core-reversal field can be reduced in such a manner to values far below 10 mT. However, this method has a fundamental problem for applications. After the core reversal and turning off the field, the core gyration decays exponentially back to its initial position. The decay radius is comparable to the lateral size of the sample, and the relaxation takes a few hundred nanoseconds. This is a severe obstacle to reading the polarity. Recently, Wang and Dong and Yoo et al. found a new method of vortex core flipping from numerical simulation. They demonstrated that the vortex core polarity could be switched in a radial excitation mode by a perpendicular AC magnetic field.
In contrast to the gyration mode-assisted switching, which involves vortex core motion, the radial mode-assisted core switching involves only axially symmetric oscillations, thus preserving the vortex core position. The radial mode-assisted core switching therefore has a completely different mechanism from the gyration mode-assisted core switching, but its underlying mechanism was not clearly revealed by the simulations.

The critical field obtained with the radial mode in these studies is of the order of 20 mT, larger than that of the gyration mode-assisted core reversal. In this work, we studied the underlying mechanism of the radial mode oscillation and outlined a new pathway to reduce the core switching field further, down to the mT range, making it comparable to the critical field of the gyration-assisted core switching. In addition to micromagnetic simulations, we also established a dynamical equation for the radial mode oscillation from the Landau–Lifshitz–Gilbert equation. This equation clearly captures the nonlinear behavior of the radial mode and the critical field reduction. For direct comparison of the critical field reduction, the simulation structure was set as described by Yoo et al. According to previous studies, the radial modes are classified by the node number n. The first mode has one node, the vortex core, which means that the magnetization does not oscillate temporally at the vortex core while the other parts oscillate almost uniformly. The second mode has two nodes: one is the vortex core and the other a concentric circle. Yoo et al. studied the resonance frequency of each radial mode and obtained the eigenfrequencies with the same sample structure as in this study: 10.7 GHz for the first mode, 15.2 GHz for the second, and 20.7 GHz for the third. They also showed vortex core polarity reversal using the first mode with an oscillating external field of 20 mT. To reduce the radial mode-induced critical field below 10 mT, we stimulated the first mode of the radial oscillation with a different method; that is, sweeping of the external field frequency. The field was sinusoidal with an amplitude of 9 mT, and the field frequency f was varied slowly from 14.0 to 6.0 GHz over 40 ns. Figure 1b shows the magnetization oscillation during frequency sweeping with time.
The normalized magnetization along the thickness direction, ⟨mz⟩, and the external magnetic field, Hz, are plotted together; ⟨mz⟩ denotes the spatial average of mz over the entire disk. The magnetization oscillation has the same frequency as the field despite a phase difference. From this oscillation we can obtain the oscillation amplitude of the magnetization in the thickness direction, Iz, which is half the difference between the nearest maximum and minimum values of the ⟨mz⟩ oscillation.
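As a concrete illustration of this amplitude definition, the following sketch extracts Iz from a sampled oscillation by averaging its local maxima and minima (a simplification of the "nearest maximum and minimum" rule; the test signal and its 0.28 amplitude are synthetic, not simulation data):

```python
import numpy as np

def oscillation_amplitude(mz):
    """Half the difference between the maxima and minima of a sampled
    oscillation: a sketch of the I_z definition given in the text."""
    mz = np.asarray(mz, dtype=float)
    interior = mz[1:-1]
    # strict local maxima/minima of the sampled signal
    maxima = interior[(interior > mz[:-2]) & (interior > mz[2:])]
    minima = interior[(interior < mz[:-2]) & (interior < mz[2:])]
    if len(maxima) == 0 or len(minima) == 0:
        return 0.0
    # for a clean oscillation this reduces to (max - min) / 2
    return 0.5 * (maxima.mean() - minima.mean())

t = np.linspace(0.0, 4.0, 4001)
signal = 0.28 * np.sin(2 * np.pi * 2.0 * t)   # amplitude 0.28, like the critical I_z
print(round(oscillation_amplitude(signal), 3))   # prints 0.28
```

For noisy simulation output one would low-pass filter the ⟨mz⟩ trace first, but the extraction step is the same.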

After reaching an external field frequency of 6.0 GHz, the frequency sweeping direction was reversed and f returned to 14.0 GHz. In Fig. 1c, Iz is shown as a function of f. It is interesting to note that an external field of 9 mT can reverse the vortex core polarity. In downward sweeping of the frequency, an almost uniform magnetization oscillation was observed on the disk, except for the core, which conserved its width. This uniform oscillation was maintained until Iz reached the maximum amplitude of 0.28 when f was 8.7 GHz. After reaching this critical amplitude, the uniform oscillation collapsed and converged into the disk center, generating a breathing motion of the core. Such breathing generated a strong exchange field when the core was compressed, and core polarization switching then occurred. Amplitude fluctuations near 8.5 GHz and 10.5 GHz are transition effects discussed below. In contrast to downward sweeping, the upward frequency sweeping did not reach the amplitude of 0.28, so the vortex maintained its polarity. This means that one cycle of frequency sweeping generated one core reversal. It is notable that the amplitude obtained with fixed field frequencies was the same as in the upward sweeping. The fixed-frequency amplitudes were determined by amplitude saturation after turning on the external oscillating field. To reverse the core polarity with the upward sweeping oscillation or a fixed-frequency oscillation, a larger field was required to achieve a sufficient oscillation amplitude. From this sweeping frequency simulation, it was verified that the critical field was reduced to below 10 mT, and this reduction was observed only in downward sweeping because of the hysteresis of the frequency response.

We tested the scalability of the radial mode-induced core reversal. When the radius of the disk was 120 nm, the critical field obtained by the frequency sweeping method was 9.3 mT.
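This sweep-direction asymmetry is the classic foldover hysteresis of a driven nonlinear (Duffing-type) oscillator. A minimal sketch with a softening cubic term reproduces the qualitative effect: a slow downward frequency sweep rides the large-amplitude branch, an upward sweep does not. All parameter values here are illustrative, not taken from the simulations above.

```python
import math

# Toy Duffing oscillator with a softening spring, a qualitative stand-in
# for the radial-mode dynamics (illustrative parameters only).
GAMMA = 0.02   # damping rate
BETA = -0.3    # softening cubic coefficient (restoring force x + BETA*x**3)
F = 0.03       # drive amplitude; the linear resonance frequency is 1.0

def swept_response(w_start, w_stop, t_total=3000.0, dt=0.01):
    """Integrate x'' + 2*GAMMA*x' + x + BETA*x**3 = F*cos(phase) while the
    drive frequency is swept linearly from w_start to w_stop; return the
    mean |x| after the start-up transient."""
    n = int(t_total / dt)
    x = v = phase = 0.0
    acc, count = 0.0, 0
    for i in range(n):
        w = w_start + (w_stop - w_start) * i / n   # instantaneous frequency
        phase += w * dt
        a = -2.0 * GAMMA * v - x - BETA * x**3 + F * math.cos(phase)
        v += a * dt          # semi-implicit Euler step
        x += v * dt
        if i > n // 10:      # skip the start-up transient
            acc += abs(x)
            count += 1
    return acc / count

down = swept_response(1.2, 0.7)   # sweep downward through resonance
up = swept_response(0.7, 1.2)     # sweep upward over the same range
print(down > up)                  # hysteresis: downward sweep stays on the
                                  # large-amplitude branch
```

Because the softening term bends the resonance curve toward lower frequencies, only the downward sweep tracks the upper branch across the bistable band, mirroring the "hidden amplitude" reached by downward sweeping in the text.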
The core of a disk with radius 250 nm reverses its polarity with a 12 mT external field. As the radius increases, the critical field also increases. This scalability is an important property for developing data storage devices. In contrast to the radial mode-induced polarity switching, the critical field of the gyration-induced polarity switching exhibits an inverse radius dependence [19], as does the chirality reversal [13]. Finally, we point out the chaotic behavior and the phase commensurability in the radial mode oscillation for further studies. Petit-Watelot et al. observed chaos and phase locking in the vortex gyration with core reversal [31]. We observed similar behavior in the radial mode oscillation. A nonlinear oscillator with a sufficiently large driving force is expected to exhibit chaotic motion, and we confirmed this chaotic behavior in the radial mode of the vortex. When the oscillating field strength was smaller than Hc, a plot of the variable against its time derivative, for example d⟨mz⟩/dt versus ⟨mz⟩, showed a circular trajectory. But when the field was larger than Hc, this plot became complex in phase space, which manifests the chaotic behavior. Figure 5 shows examples of the chaos in the radial mode. The frequency was fixed at 13.5 GHz. When H = 60 mT < Hc, the trajectory was a closed circle, but when H = 90 mT > Hc the trajectory was not closed. Further increases in the field resulted in closed trajectories; however, the trajectories were not simple circles. To close the trajectory, 14 cycles of field oscillation were needed, and during these 14 cycles the core reversed four times. In the case of H = 120 mT, core reversal occurred twice in five field oscillations, implying that the core reversal rate was related to the chaotic behavior.
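The closed-versus-open trajectory test can be made quantitative with stroboscopic (Poincaré) sampling of the phase plane: record the pair (⟨mz⟩, d⟨mz⟩/dt) once per drive period and count the cycles until the trajectory revisits its starting point. The sketch below applies this idea to synthetic stand-in signals, a plain cosine and a 14-cycle commensurate mixture, both invented for illustration:

```python
import numpy as np

def cycles_to_close(signal, samples_per_period, tol=1e-2, max_cycles=30):
    """Return the smallest k such that the stroboscopic phase-plane point
    (x, dx/dt) after k drive periods matches the initial point, or None
    if no return is found (the non-closing, chaotic case)."""
    x = np.asarray(signal, dtype=float)
    v = np.gradient(x)                    # crude per-sample derivative
    for k in range(1, max_cycles + 1):
        i = k * samples_per_period
        if i >= len(x):
            break
        if abs(x[i] - x[0]) < tol and abs(v[i] - v[0]) < tol:
            return k
    return None

spp = 200                                  # samples per drive period
t = np.arange(40 * spp) * (2 * np.pi / spp)     # 40 drive periods
periodic = np.cos(t)                       # closes every drive cycle
locked = np.cos(t) + 0.5 * np.cos(5 * t / 14)   # commensurate: closes in 14
print(cycles_to_close(periodic, spp))      # 1
print(cycles_to_close(locked, spp))        # 14
```

A period-k orbit such as the 14-cycle case above returns after exactly k drive periods; a chaotic trajectory never revisits its starting point within tolerance.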

Thus, to describe the radial mode of the vortex, including its chaotic behavior, the core polarity-related term [32] is needed in the equation of motion. In summary, we studied the nonlinear resonance of the radial mode of the vortex and found that this oscillation mode, corresponding to a Duffing-type nonlinear oscillator, exhibits hysteresis with respect to the external field frequency. Through the hysteresis effect, we can access a hidden amplitude that is almost double that obtained with a fixed field frequency, and this amplitude multiplication effect reduces the critical field below 10 mT. In addition, we pointed out the chaotic behavior of the radial mode for further studies. We think that, to complete the study of vortex dynamics, it is timely to start research on the nonlinear behavior of radial modes, as well as of other oscillations of the magnetic vortex.

Targeted protein degradation has emerged over the last two decades as a promising therapeutic strategy with advantages over conventional inhibition. Unlike inhibitors, which operate through occupancy-driven pharmacology, degraders can enable catalytic and durable knockdown of protein levels using event-driven pharmacology. Most degrader technologies, such as proteolysis targeting chimeras and immunomodulatory imide drugs, co-opt the ubiquitin proteasome system to degrade traditionally challenging proteins. Intracellular small-molecule degraders have demonstrated success in targeting over 60 proteins, and several are currently in clinical trials. However, due to their intracellular mechanism of action, these approaches are limited to targeting proteins with ligandable cytosolic domains. To expand targeted degradation to the cell surface and extracellular proteome, two lysosomal degradation platforms have recently been developed.
One, lysosome targeting chimeras (LYTACs), utilizes IgG-glycan bioconjugates to co-opt lysosome shuttling receptors. LYTAC production requires complex chemical synthesis and in vitro bioconjugation of large glycans, which are preferentially cleared in the liver, limiting the applicability of this platform. A second extracellular degradation platform, antibody-based PROTACs (AbTACs), utilizes bispecific IgGs to hijack cell surface E3 ligases. Due to their dependence on intracellular ubiquitin transfer, AbTACs are limited to targeting cell surface proteins, leaving the secreted proteome undruggable. Thus, there remains a critical need to develop additional degradation technologies for extracellular proteins. Here, we have developed a novel targeted degradation platform, termed cytokine receptor targeting chimeras.

The debate on the scale range of the COI demonstrates how vague and imprecise the concept really is

Winburn and Wagner acknowledged that COIs can be equated with counties but also, and potentially even more significantly, with cities and neighborhoods. Lastly, Stephanopoulos added that “communities exist, and should be represented in the legislature, at different levels of generality,” and that more specific communities can form smaller-scale districts while broader ones can be captured by larger-scale districts like the congressional type. Thus this camp answers that the COI can take a wide range of scales. The opposing camp, however, has doubted that COIs can exist at certain scales. Chambers and Monmonier were skeptical that they hold at the smaller scales, suggesting that they are larger than neighborhoods. Chambers believed that such communities have to be large in order to command a majority in a district, but he was focusing on those relevant to the congressional type, which are almost always far larger than neighborhoods. Monmonier based his case on the improved transport and communication links that have allowed communities to form that are more fragmented and extend beyond one’s residential proximity. Gardner had trouble with the idea that there could be COIs at the larger scales, musing that a congressional district of half a million or more people could hardly be deemed a single, coherent community. May and Moncrief, in their commentary on districts in the Western United States, similarly questioned whether a meaningful COI could be tied to one of the sprawling districts in rural desert environments, though Steen suggested that the fact that such districts are so rural is enough to distinguish them as salient communities. In sum, this camp retorts that the COI exists only at a narrow range of scales, and cannot be applied at the largest and smallest ends of the scale spectrum. The frequent references to the neighborhood in this literature on COIs raise the question of how related the two concepts are.
These appear to be similar or at least related concepts, especially when one is focusing on the cognitive COI. But this relationship only seems to apply at a particular scale of COI; a large-scale COI made up of multiple counties is obviously not comparable to a neighborhood. Of course, one must first define what exactly a neighborhood is, which is itself an interesting and rich topic that has been approached in various ways. Scholars have offered definitions ranging from more socioeconomic or demographic approaches to more cognitive ones.

The latter study adopted a cognitive approach by asking residents to indicate where they believe the boundaries of the Koreatown neighborhood to be. If one can define and identify a certain neighborhood as a region, either thematic or cognitive, one can then determine how well it corresponds to a particular scale of COI, and whether the two greatly overlap or are even identical. COIs may well exist at different scales, but they are different varieties of COI, with different meanings for residents. One can discover the nature of each scale of COI by recognizing it as a cognitive region. Conceptualizing COIs as cognitive regions offers the greatest potential to discover their meaningful extents, precisely because meaning is a cognitive construct. In this research, I pursue this by soliciting people’s beliefs about the extent of their COI, giving them the freedom to make it as big or small as they choose. Such a survey can reveal the scales people most commonly use to think of COIs, thereby identifying as precisely as possible a range of scales for these cognitive regions. One can also conceive of a scale of “sense of place” by which people have different levels or types of place attachment at different scales. For example, an individual might identify very strongly with his or her city but feel little connection to his or her county. Similarly, some people might identify more with their state than their country, while others might feel the opposite. One can even possess a strong “sense of place” at multiple scales simultaneously. Shamai demonstrated this in a study of Canadian students, finding that they held “nested allegiances” to three different levels of place: country, province, and metropolitan area. However, these students did not feel an equal degree of attachment toward each of these three scales. Rather, they felt a stronger sense of place toward their metropolitan area, followed by their country, and lastly their province.
These findings have implications for COI research, because if people can identify with multiple levels of place simultaneously, they can certainly identify with multiple COIs while feeling different levels of attachment toward each. In addition to the COI criterion, the need to respect the boundaries of already existing administrative regions has long been recognized as an important objective for good redistricting. The requirement is currently used in places ranging from Japan to the United Kingdom to California.

While respecting clearly-bounded administrative regions is easier to interpret than respecting the more vaguely-bounded COIs, the two criteria may in fact be closely related. Counties and cities are often considered to be “vital, legal, and familiar communities of interest”. The residents of such jurisdictions “share a history and collective sense of identity” that help foster a genuine sense of community. Gardner contended that genuine communities arise where relevant ties form, but those bonds last only in jurisdictions with fixed boundaries. He argued furthermore that “common residency in a working, functioning, self-governing locality by itself can give rise to a political and administrative community of interest entitled to recognition. As the Colorado Supreme Court recently observed, ‘counties and the cities within their boundaries are already established as communities of interest in their own right, with a functioning legal and physical local government identity on behalf of citizens that is ongoing’”. Winburn and Wagner likewise identified counties as important COIs in the redistricting context, in large part because they play such a critical role in the electoral process, from registering voters to mailing election information to administering polling places. Bowen made a similar case with cities, as “residents of the same city share much in common—the same taxation levels, the same public problems, and the same municipal government”. These findings suggest that administrative regions may well contribute to the emergence of COIs as cognitive regions, and that the boundaries of the former may also serve as the boundaries of the latter. However, some scholars have cautioned against completely equating administrative regions with COIs. Winburn and Wagner recognized that “counties are [not] the only, or even always the most relevant, political community of interest for a citizen”.
Stephanopoulos argued that the two are often different, as when interests and affiliations do not follow administrative boundaries, or when administrative regions contain multiple communities or only parts of communities. He did concede, however, that “the two may sometimes be functionally identical, both because [administrative regions] tend to be inhabited by people with similar socioeconomic characteristics, and because civic ties can foster a sense of kinship”. The consensus appears to be that administrative regions are at the very least useful proxies for COIs, if not in some sense meaningful communities themselves. Whether this is more the case for counties or cities likely depends on locational context; counties are probably more meaningful entities in rural areas than in urban areas.

My dissertation seeks to investigate the effect of both scale and administrative regions on people’s conceptions of their COI. I do so by conducting two studies. The first study seeks to determine the effects of three factors on the cognitive COIs that survey respondents depict: the extent of the map given to survey respondents, whether the boundaries of administrative regions are shown to them on the map, and whether they live in an urban or rural locale. This study is an experimental survey of residents of an urban study area and a rural study area, with the manipulated variable being the type of map that residents receive. There are six types of map, because there are three possible map extents, each in versions that have and lack boundaries. Participants in this first study respond by drawing freehand on the map three different areas representing their COI: the area that is definitely within their COI, the area that is probably within their COI, and the area that is possibly within their COI. Requiring a series of drawings enables me to achieve a secondary aim of this study—examining variation within respondents’ cognitive COIs by having them depict different levels of confidence, in the same vein as Montello et al. Another secondary aim is to explore how the cognitive COIs that respondents depict coincide with the existing electoral districts, as a function of scale. The second study seeks to determine the extent of the cognitive COIs that survey respondents depict when given free rein to make their region as large or small as they want. Participants respond to this second study by ranking predefined administrative regions on the map according to how confident they are that a given area is within their COI. They do so at three different map scales: one showing large-sized areas, one showing medium-sized areas, and one showing small-sized areas.
Respondents also indicate how much they identify with the COI they define at each scale, on a five-point rating scale. This enables me to achieve a secondary aim of this study—investigating whether respondents identify with multiple nested COIs at different scales, and if they do, which ones they identify with the most. Like the first study, my second study achieves the additional secondary aim of exploring how the cognitive COIs that respondents depict coincide with the existing electoral districts, as a function of scale. Both studies together allow me to determine whether COIs exist as cognitive regions at multiple scales. If they do, then I can describe the nature of these regions at those different scales, particularly whether they reflect local districts, counties, and cities.

Focal therapy has the potential to improve management of prostate cancer by reducing the side effects associated with radical treatment. While the safety and feasibility of FT strategies have been reported using cryoablation, focal laser ablation, and high-intensity focused ultrasound, long-term oncologic efficacy is unknown. A critical barrier to robust testing of FT strategies is appropriate patient selection criteria, which are not clearly established. A recent FDA-AUA-SUO workshop on partial gland ablation highlighted this challenge, noting that “some [authors] regard [partial gland ablation] as an alternative to AS for low-risk cancers, whereas others view it as an alternative to radical therapy for selected, higher risk cancers.” Regardless of approach, there is broad agreement on the importance of assessment for FT using multi-parametric MRI followed by targeted biopsy. To clarify the impact of different patient selection criteria on FT eligibility, we retrospectively studied men who received MRI/ultrasound fusion biopsy, incorporating both targeted and template biopsies.
To confirm biopsy findings and to derive the accuracy of fusion biopsy in assessing FT eligibility, we examined whole-organ concordance of eligibility assessment in a subset of patients who underwent radical prostatectomy. All men undergoing MRI/US fusion biopsy at UCLA between January 2010 and January 2016 were retrospectively screened for a suspicious lesion identified on mpMRI, which was found to contain CaP upon targeted biopsy. FT eligibility criteria, based on the NCCN intermediate-risk definition [8] and recent consensus guidelines, were applied. Figure 2 shows histological profiles for FT-eligible patients based on biopsy. Three different patterns of CaP are shown, each suitable for treatment by hemi-gland ablation or less. Men with biopsy-negative ROIs were considered ineligible for FT. Similarly, men without csCaP < 4 mm were also considered ineligible, regardless of the number of positive cores. All collection of clinical data was performed prospectively within a UCLA IRB-approved registry. The fusion biopsy method, which has been previously described, was unchanged throughout the study period. Briefly, within 2 months of biopsy, patients underwent a 3T mpMRI with body coil. MRI interpretation was conducted under the direction of a dedicated uroradiologist, and suspicious lesions were assessed according to UCLA and Prostate Imaging-Reporting and Data System criteria. MRI assessment was based on the UCLA assessment system, which pre-dates PI-RADS v1, and, after PI-RADS v2 was established, by both systems, using the highest suspicion category found. At biopsy, images were registered and fused with real-time transrectal ultrasound to generate a 3D image of the prostate with delineated ROIs.

These diverse priorities will place important constraints on animal agriculture in the coming decades

Although the detailed reaction mechanism has not yet been identified, discovery of this distinct function of a methane-producing PLP-dependent enzyme could presage a breakthrough in the practical application of methanotrophs. Diversifying genetic regulatory modules can allow delicate control of synthetic pathways that are activated on demand according to host plant physiology. Fascinating potential targets for dynamic regulation are small molecules involved in plant–microbe interactions and the plant stress response. Ryu et al. recently constructed biosensors for natural and non-natural signaling molecules that enabled control of N fixation in various microbes. More recently, Herud-Sikimić et al. engineered an E. coli Trp repressor into a FRET-based auxin biosensor that undergoes conformational change in the presence of auxin-related molecules but not L-tryptophan. Because the conformational change induced by L-tryptophan is a core function in the Trp operon, the engineered Trp repressor may allow auxin-dependent biosynthesis. Developing dynamic regulatory circuits for controlling expression of PGP traits may help maintain the viability of engineered host microbes in pre-existing microbiomes and thereby facilitate their potential contributions to sustainable agriculture. In nature, plants interact with multiple PGPRs whose properties may work cooperatively to provide benefits. For example, Kumar et al. observed synergistic effects of ACC deaminase- and siderophore-producing PGPRs that enhanced sunflower growth. This result implies that layering PGP traits in a host strain under single or multiple regulatory circuits may maximize their advantages. Furthermore, microbiome engineering inspired by native PGPR colonization, for example through siderophore-utilizing ability, may open a new era for sustainable agriculture via customized PGPR consortia.
Agricultural science has been enormously successful in providing an inexpensive supply of high-quality and safe foods to developed and developing nations. These advancements have largely come from the implementation of technologies that focus on efficient production and distribution systems as well as selective breeding and genetic improvement of cultured plants and animals.

Although population growth in developed nations has reached a plateau, no slowdown is predicted in the developing world until about 2050, when the population of the world is expected to reach 9 billion. Meeting the global food demand will require nearly double the current agricultural output, and 70% of that increased output must come from existing or new technologies. The global demand for animal products is also growing substantially, driven by a combination of population growth, urbanization, and rising incomes. However, at present, nearly 1 billion people are malnourished. Animal products contain concentrated sources of protein, which have AA compositions that complement those of cereal and other vegetable proteins, and contribute calcium, iron, zinc, and several B-group vitamins. In developing countries where diets are based on cereals or bulky root crops, eggs, meat, and milk are critical for supplying energy in the form of fats. In addition, animal-derived foods contain compounds that actively promote long-term health, including bioactive compounds such as taurine, l-carnitine, and creatine, and endogenous antioxidants such as carnosine and anserine. Furthermore, those foods are a rich source of CLA, forms of which have anti-cancer properties, reduce the risk of cardiovascular disease, and help fight inflammation. Animal production will play a pivotal role in meeting the growing need for high-quality protein that will advance human health. Our technological prowess will be put to the test as we respond to a changing world and increasingly diverse stakeholders. Intensifying food production will likely be confounded by declining feedstock yields due to global climate change, natural resource depletion, and an increasing demand for limited water and land resources.
Additionally, whereas the moral imperative to feed the malnourished people of the world is unequivocal, a well-fed, well-educated, and vocal citizenry in developed nations places a much greater emphasis on the environmental sustainability of production, the safety of food products, and animal welfare, often without regard for impact on the cost of the food. Despite these daunting challenges, the sheer magnitude of potential human suffering calls on us to assume the reins from our recently lost colleague, Norman Borlaug, to harness technological innovation within our disciplines to keep world poverty, hunger, and malnutrition at bay.

As was the case during the Green Revolution, advancements in genetics and breeding will provide a wellspring for a needed revolution in animal agriculture. Indeed, we have entered the era of the genome for most agricultural animal species. Genetic blueprints position us to refine our grasp of the relationships between genotype and phenotype and to understand the function of genes and their networks in regulating animal physiology. The tools are in hand for accelerating the improvement of agricultural animals to meet the demands of sustainability, increased productivity, and enhancement of animal welfare. The goals of animal genetic improvement are firmly grounded in the paradigm of animal production, which naturally refers to concepts of efficiency, productivity, and quality. Sustainability and animal welfare are central considerations in this paradigm; an inescapable principle is that the maximization of productivity cannot be accomplished without minimizing levels of animal stress. Furthermore, the definition of efficiency requires sustainability. Unnecessary compromises to animal well-being or sustainability are morally reprehensible and economically detrimental to consumers and producers alike. The vast majority of outcomes from genetic selection have been beneficial for animal well-being. Geneticists try to balance the enrichment of desirable alleles with the need to maintain diversity because they are keenly aware of the vulnerability of monoculture to disease. Genetic improvement programs must always conserve genetic diversity for future challenges, both as archived germplasm and as live animals. However, unanticipated phenotypes occasionally arise from genetic selection for 2 reasons. First, every individual carries deleterious alleles that are masked in the heterozygous state but can be uncovered by selective breeding.
Second, the linear organization of chromosomes leads to certain genes being closely linked to each other on the DNA molecules that are transmitted between generations. Thus, blind selection for an allele that is beneficial to 1 trait also enriches for all alleles that are closely linked to it, and, through either pleiotropy or linkage disequilibrium, undesirable correlated responses in other traits may occur.

Geneticists are aware of this and closely monitor the health and well-being of populations that are under selection to ensure that any decrease in fitness is detected and that ameliorative actions are taken to correct problems, either by eliminating carriers from production populations, by altering the selection objective to facilitate improvement in the affected fitness traits, or by introducing beneficial alleles through crossbreeding. Increasingly precise molecular tools now allow the rapid identification of genetic variants that cause single-gene defects and facilitate the development of DNA diagnostics to serve in genetic management plans that advance the production of healthy animals. Whole-genome genotyping with high-density SNP assays will enable the rapid determination of the overall utility of parental lines in a manner that is easily incorporated into traditional quantitative genetic improvement programs. The approach is known as genomic selection (GS) and essentially allows an estimation of the genetic merit of an individual by adding together the positive or negative contributions of alleles across the genome that are responsible for the genetic influence on the trait of interest. Under GS, genetic improvement can be accelerated by reducing the need for performance testing and by permitting an estimation of the genetic merit of animals outside currently used pedigrees. Genomic selection also provides for the development of genetic diagnostics using experimental populations, which may then be translated to commercial populations, allowing, for the first time, the opportunity to select for traits such as disease resistance and feed efficiency in extensively managed species such as cattle. The presence of genotype × environment interactions will also require the development of experimental populations replicated across differing environmental conditions to enable global translation of GS.
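The additive logic of genomic selection can be illustrated with a small numerical sketch. The genotype matrix and SNP effect sizes below are simulated stand-ins, not real estimates; in practice the effects would be estimated in a genotyped and phenotyped training population.

```python
import random

# Illustrative sketch (simulated data) of genomic selection: an animal's
# genomic estimated breeding value (GEBV) is the sum, over all SNPs, of
# the estimated effect of each allele copy it carries.
random.seed(0)

N_SNPS = 1000
# Per-allele effect sizes, assumed to come from a training population
snp_effects = [random.gauss(0.0, 0.05) for _ in range(N_SNPS)]

def gebv(genotype):
    """Genotype is a list of allele counts (0, 1, or 2) per SNP."""
    return sum(g * e for g, e in zip(genotype, snp_effects))

# Five candidate animals with simulated genotypes
candidates = [[random.randint(0, 2) for _ in range(N_SNPS)] for _ in range(5)]
merits = [gebv(g) for g in candidates]

# Selection: keep the candidate with the highest estimated merit
best = max(range(len(merits)), key=lambda i: merits[i])
print("estimated merits:", [round(m, 2) for m in merits])
print("selected candidate:", best)
```

Because merit is estimated directly from genotypes, candidates outside existing pedigrees can be ranked without progeny testing, which is the source of the acceleration described above.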
The speed with which the performance of animals can be improved by GS is determined by the generation interval; litter or family size; the frequency of desirable alleles in a population; and the proximity on chromosomes of good and bad alleles. Although predicting genetic merit using DNA diagnostics may be less precise than directly testing the performance of every animal or their offspring, the reduction in generation interval more than offsets this. For example, in dairy populations, the rate of genetic improvement is expected to double with the application of GS. Preliminary results from the poultry industry suggest that GS focused on leg health in broilers and livability in layers can rapidly and effectively improve animal welfare. Although price constraints currently limit the widespread adoption of high-density SNP genotyping assays in livestock species, low-cost, reduced-subset assays containing the most predictive 384 to 3,000 SNP are under development in sheep, beef, and dairy cattle.
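The trade-off described here, slightly lower accuracy but a much shorter generation interval, can be sketched with the standard breeder's equation for annual genetic gain. All parameter values below are illustrative assumptions chosen to mirror the dairy example, not measured figures.

```python
# Annual genetic gain follows the breeder's equation:
#   delta_G = (i * r * sigma_A) / L
# where i = selection intensity, r = accuracy of the merit estimate,
# sigma_A = additive genetic standard deviation, and L = generation
# interval in years. All numbers are illustrative assumptions.

def annual_gain(i, r, sigma_a, L):
    return i * r * sigma_a / L

# Progeny testing: high accuracy but a long generation interval
progeny = annual_gain(i=2.0, r=0.9, sigma_a=1.0, L=6.5)
# Genomic selection: somewhat lower accuracy, much shorter interval
genomic = annual_gain(i=2.0, r=0.7, sigma_a=1.0, L=2.5)

print(f"relative gain with GS: {genomic / progeny:.2f}x")
```

Under these assumed values the shorter interval roughly doubles the annual rate of gain, consistent with the dairy projection quoted above.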

These low-cost assays are expected to be rapidly adopted and will be expanded in content as the price of genotyping declines. Animal selection based on GS is also expected to reduce the loss of genetic diversity that occurs in traditional pedigree-based breeding, because the ability to obtain estimates of genetic merit directly from genotypes avoids restricting selection to the currently used parental lineages. Also, despite the increase in the rate of genetic improvement, selection for complex traits involving hundreds or thousands of genes will not result in the rapid fixation of desirable alleles at all of the underlying loci. Whereas GS will accelerate animal improvement in the postgenomic era, parallel and overlapping efforts in animal improvement based on genome-informed genetic engineering must ensue to ensure that productivity increases apace with the expanding world population. The tools of functional genomics and the availability of genome sequences provide detailed information that can be used to engineer precise changes in traits, as well as to monitor any adverse effects of such changes on the animal. These tools are also enabling a deeper understanding of gene function and the integration of gene networks into our understanding of animal physiology. This understanding has begun to identify major-effect genes and critical nodes in genetic networks as potential targets for GE. The genomics revolution has been accompanied by a renaissance in GE technologies. Novel genes can be introduced into a genome, and existing genes can either be inactivated or have their expression tuned to desirable levels using recently developed RNA interference. The specificity and efficiency of these approaches is expected to continue to improve.
The technical advancements in GE are so significant that Greger advocated that scrutiny of the procedures for generating transgenic farm animals is undeserved and that discussion should focus on the welfare implications of the desired outcome instead of unintended consequences of GE. This position is also reflected in the rigorous regulatory mechanism established by the FDA for premarket approval of GE animals, which considers the risks of a given product to the environment and the potential impact on the well-being of animals and consumers. Indeed, this review mechanism was recently adopted as an international guideline by Codex Alimentarius, which has already found GE to be a safe and reliable approach to the genetic improvement of food animals. In addition, guidelines for the development and use of GE animals that promote good animal welfare, enhance credibility, and comply with current regulatory requirements have been developed as stewardship guidance. The stewardship guidance assists industry and academia in developing and adopting stewardship principles for conducting research and for developing and commercializing safe and efficacious agricultural and biomedical products from GE animals for societal benefit. Both GS and GE are viable, long-term approaches to genetic improvement, but when should one approach be employed over the other? Genes are not all equal in their effects on phenotype. The products encoded by some genes have major effects on biochemical pathways that define important characteristics or reactions in an organism. Other genes have lesser, but sometimes still important, effects. In general, genetic modification by GE is used to add major-effect genes, whereas genetic selection is applied to all genes, including the far larger number of lesser-effect genes that appear to be responsible for about 70% of the genetic variation within a given trait.
One of the most significant advantages of GE is the ability to introduce new alleles that do not currently exist within a population, particularly where the allele substitution effect would be very large. This approach can include gene supplementation and genome editing, the latter enabling the precise transfer of an alternative allele without any other changes to the genome of an animal.

The most common application of forward osmosis treatment methods is seawater desalination

The forward osmosis desalination process usually includes osmotic dilution of the draw solution and freshwater production from the diluted draw solution. There are two types of forward osmosis desalination, distinguished by the water production method. One applies a thermolytic draw solution that breaks down into volatile gases when heated; these gases can be recovered during thermal decomposition and recycled to regenerate a draw solution with high osmotic pressure. The other uses forward osmosis for filtration or dilution of water. For instance, the combination of reverse osmosis and forward osmosis can be used for drinking water treatment or brine removal, and forward osmosis can also fully or partly replace ultrafiltration under certain circumstances. Recent studies in materials science have also shown that forward osmosis can be used to control drug release in the human body and to control food concentration during the production phase. Regarding the semi-permeable membrane used in forward osmosis, the tubular membrane is more functional for several reasons: it allows solution to flow on both sides of the membrane, it maintains high hydraulic pressure without deformation owing to its self-supported structure, and it is easier to fabricate while retaining high flexibility and density. Although a substantial amount of energy is required to treat seawater using forward osmosis, its potential has been demonstrated through bench-scale experiments, indicating that further investigations are needed to evaluate its commercial application. Seawater desalination has provided freshwater for over 6% of the world's population. One commonplace model of forward osmosis seawater treatment uses a hollow fiber membrane; the key parameter in the hollow fiber membrane model is the minimum draw solution flow rate.
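The driving force behind the osmotic dilution described above can be sketched with the van't Hoff approximation for osmotic pressure. The concentrations below are illustrative assumptions: roughly 0.6 M NaCl for seawater and an arbitrary 1.5 M for a concentrated draw solute.

```python
# Sketch of the FO driving force: water moves from the feed toward the
# draw solution because the draw has the higher osmotic pressure.
# Van't Hoff approximation: pi = i * M * R * T (i = 2 for NaCl).
# Concentration values are illustrative assumptions.

R = 0.08314  # gas constant, L*bar/(mol*K)
T = 298.15   # temperature, K

def osmotic_pressure_bar(molarity, vant_hoff_i=2):
    return vant_hoff_i * molarity * R * T

seawater_feed = osmotic_pressure_bar(0.6)   # ~0.6 M NaCl seawater
draw_solution = osmotic_pressure_bar(1.5)   # assumed concentrated draw

# Spontaneous water flux requires draw pressure > feed pressure
print(f"feed: {seawater_feed:.1f} bar, draw: {draw_solution:.1f} bar")
```

No hydraulic pressure is applied; the pressure difference alone pulls water across the membrane, which is why FO needs so much less pumping energy than reverse osmosis.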

When the flow rate increases, the energy requirement increases as well. In an ideal forward osmosis process, CDO and CFI should be equal. Figure 2-3 below shows the schematic diagram of the forward osmosis membrane module. To assess the energy consumption of the FO process, the solution concentrations and flow directions in the module must be determined first. The data support that the energy required for pumping the draw solution is less than that for pumping the feed solution. To determine the effects of the direction of hydraulic pressure in the module, modules with various solution concentrations and flow rates were designed to compare energy efficiency. In conclusion, the results demonstrate that to reduce the energy consumption of seawater desalination, the diameters of the FO module need to be optimized. The flow rates and concentrations of the draw and feed solutions also play a major role in energy efficiency. The module study illustrates that when a high-flow-rate feed solution is on the shell side and a low-flow-rate draw solution is on the lumen side, the system consumes less energy. Another vital application of forward osmosis is food concentration and enrichment. Multiple studies have concluded that FO is efficient for dewatering in food production. Compared with traditional concentration methods, such as pressure-driven membranes, FO requires less energy and causes less nutrition loss; here, nutrition loss refers to the reduction of the monomer fructose. A closed-loop feed solution and draw solution system is built as shown in Figure 2-4 below. Garcia-Castello tested two membranes in this system: a flat-sheet cellulosic membrane and an AG reverse osmosis membrane (AG refers to a membrane designation manufactured by Sterlitech). The results show that the AG membrane has a higher salt rejection rate.
During the procedure, once the water flux reaches a constant value, a stock feed solution is added to the tank to reach the next feed solution concentration.

At the end of the experiment, the highest feed solution concentration is 1.65 M sucrose. Comparing the performance of the different membranes, the AG membrane yields better results when concentrating sucrose solution owing to its thicker support structure. Temperature also has a significant impact on water flux; higher temperatures usually yield higher water fluxes. Compared with the concentration factor of RO, FO achieves a better concentration factor of 5 while requiring much less energy. Fertilizer-drawn forward osmosis (FDFO) applies the forward osmotic dilution of fertilizer draw solutions. This technology can be used for direct agricultural irrigation, and most fertilizers can serve as draw solutions for FDFO. Fertilizer-drawn forward osmosis shares the same principle as forward osmosis: freshwater as the feed solution flows through the semi-permeable membrane into the fertilizer draw solution under natural osmotic pressure. Additional treatments might be required to reach the water quality needed for different purposes. Regarding the nitrogen removal purpose of this review, operating conditions such as feed solution concentration, feed solution flow rate, and specific water flux can affect the effectiveness of nitrogen removal. Fertilizer-drawn forward osmosis has common applications in water recycling and fertigation. Nanofiltration is a viable solution for diluting the fertilizer draw solution for recycling purposes. Fertilizer-drawn forward osmosis has used brackish water, brackish groundwater, treated coal mine water, and brine water as feed solutions; in other words, water with relatively low total dissolved solids can serve as the feed solution for fertilizer-drawn forward osmosis. Moreover, fertilizer-drawn forward osmosis is also effective for biogas energy production when applied to an anaerobic membrane bioreactor as a hybrid process.
In conclusion, fertilizer-drawn forward osmosis is effective for sustainable agriculture and water reuse; its considerable recovery rate makes it suitable for supplying the hydroponics stage of an anaerobic membrane bioreactor. Due to the scarcity of fresh water in arid areas, hydroponics has been used for vegetable production. In hydroponics, a subset of hydroculture, crops are cultivated in a soilless environment, with their roots exposed to mineral nutrient solutions or fertilizers. Without soil, this type of agricultural production avoids certain problems associated with traditional crop production, including soil pollution, low fertilizer-use efficiency, and the spread of pathogens. This technology also allows the production of crops in arid, infertile, or densely populated areas. However, economic cost aside, this technique requires large amounts of both fresh water and fertilizer compared with soil-based crop production. This can easily cause detrimental environmental effects such as water waste and contamination, with excess nitrogen, potassium, and phosphate resulting in eutrophication. To balance cost, efficiency, and quality, reverse osmosis and ultrafiltration are more advanced and general approaches than biological seawater treatments. In terms of treating seawater, hydroponic nutrient solutions perform similarly to other aqueous solutions of lower-molecular-weight salts. By utilizing certain membrane technologies, the treated effluent has a reduced presence of pathogens and retains the ability to be integrated into the fertigation system for direct application. The potential of the fertilizer-drawn forward osmosis process was investigated for brine removal treatment and water reuse through energy-free osmotic dilution of the fertilizer for hydroponics.
Nanofiltration is a pressure-driven membrane process that removes dissolved solutes. The membrane has pores ranging from 1 to 10 nanometers, hence the name "nanofiltration". Nanofiltration uses a similar principle to reverse osmosis: it is a water purification process that requires pressure, but its membranes are partially permeable to ions. Nanofiltration is practical for removing organic substances from coagulated surface water, and it is also economical and environmentally sustainable. In terms of the size and mass of the solutes removed, nanofiltration membranes usually operate in the range between reverse osmosis and ultrafiltration, removing organic molecules with molecular weights from 200 to 400. Nanofiltration membranes can also effectively remove other pollutants, including endotoxins/pyrogens, pesticides, antibiotics, and soluble salts.

Removal rates vary depending on the type of salt. For salts containing divalent anions, such as magnesium sulfate, the removal rate is around 90% to 98%. However, for salts containing monovalent anions, such as sodium chloride or calcium chloride, the removal rate is lower, between 20% and 80%. The osmotic pressure across the membrane is typically 50-225 psi. One advantage of nanofiltration is that it uses lower pressure while sustaining higher water flux, and it has highly selective rejection properties. Typical applications for nanofiltration membrane systems include the removal of color and total organic carbon from surface water, reduction of total dissolved solids, and the removal of hardness or radium from well water. In 1952, Congress passed the Saline Water Conversion Act, aimed at resolving the shortage of freshwater and the excessive use of groundwater. Two years after the act, the first desalination plant in the United States was built in 1954 at Freeport, Texas. The plant is still operating to date and is undergoing improvement; the U.S. Department of Agriculture predicts it will supply 10 million gallons of fresh water per day by 2040. The Claude "Bud" Lewis Carlsbad Desalination Plant is the largest desalination plant in the U.S., delivering almost 50 million gallons of fresh water to San Diego County daily. Desalination is especially prevalent in regions such as the Middle East, home to the world's largest desalination plant in terms of freshwater production: with 17 reverse osmosis units and 8 multi-stage flash units, the plant can produce more than 1,400,000 cubic meters of fresh water per day. In 1960, there were only 5 desalination plants in the world. By the mid-1970s, as the conditions of many rivers deteriorated, around 70% of the world's population could not be guaranteed sanitary and safe freshwater.
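The removal rates quoted above for nanofiltration are observed rejection values, computed from feed and permeate concentrations. The concentrations in this sketch are invented illustrations chosen to land inside the quoted ranges.

```python
# Observed solute rejection: R = 1 - Cp / Cf, where Cp is the permeate
# concentration and Cf the feed concentration. Values are illustrative.

def rejection(feed_conc, permeate_conc):
    """Observed rejection as a fraction (0 to 1)."""
    return 1.0 - permeate_conc / feed_conc

# Divalent salt (e.g. MgSO4): NF typically rejects 90-98%
r_mgso4 = rejection(feed_conc=2000.0, permeate_conc=80.0)   # ppm
# Monovalent salt (e.g. NaCl): rejection is much lower, 20-80%
r_nacl = rejection(feed_conc=2000.0, permeate_conc=900.0)   # ppm

print(f"MgSO4 rejection: {r_mgso4:.0%}, NaCl rejection: {r_nacl:.0%}")
```

This size- and charge-selective behavior between monovalent and divalent ions is what places nanofiltration between reverse osmosis and ultrafiltration.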
As a result, water desalination has become a strategic choice commonly adopted by many countries to resolve the shortage of fresh water, and its effectiveness and reliability have been widely recognized. The limitation and uneven distribution of freshwater resources is one of the most prevalent and serious problems faced by people living in arid areas. To reduce its severity, desalination of saline water or wastewater has long been researched and applied, and in many arid regions the desalination of seawater is evaluated as a promising solution. Although seawater accounts for around 96.5% of global water resources, the global-scale application of seawater desalination is hindered by cost, both financial and energy-related. With the development of energy-saving technologies for seawater desalination, it is becoming viable to use saline water, such as seawater and brackish water, to produce freshwater for industries and communities. Commonly used methods require water pumping and a considerable amount of energy; as a result, forward osmosis is receiving increasing interest in this field, since the FO process requires much less energy. A research team at Monash University in Australia has demonstrated a solar-assisted FO system for saline water desalination using a novel draw agent. The team, led by Huanting Wang and George P. Simon, investigated the potential of a thermoresponsive bilayer hydrogel-driven FO process utilizing solar energy to produce fresh water from saline water. This forward osmosis process is equipped with a new draw agent: a thermoresponsive bilayer hydrogel. Compared with one of the most used draw agents, this dual-layered hydrogel, made of sodium acrylate and N-isopropylacrylamide, induces osmotic pressure differences without the need for regeneration. The thermoresponsive hydrogel layers generate high swelling pressure when absorbing water from highly concentrated saline.
During testing, the researchers used a solution of 2,000 ppm sodium chloride, a standard NaCl concentration for brackish water. Water passes through the semipermeable membrane and is drawn from the saline solution into the absorptive layer. The hydrogel can absorb water up to 20 times its regular volume. The thermoresponsive hydrogel layer, composed only of NIPAM, then absorbs water from the first layer. When the dewatering layer is heated to 32 °C, the lower critical solution temperature, the gel collapses and squeezes out the absorbed fresh water. By contrast, draw agents like ammonium bicarbonate must be heated to 60 °C and then distilled at a lower temperature for regeneration. By focusing sunlight with a Fresnel lens, the concentrated solar energy helped the dewatering flux reach 25 LMH after 10 minutes, similar to the water flux achieved with ammonium bicarbonate.
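Flux quoted in LMH (litres per square metre of membrane per hour) converts directly into permeate volume once a membrane area and duration are fixed. The area and duration below are assumed example values, not from the Monash study.

```python
# Converting a membrane flux in LMH into produced water volume.
# The 25 LMH figure is the reported dewatering flux; the membrane
# area and run time are assumed values for illustration only.
flux_lmh = 25.0   # L / (m^2 * h)
area_m2 = 0.5     # assumed membrane area
hours = 2.0       # assumed run time

permeate_litres = flux_lmh * area_m2 * hours
print(f"permeate produced: {permeate_litres} L")
```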

Network analysis methods are used to analyze the resulting relational structure of the mental model

Furthermore, 15N-Glu-feeding experiments indicated that tea plants can absorb exogenously applied amino acids, which can then be used for N assimilation. In addition, we demonstrated that CsLHT1 and CsLHT6 are involved in the uptake of amino acids from the soil in the tea plant. It has been suggested that tea plants grown in organic tea plantations are subjected to N-deficient conditions due to the absence of inorganic fertilizer. Compared with conventional tea, tea produced under organic management systems contains higher levels of catechins, which are linked to the antioxidant effects of tea infusions. However, organic tea contains lower levels of amino acids, which are also important compounds for tea quality. The decay of large amounts of pruned tea shoots may contribute significantly to soil amino-acid levels in organic tea plantations; the decomposition of such organic matter and nutrient recycling depend largely on soil fungi. Interestingly, the long-term application of high amounts of N fertilizer was found to reduce soil fungal diversity in tea plantations. This could account for why we observed higher amino-acid contents in the organic tea plantation than in the conventional tea plantation, and it implies a more important role for soil amino acids in tea plants grown in organic tea plantations. It has been reported that, in addition to inorganic N, amino acids can support tree growth. As a perennial evergreen tree species, the tea plant can also use organic fertilizer. However, the role of soil amino acids in tea plant growth and metabolism has not yet been investigated. In this study, we observed that the tea plant could take up 15N-Glu, and Glu feeding increased the amino-acid contents in the roots. This revealed that tea plants can take up amino acids from the soil for use in the synthesis of other amino acids.
In our study, nine amino acids were detected in the soil of an organic tea plantation, and the utilization of exogenous Glu was analyzed in detail. In future studies, it will be important to test the roles of various mixtures of amino acids as fertilizers for the growth and metabolism of the tea plant.

The molecular mechanism underlying the uptake of amino acids from the soil by trees has not been thoroughly studied. In this study, we identified seven CsLHTs that were grouped into two clusters, consistent with the LHTs in Arabidopsis. CsLHT1 and CsLHT6 in cluster I have amino-acid transport activity, which is also consistent with AtLHT1 and AtLHT6. Moreover, these two genes were highly expressed in the roots, and both encode plasma membrane-localized proteins. These findings support the hypothesis that CsLHT1 and CsLHT6 play important roles in amino-acid uptake from the soil. However, the members of cluster II (CsLHT2, CsLHT3, CsLHT4, CsLHT5, and CsLHT7) did not display amino-acid transport activity. Interestingly, apart from AtLHT1 and AtLHT6, no other AtLHTs have been shown to transport amino acids. It is possible that cluster II LHTs are involved in the transport of metabolites other than amino acids; for example, AtLHT2 was recently shown to transport 1-aminocyclopropane-1-carboxylic acid, a biosynthetic precursor of ethylene, in Arabidopsis. LHT1 has been thoroughly characterized as a high-affinity amino-acid transporter with a major role in the uptake of amino acids from the soil in both Arabidopsis and rice. In contrast, there is only one report on the function of AtLHT6: it is highly expressed in the roots, and the atlht6 mutant showed reduced amino-acid uptake from media supplied with a high amount of amino acids. Although the authors did not characterize the amino-acid transport kinetics of AtLHT6, their results are consistent with this protein being a low-affinity amino-acid transporter. In the present study, we characterized CsLHT1 as a high-affinity amino-acid transporter with the capacity to transport a broad spectrum of amino acids. By contrast, CsLHT6 exhibited a much lower affinity for 15N-Glu and displayed higher substrate specificity.
Considering that amino-acid concentrations in the soil of tea plantations are low, CsLHT1 may play a more important role than CsLHT6 in the uptake of amino acids from the soil into tea plants. However, amino-acid contents in soils can be much higher locally, particularly in the vicinity of decomposing animal or vegetable matter; in this situation, CsLHT6 may play an important role in amino-acid uptake. In addition, CsLHT6 is also highly expressed in the major veins of mature leaves, suggesting a role for CsLHT6 in amino-acid transport within tea leaves.
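The division of labor between a high-affinity and a low-affinity transporter can be sketched with Michaelis-Menten kinetics. The Vmax and Km values below are invented placeholders ("CsLHT1-like" and "CsLHT6-like") chosen only to show the qualitative pattern, not measured kinetic constants.

```python
# Michaelis-Menten uptake kinetics: v = Vmax * [S] / (Km + [S]).
# A high-affinity transporter (low Km) dominates at the low amino-acid
# concentrations typical of plantation soil; a low-affinity,
# high-capacity transporter (high Km) contributes more where local
# concentrations are elevated. All parameter values are assumptions.

def uptake_rate(s_um, vmax, km_um):
    return vmax * s_um / (km_um + s_um)

low_s, high_s = 10.0, 2000.0                   # soil Glu, micromolar
high_affinity = dict(vmax=1.0, km_um=20.0)     # "CsLHT1-like" placeholder
low_affinity = dict(vmax=5.0, km_um=2000.0)    # "CsLHT6-like" placeholder

at_low = (uptake_rate(low_s, **high_affinity),
          uptake_rate(low_s, **low_affinity))
at_high = (uptake_rate(high_s, **high_affinity),
           uptake_rate(high_s, **low_affinity))

print("at low [S] (high-aff, low-aff):", at_low)
print("at high [S] (high-aff, low-aff):", at_high)
```

At low substrate the low-Km carrier transports more; near decomposing organic matter, where concentrations spike, the high-capacity carrier takes over, matching the roles proposed for CsLHT1 and CsLHT6.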

Given that protocols for the efficient production of transgenic tea cultivars are lacking, CsLHT1 and CsLHT6 expression cannot be modulated by either overexpression or CRISPR/Cas9 gene editing. However, in China there is an abundance of tea plant germplasm resources, and CsLHT1 and CsLHT6 are potential gene markers for selecting germplasms that can efficiently take up amino acids. Moreover, germplasms with high CsLHT1 or CsLHT6 expression can be used as rootstocks for grafting with elite cultivars to improve the ability of these cultivars to take up amino acids from the soil. Alternatively, these germplasms can be utilized through gene introgression. Grafted lines that can efficiently take up amino acids, or novel cultivars, should be better suited for use in organic tea plantations than in conventional tea plantations. One of the core goals of sustainability science is understanding how practitioners make decisions about managing social-ecological systems. In the context of sustainable agriculture, an important research objective is quantifying the economic, environmental, and social outcomes of different farm management practices. However, it is equally important to understand how farmers conceptualize the idea of sustainability and translate it into farm management decisions. The innumerable and often vague definitions of sustainable agriculture make this a challenging task and fuel the debate about linking sustainability knowledge to action. This debate will remain largely academic without empirical analysis of how farmers think about sustainability in real-world management contexts. These questions are relevant not only to agriculture, but to all social-ecological systems and the knowledge networks that are in place to support decision making. This paper addresses these issues by analyzing farmer "mental models" of sustainable agriculture.
Mental models are empirical representations of an individual's or group's internally held understanding of their external world. Mental models reflect the cognitive process by which farmer views about sustainable agriculture are translated into farm management decisions and practice adoption. Our mental models were constructed from content coding of farmers' written definitions of sustainable agriculture and were analyzed using network methods to understand the relational nature of the different concepts making up a mental model.

We test three hypotheses about mental models of sustainable agriculture. First, mental models are hierarchically structured networks, with abstract goals of sustainability more central in the mental model, linked to peripheral, concrete strategies from which practitioners select to attain the goals. Second, goals are more likely to be universal across geographies, whereas strategies tend to be adapted to the specific context of different social-ecological systems. Third, practitioners who subscribe to central concepts in the mental model will more frequently exhibit sustainability-related behaviors, including participation in extension activities and adoption of sustainable practices. Our mental model data were drawn from farmers in three major American viticultural areas in California: Central Coast, Lodi, and Napa Valley. California viticulture is well suited for studying sustainability. Local extension programs have used the concept of sustainability since the 1990s, and farmer participation in sustainability programs is strong. Furthermore, viticulture is geographically entrenched, with viticultural areas established on the basis of their distinct biophysical and social characteristics. Hence, we expect wine grape growers to have well-developed mental models of sustainability, with geographic variation reflecting social-ecological context. Group mental models, which are the focus of this paper, represent the collective knowledge and understanding of a particular domain held by a specific population of individuals. Mental models are an empirical snapshot of the cognitive process that underpins human decision making and behavior. Mental models complement more traditional approaches to understanding environmental behavior by highlighting the interdependent relationships among attitudes, norms, values, and beliefs.
For example, the Values-Beliefs-Norms model of environmental behavior hypothesizes a causal chain running from broad ecological values, to beliefs about environmental issues, to more specific behavioral norms. The network approach used here shows how these types of more general and specific concepts are linked together in a hierarchical and associative structure. Mental models have evolved into an important area of research in environmental policy, risk perception, and decision making, and a growing number of researchers are using mental models to better understand decision making in the context of social-ecological systems. Two approaches that are especially relevant to this paper are Actors, Resources, Dynamics, and Interactions (ARDI) and Consensus Analysis (CA). The ARDI approach uses participatory research methods to construct a group mental model of the interactions among stakeholders, resources, and ecological processes. The final product is a graphic conceptualization of how the group perceives the social-ecological system, its components, and their place in it, which can be used to inform management strategies. The CA approach relies on similar data-collection techniques to elicit a group mental model that captures stakeholders' beliefs and values pertaining to how the social-ecological system should be managed and for what purpose. The mental models are then analyzed using quantitative methods to assess agreement among individuals and identify points of consensus. Along with addressing research questions about practitioner knowledge and decision making, both approaches have been used to facilitate multi-stakeholder management of social-ecological systems.
This paper conceptualizes group mental models as "concept networks" composed of nodes representing unique concepts and ties representing associations among concepts. The concept network approach differs from ARDI and CA in that network analysis methods are used to analyze the structure of mental models and to measure the importance of individual concepts based on their position in the concept network. This approach follows from Carley's work, which is founded on the theoretical argument that human cognition operates in an associative manner.
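The concept-network construction can be sketched in a few lines: each coded definition yields a set of concepts, and a tie is drawn between two concepts whenever they co-occur in the same definition. The definitions below are invented toy data, not responses from the study.

```python
from collections import Counter
from itertools import combinations

# Toy illustration of a concept network built from co-occurrence.
# Each set stands for one farmer's content-coded definition of
# sustainable agriculture (invented examples).
definitions = [
    {"environment", "economic viability", "future generations"},
    {"environment", "soil health", "cover crops"},
    {"economic viability", "environment", "reduced inputs"},
    {"soil health", "environment"},
]

# Tie weight = number of definitions in which a concept pair co-occurs
ties = Counter()
for concepts in definitions:
    for pair in combinations(sorted(concepts), 2):
        ties[pair] += 1

# Degree: how many distinct concepts each concept is tied to
degree = Counter()
for a, b in ties:
    degree[a] += 1
    degree[b] += 1

central = degree.most_common(1)[0][0]
print("most central concept:", central)
```

In this toy network the abstract goal "environment" ends up most central, while concrete strategies such as "cover crops" sit on the periphery, which is exactly the hierarchical pattern Hypothesis 1 predicts.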

When a given concept is presented to the individual, memory is searched for that concept, ties between the concept and associated concepts are activated, and associated concepts are retrieved. The more associations a given concept has, the more likely the concept is to be recalled. Highly connected concepts serve as cognitive entry points for accessing a constellation of associated ideas. We elicited our mental models from the written text of farmers’ definitions of sustainable agriculture, and we follow Carley in arguing that written language can be taken as a symbolic expression of human knowledge. It is important to note that our mental models deviate from Carley’s in that the associations among concepts are nondirectional and do not represent causality between concepts. Ties in our concept network represent concept co-occurrence, where two concepts occurred together in a single definition of sustainable agriculture. See Methods for more details. Hypothesis 1 is that mental models are hierarchically structured, with abstract concepts constraining the cognitive associations among more concrete concepts. For example, practitioners who define sustainability primarily as environmental responsibility versus economic viability may evaluate the benefits and costs of management practices with different criteria. This perspective is related to models of political belief systems, where specific attitudes on public policy issues are predicted by general beliefs about policies and core values. Construal-level theory also suggests that hierarchical belief systems contain abstract, superordinate goals related to subordinate beliefs about the actions needed to achieve them. The hierarchical structure reflects a basic principle of cognitive efficiency in taxonomic categorization, where more abstract concepts provide cognitive shortcuts to retrieve specific linked attributes.
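The co-occurrence ties and concept connectedness described above can be sketched in a few lines of Python. The definitions below are invented toy data, not the study’s responses; the sketch only illustrates how nondirectional co-occurrence ties and concept degree are computed:

```python
from collections import Counter
from itertools import combinations

# Toy definitions of sustainable agriculture, each reduced to a set of
# coded concepts (invented data, not the study's responses).
definitions = [
    {"soil health", "profitability", "future generations"},
    {"soil health", "environmental responsibility", "crop rotation"},
    {"profitability", "environmental responsibility", "soil health"},
]

# A tie is a nondirectional co-occurrence: two concepts are linked when
# they appear together in a single definition.
edges = Counter()
for concepts in definitions:
    for a, b in combinations(sorted(concepts), 2):
        edges[(a, b)] += 1

# Degree = number of distinct concepts a concept co-occurs with; highly
# connected concepts act as cognitive "entry points".
degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

print(degree.most_common())
```

In this toy network, "soil health" ends up with the highest degree because it co-occurs with every other concept, which is exactly the kind of structural centrality the network approach measures.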
The concepts making up mental models of sustainability can be divided into two basic types, each with a different level of abstraction: goals and strategies. Abstract goals are desirable properties, attributes, and characteristics of a sustainable system to be realized. Examples taken from this study include environmental responsibility, economic viability of the farm enterprise, continuation into the future, or soil health and fertility. Strategies are more concrete and include practices or approaches that are thought to contribute to the realization of abstract goals.

It was found that the rate of cortical death was faster in hexaploid wheat and was positively associated with root age

The present study was conducted to address the dosage effect of the 1RS translocation in bread wheat. We used wheat genotypes that differ in the number of 1RS translocations in a spring bread wheat ‘Pavon 76’ genetic background. For generating F1 seeds, Pavon 1RS.1AL was the preferred choice due to its better performance for root biomass than other 1RS lines. Here, we report the dosage effect of the 1RS chromosome arm on the morphology and anatomy of wheat roots. The results from this study validate previous findings of the presence of genes for rooting ability on the 1RS chromosome arm. This study also provides evidence for the presence of genes affecting root anatomy on 1RS. From previous chapters of this dissertation and earlier studies, it was clear that a gene affecting root traits in bread wheat is present on the 1RS chromosome arm. But there was no report on the chromosomal localization of any root anatomical trait in bread wheat. The purpose of this study was to look for variation in root morphology and anatomy among different wheat genotypes and then determine how these differences are related to different dosages of 1RS in bread wheat. During this study, we came to some very interesting conclusions: 1) F1 hybrids showed a heterotic effect for root biomass, and there was an additive effect of 1RS arm number on the root morphology of bread wheat; 2) there was a specific developmental pattern in the root vasculature from top to tip in wheat roots, and 1RS dosage tended to affect root anatomy differently in different regions of the seminal root. Further, the differences in root morphology, and especially anatomy, of the different genotypes have specific bearing on their ability to tolerate water and heat stress. The effect of the number of 1RS translocation arms in bread wheat was clearly evident from the mean values for root biomass: RA1 and RAD4 ranked highest while R0 ranked at the bottom.

These results supported previous studies on the performance of wheat genotypes with the 1RS translocation, where 1RS wheats performed better in grain yield but similarly in shoot biomass. Genotype RD2 performed only slightly better than R0 for root biomass because of its poor performance in one season; otherwise, it showed better rooting ability in the other three seasons. Here, all the genotypes with 1RS translocations showed higher root biomass than R0, which carried a normal 1BS chromosome arm. Data in this study suggested two types of effects of 1RS on wheat roots. First, an additive effect: root biomass increased as 1RS dosage rose from zero to two and then to four. Second, a heterotic effect of 1RS on root biomass and shoot biomass: mid-parent heterosis (MPH) and high-parent heterosis (HPH) of the F1 hybrid were higher for root biomass than for shoot biomass. This further demonstrates the more pronounced effect of 1RS on root biomass than on shoot biomass. Significant positive heterosis was observed for root traits among wheat F1 hybrids, and twenty-seven percent of the genes were differentially expressed between hybrids and their parents. Differential gene expression was suggested to play a role in the root heterosis of wheat and other cereal crops. In a recent molecular study of heterosis, it was speculated that upregulation of TaARF, an open reading frame encoding a putative wheat ARF protein, might contribute to the heterosis observed in wheat root and leaf growth. There is a large void in root research involving the study of root anatomy in wheat as well as other cereal crops. Most of the anatomical literature is limited either to root anatomy near the base of the root or near the root tip in young seedlings. There is still a general lack of knowledge about the overall structure and pattern of the whole root vasculature during later stages of growth in cereals, especially wheat.
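For readers unfamiliar with the MPH and HPH measures used in comparisons like the one above, the standard formulas can be sketched as follows; the trait values are hypothetical root-biomass means chosen only to illustrate the calculation, not the study’s data:

```python
# Mid-parent (MPH) and high-parent (HPH) heterosis, as percentages.
# Trait values below are hypothetical root-biomass means (g/plant).
def mph(f1, p1, p2):
    mid = (p1 + p2) / 2          # mid-parent value
    return 100 * (f1 - mid) / mid

def hph(f1, p1, p2):
    high = max(p1, p2)           # better-parent value
    return 100 * (f1 - high) / high

p1, p2, f1 = 1.0, 1.4, 1.8
print(f"MPH = {mph(f1, p1, p2):.1f}%, HPH = {hph(f1, p1, p2):.1f}%")
# prints: MPH = 50.0%, HPH = 28.6%
```

A positive HPH, as in this made-up example, means the F1 outperforms even its better parent, which is the pattern reported above for root biomass.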
In the present study, root anatomical traits were studied in the primary seminal root of different wheat genotypes containing different dosages of 1RS translocation arms at the mid-tillering stage.

Root sections were made from three regions along the length of the root, viz. the top of the root, the middle of the root and the root tip, to get an overview of the complete structure and pattern of root histology relative to differences in 1RS dosage. Comparison of the different regions of the root of a genotype showed a transition for metaxylem vessel number and CMX area from higher values in the top region of the root to a single central metaxylem vessel in the root tip. The diameter of the stele also became narrower towards the root tip as the roots grew into deeper layers of soil. In the root tip, only central metaxylem vessel diameter and area were traceable, as other cell types were still differentiating. This developmental pattern was consistent across the different wheat genotypes used in this study. Interestingly, there was variation among genotypes in the timing of these transitions in root histology, and this variation was explained by the dosage of the 1RS arm in bread wheat. RD2 and RAD4 transitioned earlier than R0 and RA1 from having multiple metaxylem vessels and a larger stele to a single, central metaxylem vessel and a smaller stele. In the top region, all the root traits were significantly different among genotypes except average CMX vessel diameter and CMX vessel number. Here, the average CMX diameter was calculated from the diameters of all the CMX vessels of a given genotype; because the number of CMX vessels differed among genotypes, so did the total CMX vessel area. Interestingly, all the root traits in the top region showed a negative slope in regression analysis, and most of the slopes were significant, especially for stele diameter, total CMX vessel area, and peripheral xylem pole number. Variation in all the traits was explained by 1RS dosage in the wheat genotypes, and root traits were smaller at higher 1RS dosage.
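The trait-versus-dosage regressions reported above can be illustrated with a minimal ordinary-least-squares sketch; the dosage coding and stele-diameter values below are assumptions for illustration, not the study’s measurements:

```python
# Ordinary least-squares slope and R^2 for a root trait regressed on
# 1RS dosage, mirroring the regressions in the text. The dosage coding
# (0, 1, 2, 4 arms) and stele diameters (um) are illustrative only.
def ols(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - intercept - slope * x) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1 - ss_res / ss_tot

dosage = [0, 1, 2, 4]              # hypothetical 1RS copy numbers
stele_diam = [520, 500, 470, 430]  # hypothetical stele diameters, um

slope, intercept, r2 = ols(dosage, stele_diam)
print(f"slope = {slope:.1f} um per 1RS copy, R^2 = {r2:.3f}")
```

A negative slope with a high R², as this invented data yields, is the pattern described in the text: traits shrink as 1RS dosage rises.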
Significant positive correlations among almost all the root traits from the top region and mid-region of the roots suggested their interdependence in growth and development. Root diameter could not be measured for all the replicates of each genotype because of degeneration and mechanical damage to the cortex and epidermis.

Earlier, the rate of cortical death in seminal roots had been investigated in different cereals. In the root tip, only two traits, CMX vessel area and CMX vessel diameter, were traceable because of the stage of root tip development. A negative slope and a significant R² value in the regression analysis demonstrated the effect of 1RS dosage on CMX vessel area and CMX vessel diameter. This suggested narrower metaxylem vessels with increasing 1RS dosage. In roots, the central metaxylem vessel is the first vascular element to be determined and to differentiate. Here, serial cross-sections of the root tips also confirmed it as the first differentiated vascular element in wheat. The other vascular components differentiate thereafter in relation to the first-formed metaxylem vessel. Feldman first reported that not all metaxylem vessels are initiated at the same level. Root morphology and root architecture are responsible for water and nutrient uptake, while within root anatomy the xylem vessels are essential for transporting water and nutrients to the shoots to allow continued photosynthesis. Variations in xylem anatomy and hydraulic properties occur at the interspecific, intraspecific and intraplant levels. Variations in xylem vessel diameter can drastically affect the axial flow because of the fourth-power relationship between radius and flow rate through a capillary tube, as described by the Hagen–Poiseuille law. Thus, even a small increase in mean vessel diameter has a disproportionately large effect on specific hydraulic conductivity for the same pressure difference across a segment. Xylem diameters tend to be narrower in drought-tolerant genotypes and at higher temperatures. Smaller xylem diameters impose higher flow resistance and slower water flow, which helps the wheat plant to survive water-stressed conditions. Richards and Passioura increased the grain yield of two Australian wheat cultivars by selecting for narrow xylem vessels in seminal roots.
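The fourth-power sensitivity from the Hagen–Poiseuille law can be made concrete with a short sketch; the vessel radii, pressure drop, and segment length are illustrative assumptions:

```python
import math

# Hagen-Poiseuille: Q = pi * r**4 * dP / (8 * mu * L). Radii, pressure
# drop, and segment length below are illustrative, not measured values.
def poiseuille_flow(r, dp, mu, length):
    return math.pi * r**4 * dp / (8 * mu * length)

mu, L, dp = 1.0e-3, 0.1, 1.0e4   # water viscosity (Pa.s), 10 cm segment, Pa
q_wide = poiseuille_flow(30e-6, dp, mu, L)    # 30 um vessel radius
q_narrow = poiseuille_flow(27e-6, dp, mu, L)  # 10% narrower vessel

# 0.9**4 ~= 0.656: a 10% narrower vessel carries ~34% less axial flow.
print(q_narrow / q_wide)
```

This is why a modest reduction in metaxylem diameter, as associated with 1RS dosage above, can substantially slow axial water flow.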
The results of this study showed that the presence of 1RS in bread wheat increased root biomass and reduced the dimensions of some root parameters, especially central metaxylem vessel area and diameter, in the root tip as well as in the top of the root. Manske and Vlek also reported that wheat genotypes with the 1RS translocated chromosome arm had thinner roots and higher root-length density compared with normal wheat with the 1BS chromosome arm under field conditions. These results might suggest a higher root number or more extensive root branching in 1RS translocation wheats. Among 1RS translocation wheats, a significant association was observed between root biomass and grain yield under well-watered and droughted environments. Narrow metaxylem vessels and higher root biomass provide 1RS translocation wheats with better adaptability to water stress and make them better performers for grain yield. Plant development is particularly sensitive to light, which is both the energy source for photosynthesis and a regulatory signal. Upon germination in the dark, a seedling undergoes a developmental program named skotomorphogenesis, which is characterized by an elongated hypocotyl, closed cotyledons, an apical hook, and a short root. Exposure to light promotes photomorphogenesis, which is characterized by a short hypocotyl, open cotyledons, chloroplast development and pigment accumulation. In addition to light, photomorphogenesis is also regulated by several hormones, including brassinosteroid (BR), auxin, gibberellin (GA) and strigolactone (SL).

The molecular mechanisms that integrate the light and hormonal signals are not fully understood. The light signal is perceived by photoreceptors, which regulate gene expression through several classes of transcription factors. Downstream of the photoreceptors, the E3 ubiquitin ligase COP1 acts as a central repressor of photomorphogenesis. COP1 targets several transcription factors for proteasome-mediated degradation in the dark. Light-activated photoreceptors directly inhibit COP1’s activity, leading to the accumulation of COP1-interacting transcription factors, such as HY5, BZS1, and GATA2, which positively regulate photomorphogenesis. Recent studies have uncovered mechanisms of signal crosstalk that integrate light signaling pathways with the BR, GA, and auxin pathways. The transcription factors of these signaling pathways directly interact with each other in cooperative or antagonistic manners to regulate overlapping sets of target genes. BR has been shown to repress, through the transcription factor BZR1, the expression of positive regulators of photomorphogenesis, including the light-stabilized transcription factors GATA2 and BZS1. BZS1 is a member of the B-box zinc finger protein family, which has two B-box domains at its N terminus without any known DNA-binding domain. It is unclear how BZS1 regulates gene expression. Recent studies have shown that SL inhibits hypocotyl elongation and promotes HY5 accumulation in Arabidopsis plants grown under light, but the molecular mechanisms through which SL signaling integrates with light and other hormone pathways remain largely unknown. Immunoprecipitation of protein complexes followed by mass spectrometry analysis (IP-MS) is a powerful method for identifying interacting partners and post-translational modifications of a protein of interest.
In particular, research in animal systems has shown that combining stable isotope labeling with IP-MS can quantitatively distinguish specific interacting proteins from non-specific background proteins. Stable isotope labeling in Arabidopsis (SILIA) has been established as an effective method of quantitative mass spectrometry; however, the combination of SILIA with IP-MS had yet to be established. To further characterize the molecular function of BZS1, we performed SILIA-IP-MS analysis of the BZS1 protein complex and identified several BZS1-associated proteins. Among those are COP1, HY5, and BZS1’s homologs STH2/BBX21 and STO/BBX24. We further showed that BZS1 directly interacts with HY5 and positively regulates HY5 RNA and protein levels. Genetic analysis indicated that HY5 is required for BZS1 to inhibit hypocotyl elongation and promote anthocyanin accumulation. In addition, BZS1 is positively regulated by SL at both the transcriptional and translational levels. Plants overexpressing a dominant-negative form of BZS1 show an elongated-hypocotyl phenotype and reduced sensitivity to SL, similar to the hy5 mutant. Our results demonstrate that BZS1 acts through HY5 to promote photomorphogenesis and is a crosstalk junction of light, BR and SL signals. This study further advances our understanding of the complex network that integrates multiple hormonal and environmental signals.

The pH dependence of bulk nanobubble formation can also be analysed using this equation

However, as recently reported by Ushikubo, nanobubbles of inert gases possess similar lifetimes and are formed from helium, neon, and argon. Since the only intermolecular forces of note they experience are van der Waals forces of attraction, Lifshitz forces and dipole-dipole interactions, it can be assumed that these are also strong enough, and the gases sufficiently inert, for the same mechanism, as well as the steric hindrance of the hydroxide ions, to apply in this case. Considering the formation of a 1 μm micro-bubble which eventually shrinks into a nanobubble, the number of ions available to it for stabilisation from the water it displaces upon formation, at pH 7, is approximately 33. Even if all of these ions were adsorbed, this does not agree with the zeta potentials reported by Takahashi et al. for micro-bubbles of comparable size, which by the equation correspond to approximately 495 ions. It follows that the adsorbed ions diffuse toward the nanobubble surface from the surrounding bulk fluid, which can explain the apparent generation of free radicals observed by Takahashi et al., since there is now a minuscule concentration difference present to drive the diffusion. The availability of hydroxide ions also depends on the pH, and at pH 7 it is thus possible for stable nanobubbles to form, as reported by Ushikubo, as well as providing a mathematical treatment for their stabilization and the calculation of their surface charge. At lower pH, in the absence of other ions, the concentration of stabilizing ions would be lower due to the lower availability of hydroxide ions and the increased time needed for them to diffuse to the surface of the nanobubble, allowing it more time to shrink. The dependence of the size of the bulk nanobubble on external pressure is given by the equation. Of the external pressure, atmospheric pressure contributes one proportion of the total actual pressure, the rest being the pressure exerted by the fluid.
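The ~33-ion figure above can be checked with a back-of-the-envelope calculation: the number of hydroxide ions, at pH 7, contained in the volume of water displaced by a 1 μm diameter bubble:

```python
import math

N_A = 6.022e23     # Avogadro's number, 1/mol
d = 1.0e-6         # bubble diameter, m
conc_oh = 1.0e-7   # [OH-] at pH 7, mol/L

# Volume of the displaced sphere of water, converted from m^3 to litres.
volume_L = (4 / 3) * math.pi * (d / 2) ** 3 * 1000.0
ions = conc_oh * volume_L * N_A

print(round(ions))  # ~32, the same order as the ~33 quoted in the text
```

The same arithmetic, run at lower pH, immediately shows the reduced availability of hydroxide ions that the text invokes.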
However, the major component of the force contributing to the shrinkage of the nanobubble is the surface tension, which also increases with the size of the nanobubble. Thus, for higher external pressures, and given that a limited amount of gas is dissolved in the fluid, the equation gives a trend of increasing nanobubble size with increasing external pressure.

However, due to the limited amount of gas available, it is expected that the number of nanobubbles formed, i.e. the concentration, will decrease, while still giving a larger particle size. This is confirmed by Tuziuti and co-workers through their observations of air nanobubbles in water. The temperature appears only in the term that describes the internal pressure, causing a linear increase with temperature; this does not take into account the increase in molecular motion due to heat, or the increased energy of the surface ions. Thus, it also shows that the internal pressure will increase with the increase in temperature. This will, in turn, cause a reduction in the radius if all other terms are kept the same. Thus, we can say that, given a limited amount of gas dissolved in the solvent, an increase in temperature will give smaller nanobubbles, but will also cause an increase in the concentration of nanobubbles in the solvent. It is also possible that zeta potentials may decrease, as thermally agitated hydroxide ions may be more susceptible to de-adsorption and may return to solution more easily. Conversely, at lower temperatures, larger bubbles may form, especially by the method of collapsing micro-bubbles, and larger numbers of hydroxide ions may be adsorbed on the surface of the nanobubble, giving longer lifetimes. Bulk nanobubbles are, in essence, minuscule voids of gas carried in a fluid medium, with the ability to carry objects of the appropriate nature, that is, positively charged objects, for a significant length of time if the nanobubble is left alone; this lifetime is also controllable, since the bubbles can be made to collapse with ultrasonic vibration or magnetic fields. The applications, then, seem to be limited only by how we can manipulate and design systems that make use of these properties for new technology in several fields.
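A rough sense of why surface tension dominates the pressure balance of small bubbles, as noted above, comes from the Young–Laplace relation P_in = P_ext + 2γ/r; the radii below are illustrative values, not measurements:

```python
# Young-Laplace estimate of a bubble's internal pressure:
# P_in = P_ext + 2 * gamma / r. All values are illustrative.
gamma = 0.072      # surface tension of water near 25 C, N/m
p_atm = 101325.0   # external (atmospheric) pressure, Pa

for r in (1e-6, 100e-9, 50e-9):   # micro- to nano-scale radii
    p_in = p_atm + 2 * gamma / r
    print(f"r = {r:.0e} m -> internal pressure ~ {p_in / p_atm:.1f} atm")
```

At a 100 nm radius the Laplace term alone exceeds ten atmospheres, which is why the surface-tension contribution, rather than the external pressure, governs shrinkage at these scales.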
As mentioned before, thus far technology has made use of the uncontrolled collapse and generation of bulk nanobubbles, in the fields of hydroponics, pisciculture, shrimp breeding, and algal growth, while the property of emission of hydroxide ions during collapse has been applied to wastewater treatment.

Here and there, there are indications of greater possibilities, as evidenced by research into their ability to remove microbial films from metals, to remove calcium carbonate and ferrous deposits from corroded metal, the use of hydrogen nanobubbles in gasoline to improve fuel efficiency, and their potential to serve as nucleation sites for crystals of dissolved salts. The following sections elaborate on further applications which are possible in the near future. Proton exchange membrane fuel cells (PEMFCs) are finding wide application in several fields due to the ease of their deployment, their low start-up times, and the convenience of their size and operating temperatures. However, significant limitations exist for their wider application, which can broadly be classed under the headings of catalysis, ohmic losses, activation losses, and mass transfer losses. The first of these is due to the rate of catalysis of the splitting of hydrogen, which cannot be pushed beyond a certain limit due to the constraints of temperature. But the larger issue is the cost of the catalyst itself, which is a combination of platinum nanoparticles and graphite powder, the latter providing the electrical conductivity. The inclusion of platinum presents a significant cost disadvantage, and while efforts to reduce or replace platinum as a catalyst are ongoing, these are still experimental and much research continues in this field. The second limitation is due to ohmic losses, which accumulate in the proton exchange membrane, also termed the electrolyte, and can only be reduced by reducing the thickness of the membrane. Currently popular membranes are usually made of Nafion, a sulphonate-grafted derivative of polytetrafluoroethylene marketed by DuPont, but experimental membranes include graphene, aromatic polymers, and other similar materials which possess a high selective conductivity toward protons [ref].
However, below a certain thickness the membranes are unable to mechanically support themselves, and mechanical failure of the membrane will often cause a break in operations. The third limitation is due to the start-up conditions of the fuel cell, and is a matter of the mechanics of operation of the fuel cell itself. The last limitation is due to the transport of hydrogen and oxygen to the triple phase boundaries around the catalyst and the transport of water away from them, and is a significant concern for the operation and efficiency of PEMFCs.
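The link between membrane thickness and ohmic loss can be sketched as an area-specific resistance, ASR = t/σ; the Nafion conductivity and thickness values below are rough assumptions for illustration, not measured figures:

```python
# Area-specific ohmic resistance of the electrolyte, ASR = t / sigma:
# halving the membrane thickness halves the ohmic drop at a given
# current density. Conductivity and thicknesses are rough assumptions.
sigma = 10.0   # hydrated Nafion conductivity, ~10 S/m (assumed)

for t_um in (175.0, 50.0, 25.0):     # membrane thicknesses, micrometres
    asr = (t_um * 1e-6) / sigma      # ohm * m^2
    loss_mv = asr * 1e4 * 1000.0     # ohmic drop at 1 A/cm^2, in mV
    print(f"t = {t_um:.0f} um -> ohmic loss ~ {loss_mv:.0f} mV at 1 A/cm^2")
```

Under these assumptions the loss scales linearly with thickness, which is why a nanometre-scale membrane such as graphene would, as argued below, nearly eliminate the ohmic term.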

However, current PEMFCs depend on gaseous hydrogen and oxygen, released from a compressed source and derived from air respectively. This necessitates a mechanically strong membrane and construction to resist the operating pressures. The inclusion of the gas as nanobubbles dissolved in water, however, presents new possibilities when used in combination with microfluidic technology. It becomes possible to replace both membranes and catalysts with materials that have hitherto been discarded for being too mechanically weak, such as graphene, and opens the possibility of using graphene as a combined catalyst and proton exchange membrane, with nanobubbles of hydrogen and air, dissolved in water, acting as the reservoirs for the fuel and oxidant. Such a system would operate on the basis that nanobubbles are negatively charged, and would hence be attracted to the graphene, through which current would be passed in order to activate the process. Air and hydrogen nanobubbles would be separated by the graphene membrane and be adsorbed to opposite sides of it. The graphene membrane would also have a potential difference applied across it in the plane of the graphene layer. This would, in turn, permit the hydrogen to be catalyzed to protons [ref], which would be conducted across the graphene [ref], allowing them to react with the oxygen to form more water, which would be carried away with the flow. Microfluidic bipolar plates would enable the construction of such a device, and such fuel cells could become the future source of energy for several applications. The advantages of such a system would be numerous. Firstly, graphene is far cheaper than platinum and can be used as a catalyst of almost comparable quality, in addition to being the conductor for the removal of electrons released during catalysis. Secondly, the thickness of a graphene sheet is in the range of nanometers, which would mean that ohmic losses would, quite possibly, be nearly eliminated.
Additionally, due to the flow of water as a solvent, the losses due to the mass transport of water away from the triple phase boundary, and those due to the transport of hydrogen and oxygen to it, would also be significantly reduced. The last, but not least, advantage would be the reduction in the size of a single fuel cell. The voltage generated by a fuel cell is independent of its size, which means that a much larger number of fuel cells could fit in the same area as currently deployed fuel cells, providing a much larger voltage. Polymeric foams have been a staple of several products since their inception, and pore size is one of the key properties of a foam that determines its performance. In general, the larger the pore size and the higher the pore count, the lighter the foam. However, both can come at the cost of reduced pore-wall thickness, which makes the whole foam less able to deform elastically and more susceptible to tearing and heat damage, while substantially reducing fatigue resistance and creep resistance. In general, therefore, the standard practice is to strike a balance between pore size and pore count, measured in pores per volume, so as to achieve the desired properties. However, the voids rarely go below one micron in size, which in turn places a limit on the number of pores per volume, limiting both the number of pores it is possible to introduce and the amount of gas that can be introduced into the foam system. While there are several methods of foam manufacturing, including in-situ foam molding and pre-mixed foam molding, none of these offer pore sizes below a few microns reliably and controllably. Furthermore, many of the polymers used in the construction of these foams can be either dispersed or dissolved in water, such as polyamides, polystyrene, polyesters, and polyurethanes.
This offers a unique opportunity to introduce nanobubbles into the system: first dispersing the gas into solution by means of a micro-bubble generator, then dispersing the polymer, either in dilute solution form or as a monomer, and then either coagulating the dispersion, polymerizing it, or cross-linking the chains in solution to create a foam with pore sizes in the nanometer range. At standard pore counts this would give a very high wall thickness, so a large increase in the concentration of introduced nanobubbles would be needed to return the wall thickness to the levels of a microporous foam. The pores can then be opened, if so desired, by a microneedle array or by other methods such as guided bursts of ultrasound, creating structures such as channels only nanometers in width through the foam, and presenting new possibilities for water filtration and purification, as well as for water-quality testing and other similar applications. There are already several applications for such open-celled foams, such as the production of nanopure water, which is currently expensive due to the filtration equipment needed. Thus, open-celled polymeric foams have direct application in these areas, whereas closed-celled foams are potentially lighter, stronger, and tougher than other foams with larger pores and lower pore counts. It is therefore reasonable to suggest that nanobubble technology will find widespread use in this particular application, especially when the cost factor is taken into account.

Orchard growers who were aware of pesticide problems and practices were also more likely to implement BMPs

The majority of producers in the Sacramento River Valley have opted to join the SVWQC, the area’s most encompassing watershed-management coalition, because it allows them to share the costs of the monitoring program, facilitates local oversight, takes advantage of local knowledge and is less intrusive on individuals. Such coalitions also focus on the watershed, attempt to consider the cumulative effects from multiple operations and try to integrate some of the elements of collaborative policy at the local level. However, some producers in the Sacramento River Valley have criticized the non-voluntary nature of the program as an unnecessary regulatory burden. The critical role of diffusion networks is illustrated by the SVWQC’s nested watershed approach, which divides the larger watershed into 10 sub-watershed groups based on county and hydrological boundaries. The sub-watershed groups are typically headquartered locally with organizations such as the county agricultural commissioner, the county farm bureau or a previously established watershed group. The sub-watershed leadership collaborates with other local stakeholders, such as resource conservation districts, UC Cooperative Extension and the federal Natural Resource Conservation Service. The exact structure of the partnerships is different in each sub-watershed, reflecting the unique configuration of networks, political interests, policy expertise, leadership and individual personalities in each area. Regional coordination among the sub-watershed groups is achieved by three main organizations: the Northern California Water Association, Ducks Unlimited and the Coalition for Urban Rural Environmental Stewardship. These organizations ensure professional oversight of the water-quality monitoring program and the timely preparation of required documents and reporting of water-quality monitoring results.

The regional coordinators are headquartered in the Sacramento area and serve as a liaison between the Regional Board and producers in the more distant, rural areas of the Sacramento River Valley. These networks of sub-watershed and regional actors represent each of the three pathways for sustainable agriculture. They inform producers about the requirements of the program, opportunities for participation, and appropriate management practices for protecting and enhancing water quality. They are a main source of social capital and trust, and they help build interagency cooperation as well as encourage producer participation. They encourage cultural change by demonstrating the success of various water-quality programs and practices, as well as by raising public awareness of individual producers who are outstanding examples of stewardship. Whether the Conditional Waiver program is viewed as collaborative or regulatory policy, the diffusion networks involved with the SVWQC make a positive contribution to sustainability to the extent that they facilitate producer participation in water-quality management. To examine the role of diffusion networks, we conducted a mail survey of 5,073 producers from nine Sacramento River Valley counties: Butte, Colusa, Glenn, Shasta, Solano, Sutter, Tehama, Yolo and Yuba. The sample list was constructed mainly from agricultural commissioner pesticide-permit lists. The standard Dillman methodology of delivery was used to encourage response. The respondents were divided into a group of known orchard producers and a group of other producers for whom the specific commodities were not known beforehand. A 12-page survey was mailed to growers, which included 68 questions about their views on water-quality management, political values and farm characteristics; most of the responses were yes/no or 7-point Likert scales. The orchard respondents received several additional questions about orchard management practices.
The survey was administered from November 2004 to February 2005, about 2 years after the introduction of the waiver program. A total of 1,229 producers responded to the survey, including 408 from the orchard group and 821 from the nonspecific group. Except for the analyses of orchard practices, the results presented here apply to the combined 1,229 respondents.

The survey population adequately reflected the diversity of land tenure, operation size, commodity types and operator characteristics in the nine counties. To further validate our survey, we conducted follow-up telephone interviews of mail survey non-respondents in seven of the nine original counties, which targeted 1,078 non-respondents for whom telephone numbers could be found. Of these, 44.7% were determined to be owners of irrigated land and thus eligible for the survey, 16.2% were considered ineligible and 39.1% could never be reached. A total of 300 non-respondents were interviewed by telephone, and the results suggest that the mail survey respondents were more likely to own instead of lease their land and to have slightly higher rates of participation in the coalition groups. This means that we do not have a complete picture of the least-engaged producers, and reflects the difficulty of communicating with smaller and part-time producers. However, the survey does sufficiently represent the economically and politically significant segment of producers who will have the most influence on policy decisions and, eventually, the behavior and attitudes of less active producers. We asked producers about the number of times they had contacted different organizations in the last year, as well as the average level of trust that they had in these organizations, based on an 11-point Likert scale. In the case of the Conditional Waiver, the Regional Board is considered the most important regulatory agency because it has the authority to manage and enforce the program. The diffusion network consists mostly of local agencies that deliver information about policies and practices to individual producers, as well as the regional organizers of the SVWQC. The agricultural commissioners are considered a diffusion agency because, despite having formal regulatory duties, they are usually viewed as ombudsmen who help producers comply with pesticide laws.
The diffusion network agencies received much higher levels of trust and contact than the regulatory agencies. Trust and contact were also positively correlated. Even diffusion agencies with fairly low levels of contact, such as the California Department of Food and Agriculture and two of the regional coalition organizers, had higher levels of trust than might be expected, given their lower frequency of contact by growers. Just the basic descriptive data about trust and contact show how the local diffusion network interacts most effectively with farmers with respect to water-quality management.

We conducted a series of regression analyses to estimate how many times a grower would need to have contact with the diffusion network before leading to a change in three dependent variables associated with successful water-quality management: participation in coalition activities; satisfaction with coalition group policies; and the number of orchard BMPs on a particular farm. The participation measure was a count of the number of watershed activities producers had engaged in, varying in intensity from reading brochures to committee membership. The satisfaction measure took the average level of agreement to four questions about coalition effectiveness for addressing water-quality problems, encouraging the participation of other producers, pooling resources and facilitating BMP adoption. The orchard BMP measure was a count of 11 different practices considered to be protective of water quality. To measure the density of network contacts, we counted the number of organizations contacted by the producer from the diffusion network and the regulatory network. The analysis controlled for a range of other variables that are considered by diffusion-of-innovation models, which are typically used to predict the adoption of agricultural practices. These variables included the producer’s education level, their operation’s income and the total number of acres farmed. For the non-orchard sample, we measured perceptions about the severity of water-quality problems, the likelihood that agricultural sources are causing a problem, and the availability of information about the coalition groups. Due to non-response on the attitude and belief questions, multiple imputation by chained equations was used to estimate missing data on these variables.
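The chained-equations step can be illustrated with a minimal sketch. This is not the authors' actual imputation code; the data are synthetic and the deterministic regression-fill loop below shows only the cycling structure of MICE (a full implementation would add random draws to each imputation and produce multiple imputed datasets):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy survey matrix: columns are attitude/belief items, rows are respondents.
# Some entries are missing (NaN), as with item non-response in a mail survey.
X = rng.normal(size=(200, 3))
X[:, 2] += 0.8 * X[:, 0]           # induce correlation so imputation has signal
mask = rng.random(X.shape) < 0.15  # ~15% missing at random
X[mask] = np.nan

def mice(X, n_iter=10):
    """Minimal chained-equations imputation: cycle through the columns,
    regressing each incomplete column on the others and replacing its
    missing entries with the fitted values."""
    X = X.copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    # Start from mean imputation.
    for j in range(X.shape[1]):
        X[miss[:, j], j] = col_means[j]
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            others = np.delete(X, j, axis=1)
            A = np.column_stack([np.ones(len(X)), others])
            obs = ~miss[:, j]
            beta, *_ = np.linalg.lstsq(A[obs], X[obs, j], rcond=None)
            X[miss[:, j], j] = A[miss[:, j]] @ beta
    return X

X_imp = mice(X)
assert not np.isnan(X_imp).any()  # every missing entry has been filled
```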
For the orchard sample, we asked if the respondent was aware that pesticides have been detected in the Sacramento River and if they have been informed of water-quality management practices. Before reporting the results of the regression analysis, we summarize the rates of practice adoption. The results suggest that adoption rates partly reflect the combination of experience with each practice and the balance between economic risks/costs to crops and environmental protection. For example, some of the conventional pest-management practices, such as basing the time of spraying on weather/wind, have been a part of agricultural research and education since the 1960s, and more is known about how to adapt these practices to specific farm settings to protect water quality while simultaneously controlling pests and reducing overall input costs. Alternative pest-management practices, such as providing beneficial insect habitat, on the other hand, are relatively new and are more complex in terms of their research development and adaptation to on-farm use. There is more uncertainty about their readiness for use, and about balancing their efficacy at reducing pests and associated crop risks with their environmental benefits. Respondents reported moderate adoption rates of runoff-control practices, such as filter strips. These practices are thought to pose few economic risks to crops, but to have fairly clear benefits for reducing the amount of agricultural contaminants entering surface water from dormant-season orchard sprays. An exception is that orchard floor vegetation, depending upon how it is managed, influences orchard temperatures and may increase the potential for freeze damage in orchard crops.
Tables 2 and 3 summarize the results of the regression analysis by presenting unstandardized coefficients, which are interpreted as the expected change in the dependent variable for a one-unit change in an independent variable, controlling for the other independent variables.

Diffusion networks have an important influence on all three dependent variables; the estimated diffusion network coefficients are positive and are statistically different from zero in all models. Unlike correlation coefficients, regression coefficients are not constrained to the range between negative and positive one; their importance must be judged relative to the scales of the variables. To assess their influence on each dependent variable, it is useful to calculate how many additional diffusion network contacts are required to increase the dependent variables by one unit. The fewer the contacts needed, the more power each contact has for changing the relevant outcome. In our survey, we found that the number of contacts needed to change different measures of policy effectiveness was highest for satisfaction with coalition group policies — it takes 20 diffusion network contacts to increase policy satisfaction by 1 point on the 7-point scale. However, the influence of diffusion networks was quite strong for coalition participation and BMP adoption. It took 9.0 additional diffusion network contacts for the adoption of an additional orchard BMP, and 3.7 contacts for another act of coalition participation. Overall, diffusion networks had the strongest influence on coalition group participation, followed by BMP adoption, and the weakest influence on policy satisfaction. Contact with the regulatory network, on the other hand, had no influence on the three dependent variables. The coefficients for the other independent variables — such as operator characteristics and attitudes and beliefs toward water quality — were largely consistent with classic diffusion-of-innovation models. Producers who thought that agriculture influences water quality and who had information about coalition group practices had higher levels of policy satisfaction.
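The contacts-needed figures follow arithmetically from the unstandardized coefficients: if a coefficient b is the expected change in an outcome per additional diffusion-network contact, then 1/b contacts are needed for a one-unit change. A short sketch, using coefficient values back-calculated from the figures quoted above (the published tables may report slightly different values):

```python
# Unstandardized diffusion-network coefficients implied by the
# contacts-needed figures quoted in the text (illustrative values only).
coefficients = {
    "policy satisfaction": 1 / 20,    # 20 contacts per 1-point change
    "orchard BMP adoption": 1 / 9.0,  # 9 contacts per additional BMP
    "coalition participation": 1 / 3.7,
}

for outcome, b in coefficients.items():
    contacts_needed = 1 / b  # contacts required for a one-unit change
    print(f"{outcome}: {contacts_needed:.1f} contacts per unit")
```

The smallest coefficient (satisfaction) therefore requires the most contacts per unit of change, matching the ordering reported above.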
Producers who had more education and higher incomes were more likely to participate, and higher income producers also had implemented more BMPs. Because there was a strong correlation between agricultural income and size of operation, the total-acres variable became significant in regressions that omitted the income variable. This suggests that larger and wealthier operations were more likely to participate in watershed management and to adopt BMPs. The most incongruous finding was that producers who thought that water quality is not a problem were more likely to participate in the coalition group activities, and more-educated growers were less satisfied with coalition policies. This suggests that an important motivation for participation by educated growers was to prevent the implementation of costly new policies for water-quality problems, which many producers perceived to be of lesser importance than other issues, such as urbanization. According to our personal interviews, this type of “policy skepticism” is likely to shift toward problem-solving if water-quality monitoring conducted by the coalition clearly establishes a relationship between agricultural practices and water pollution.

Inseparable from a consideration of foodways is a consideration of labor

Throughout history, humans have obtained much of their food, fuel, and technological needs from the gathering of wild plants, horticulture, agriculture, and arboriculture. Certainly, animal products also have been important components of diet and cuisine, and there has been a recent push to integrate archaeobotanical and faunal datasets to gain fully robust understandings of past foodways. Prehispanic residents of north coastal Peru relied on two main domesticates, camelids and guinea pig, or cuy, and they also exploited white-tailed deer, rodents, small snakes, and lizards, as well as marine resources including marine otter, various near and offshore pelagic fish, sharks, rays, molluscs, and coastal seabirds such as cormorant and pelican, on the coast as well as in middle valley sites. Regardless, the contribution of plant foods to Moche Valley diet was and remains substantial. While the abundance of certain plants in the archaeological record may be the result of differential preservation or ecological constraints, evidence of differential plant use between communities is often conditioned by cultural choices. For example, Morehart and Helmke’s comparison of archaeobotanical data from two Late Classic period Maya sites in the upper Belize Valley, an affluent plazuela group and a commoner farmstead, demonstrated that wood procurement and craft production were socially contingent—some households procured wood from the local environment while others obtained higher quality materials through trade, gifts, or tribute. These practices in turn impacted the organization of household labor, including gendered household tasks such as firewood collection. In addition to status, food selection is often enacted to preserve identity and tradition. In her ethnographic study of Salasacan foodways in the Ecuadorian Andes, Corr found that food informed local construction of personhood and Salasacan identity, in contrast to White/Mestizo identity.
Contrasts between local/non-local, processed/natural, cultivated/store-bought, and Spanish/Indian foods served to strengthen individual as well as collective identities. In addition to the types and amounts of foods consumed, socially-constructed cuisine preferences can be archaeologically evident from distribution patterns across space.

As Hastorf highlights, ethnographic studies have shown that we can see differential spatial patterning of artifacts in storage contexts, food preparation loci, refuse disposal areas, and in or near domestic structures; such patterns are the result of habitual domestic practices. Archaeologists have successfully used spatial analysis of different contexts to examine the intersection of a variety of food-related activities with status, political economy, gender, ritual, and the public/private division. VanDerwarker and Detwiler’s analysis of Cherokee foodways from the Coweeta Creek site revealed that plant food processing took place near townhouses, complicating assumptions about gendered segregation of space in protohistoric Cherokee communities. Based on her analysis of faunal data from Neolithic Çatalhöyük in central Anatolia, Twiss suggests that each household had separate private and communally advertised identities; whereas certain feast foods were placed publicly to announce particular identities to others, quotidian food stores were placed out of sight in private storage rooms on the sides of individual houses. I discuss the intersection of food and social space further in Chapter 5. Silliman explicitly problematizes anthropological conceptions of labor, asserting that a useful definition places it in an economic framework encapsulated within social relations. Citing Wolf, Silliman draws on Marx’s distinction between work and labor: work represents the activities of individuals or groups expending energy to produce, but labor represents a social phenomenon, carried out by human beings bonded to one another in society. Labor’s significance for the anthropology of power and social relations is its ability to be appropriated and enforced as well as its varying impacts on men, women, and children in households and communities.
Within prehistoric archaeology, labor primarily has been approached through studies of political economy , elite control of labor and surplus , and craft specialization . Studies in historic archaeology have addressed the relationships between conscripted labor and tribute, material life, and social relations in colonial households, missions, rancherias, and plantation settings.

Many traditional Andean societies considered the control of labor to be the foundation of social power, rather than possession of material wealth or commodities. With respect to the Inka, all people were categorized into different classes on the basis of their productive capabilities. As described by chronicler Guaman Poma de Ayala, the Inka empire “separated the Indians into ten classes to be able to count them, in order that they were employed in work according to their capacity and that there were no idle people in this reign.” Given the emphasis on labor relations noted in the ethnographic and ethnohistoric literature in the Andes, a deeper consideration of ancient labor dynamics seems critical to understanding Andean political economies and shifts toward increasing sociopolitical complexity and inequality. In his studies of laborers in Franciscan mission contexts and Mexican California ranchos in Alta California, Silliman employs explicit practice-based approaches to labor. According to Silliman, labor is more than simply an economic or material activity; rather, it should be conceived of “as social action and as a mechanism, outcome, or medium of social control and domination.” As Hastorf illuminates, the “places where people complete daily tasks are the nexus of grumbling, confrontation, as well as celebration and awe.” Highlighting labor as practice considers how labor regimes are implemented and then carried out on a daily basis; how labor can be a highly routinized set of practices; and how labor tasks and scheduling are experienced bodily and socially. The procurement, production, processing, and consumption of plant foods in households and for larger community events certainly require a unique set of social practices that leave archaeological signatures. Hastorf outlines a range of labor activities related to these three elements of foodways, from production to processing to consumption.
Production requires preparing soil, planting, fertilizing, mulching, recultivating, watering, weeding, and collecting/harvesting, all of which may require reaping, beating, plucking, uprooting, or furrowing, often more than once during a single growth cycle. Production activities require careful attention to seasonality and scheduling with regard to planting, crop management/maintenance, and harvesting.

With the exception of seed storage, tool production, and the generation of domestic compost, activities related to production take place in fields or home gardens where crops are grown. Archaeologists rarely investigate fields themselves to find evidence of crop production; rather, they make inferences about production activities based on patterns of field crops, tree crops/other fruits, and wild weed seeds that make their way back to domestic habitation sites. The issue of agricultural intensification looms large in this dissertation. Prehistoric agricultural intensification would have involved increased labor investment along the entire set of tasks associated with farming: canal construction and maintenance, terracing, fertilizing, weeding, mulching, harvesting, processing, etc. Ancient farmers would have paid strict attention to seasonality and scheduling of planting, tending, and harvesting; as a result, changes in agricultural rhythms associated with intensification would have conditioned daily practices related to crop production and processing. Processing relates to a range of activities associated with preparation for immediate consumption or storage, in addition to preparing plant parts for their use as shelter, containers, tools, clothing, and so forth. These activities include threshing, winnowing, milling, leaching, grinding, etc., along with cooking activities such as parching, roasting, toasting, boiling, baking, etc. Most of these activities take place within habitation areas and require the use of various material media as well as movement through various spaces, public and private, that provide opportunities for social interaction or restrictions on visibility and community integration. Archaeobotanical data can be used to indicate the spatial location of on-site processing activities, and can also inform on processing that occurs off-site, near fields at times of harvest.
Consumption, the actual intake of foodstuffs, can be reflected in food preparation and cooking strategies. In the absence of direct evidence of consumption in the form of dental calculus, coprolites, or bone chemistry data, consumption practices can be inferred via food remains within hearths, types of cooking and serving vessels, heating techniques, starch grain residues and phytoliths on cooking vessels, and scatterings around hearths and middens where food was prepared and leftovers were discarded.

Some of the literature focused on the political economy of expansionist states considers the role of food in terms of household labor organization and gender hierarchies. Andean researchers have questioned whether state development implied increases in women’s labor and changes in women’s social status. Important approaches also have been developed in Mesoamerican scholarship for considering these issues. For example, Brumfiel argued that the Aztec state increased tribute demands on households, requiring family members to spend more time engaging in labor away from the household. She argues that women’s labor investment in food processing increased with the shift from the cooking of stews and porridge to the preparation of portable but more time-consuming tortillas. Bray and Jennings outline the enormous labor input for chicha brewing, concluding that labor investment in chicha production would have been central to Andean leaders’ ability to organize large-scale feasts. Gero and Jennings and Chatfield suggest that large-scale feasting impacted women’s status, arguing that as feasting became more centralized and production more specialized, women lost control and influence formerly held through domestic production and distribution within a household’s social network. This labor endeavor had different consequences with respect to gender and status in terms of consumption as well. In several cases, including the Inka occupation of the Upper Mantaro Valley of central Peru, the Tiwanaku occupation of Moquegua in southern Peru, and the Gallinazo occupation of Cerro Oreja in the Moche Valley, bone chemistry studies and oral health indicators suggest that men had higher maize intakes, likely a result of participation in public commensal events involving chicha.
In contrast, these differential consumption patterns led to poorer dental health for women; Andean scholars have reported gendered divisions of labor in which females are responsible for masticating maize kernels for chicha production, resulting in higher dental caries rates among women. In certain parts of the Andes and Amazon it has been documented ethnographically that to sweeten chicha, women chew the maize and spit the masticated mixture into the pot where the chicha is then boiled. These differences would not result in differences in male and female stable isotope ratios, as women were not necessarily consuming the maize; depending on the location in the Andes, chicha can be made from a variety of products, and chewing and spitting is not always part of the preparation. Based on her analysis of bioarchaeological data from the Salinar and Gallinazo burials from the site of Cerro Oreja in the Moche Valley, Gagnon suggests that the men of Cerro Oreja were increasingly drafted by elites into work parties where they were provisioned with meat or marine resources, whereas women and children tended agricultural fields and consumed the staple crops they produced and processed, resulting in different gendered diets and dental health. Hastorf documents similar patterns for the Sausa people under Inka hegemony in the Upper Mantaro Valley; stable carbon and nitrogen isotopic values suggest that while women were producing more chicha, only certain men in the Sausa community consumed maize in supra-household community events, and men also had greater access to meat. While women increased their labor in terms of chicha preparation, they did not participate in supra-household consumption. In the Andes, chicha drinking reinforces social hierarchies; social status is marked by the order in which one is served chicha, and whether one acts as giver or receiver.
Dynamics in which women prepared and served chicha that was then consumed by men thus have implications for status as well as traditional gender roles in Andean societies. While a wealth of literature has been devoted to feasting, work parties, etc., less often considered in discussions of political expansion, gender, and labor is the everyday labor associated with farming, foraging, and processing of foodstuffs for daily household needs in addition to supra-household community events. Feasts and daily meals are not necessarily mutually exclusive—when distinguishing between feasts and daily meals, often it is not the type of plant that differs but the way in which it was prepared, presented, or combined with other foods, or in terms of the sheer quantity in which it was used and/or deposited.

The sanitary work conditions variable serves as a proxy for other job-related dangers

Since such disorders are similar to those caused by intestinal parasites that workers could bring from Mexico or that could result from poor sanitation in a worker’s living environment, we used statistical techniques to isolate the effects of poor sanitation in the work environment. Even if poor sanitation leads to physical discomfort, the health problems may not have a significant impact on an individual’s ability to work productively. If these health problems are debilitating, individuals suffering from them should be more likely to be on welfare or unemployment compensation or to have lower earnings. This hypothesis is tested in a model where the probability of being in a welfare program and earnings are a function of personal characteristics and poor health. The next section discusses the survey and the data set utilized in this study. The following section describes the estimation techniques used. Next, three probit equations for gastrointestinal disorders, respiratory problems, and muscular problems conditional on measures of demographic characteristics, living environment, and work environment are presented. Conditional on these health measures, the probability of receiving welfare or unemployment compensation is calculated. Next, the effect of these health measures on earnings is examined. The paper concludes with a discussion of the policy implications of these findings. Our data come from Mines and Kearny’s 1981 survey, “The Health of Tulare County farm workers,” sponsored by the Tulare County Department of Health. Interviewers chosen to administer the questionnaire were fluent in colloquial Spanish and either had farm work backgrounds or had extensive familiarity with farm workers.

This farm worker population largely consists of Mexican-born immigrants with varying degrees of experience with and assimilation into American society. While a large segment of the population, the long-term settled immigrants, have relatively stable living and employment conditions, many of the more recent immigrants do not. The recent immigrants are primarily young Mexican families or “lone Mexican males”. These workers are usually hired by crew leaders or foremen who work for several growers, associations, or packing houses. As a result, the immigrants frequently change from job to job on a daily or weekly basis. Many workers frequently switch crew leaders as well during the season. These mercurial employment conditions are often associated with informal housing arrangements including makeshift shacks, public and private labor camps, and overcrowded apartments in small towns. Many such residences provide inadequate sanitation and food preservation facilities. Many of the survey population are foreign nationals without visas. The threat of apprehension by the Immigration and Naturalization Service induces these workers to be wary of government agencies. Thus, even when such workers are located, they are reluctant to provide comprehensive information to government officials about their employment or legal status. Moreover, most county and other government officials these immigrants meet are non-Hispanic and do not speak Spanish. As a result, more general government surveys often overlook this farm worker population, which is probably exposed to greater health risks than other groups. This study is restricted to the 367 farm workers who are the reported head of their household for whom no data are missing on key variables. Table 1 presents the means, standard deviations, and formal definitions for the variables used in the analysis.
The average worker is a 34-year-old male, has lived in Tulare County for nearly 9 years, has access to a refrigerator and water at home, consumes nearly 8 beers a week and 5 cigarettes, has travelled to Mexico to visit his family 1.3 times in the last 5 years, has an observed family of 4 people, has a 1 in 5 chance of having been deported in the last year, is probably a harvester of grapes or citrus, and has a 30% chance that he lives in either a field or a public or private camp. Of these workers, 57% do piece work, 25% receive unemployment compensation, and 17% of their families receive welfare payments. Workers reported whether or not they exhibited various acute or chronic health problems at least once a month, and these self-reported illnesses are not separately confirmed. These problems are coded as binary dummy variables. As a result, each of these health variables captures both serious and relatively minor problems.

The probability that a worker reports a GI problem is 17%; a respiratory problem, 26%; and a muscular problem, 50%. Although the survey only recorded the presence or absence of a job site toilet, this variable probably represents the effects of the lack of toilets, fresh drinking water, and water for washing hands. That is, the lack of toilets is believed to be highly correlated with the lack of water for drinking and washing. Other statistically significant variables also have substantial effects on the probability of having a GI disorder. Compared to the typical worker, a female worker’s probability of having a GI disorder is 127% higher than a male’s. Interviewers reported, however, that females were more likely to complain about both major and minor illnesses than men, so this difference may be due to reporting differences rather than differences in health. Similar results were found in Wisconsin. Not having a refrigerator tripled the probability. An individual who lives in a public camp has a 325% higher probability of GI disorders. A worker who lived in Mexico six months ago has a 136% higher probability of disease. The likelihood-ratio test statistic that none of the household amenities matter equals 8.46, and hence that hypothesis is rejected at the 0.05 level. Since there are only 35 households headed by a female or lacking a refrigerator and these variables have large coefficients, the health equations were re-estimated dropping those families. The resulting equations were virtually identical in terms of the effects of the remaining variables on the probability of health problems and the asymptotic t-statistics. Based on this weak robustness test, including these two variables and the entire sample does not qualitatively alter the probit estimates. The elasticity of the probability with respect to the number of times an individual has been deported in the last year, at the sample means, is -0.16. The sign of this variable is puzzling.
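The elasticities quoted here are point elasticities of the probit probability evaluated at the sample means: for a regressor x_k with coefficient β_k and linear index x̄′β, the elasticity is β_k·x̄_k·φ(x̄′β)/Φ(x̄′β). A sketch with purely hypothetical coefficient and mean values (the actual estimates are in the paper's tables):

```python
from math import exp, pi, sqrt, erf

def phi(z):
    """Standard normal density."""
    return exp(-z * z / 2) / sqrt(2 * pi)

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def probit_elasticity(beta_k, xbar_k, index_at_means):
    """Elasticity of P(y=1) = Phi(x'beta) with respect to x_k,
    evaluated at the sample means."""
    return beta_k * xbar_k * phi(index_at_means) / Phi(index_at_means)

# Hypothetical values: a deportations coefficient of -0.4, a mean of
# 0.2 deportations per year, and a linear index of -0.95 at the means
# (Phi(-0.95) is about 0.17, matching the sample GI-problem rate).
print(round(probit_elasticity(-0.4, 0.2, -0.95), 2))
```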
Other variables that are significant at the 0.10 level include the number of times one visited his or her family in Mexico in the last five years, which has the expected positive effect, and whether one is a non-Mexican foreigner, which has a positive effect. This equation correctly predicts the health of 84% of the sample, but is over-likely to predict that one does not have the disorder. This over-prediction of health is not surprising since only 17% of the sample have GI problems, and probits typically have difficulty predicting relatively rare events, that is, events on the tail of the distribution. Four pseudo-R2 measures, which range from 0.10 to 0.17, are reported in Table 2. McFadden has suggested an alternative measure of goodness of fit for an estimated dichotomous model called a prediction success index. This index compares the proportion successfully predicted for an alternative against that which would be predicted by chance.
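McFadden's pseudo-R² compares the fitted model's log-likelihood with that of an intercept-only model: R² = 1 − ln L(model)/ln L(null). A minimal illustration with a hypothetical model log-likelihood (the study's actual likelihoods are not reported in the text):

```python
from math import log

def mcfadden_r2(ll_model, ll_null):
    """McFadden's pseudo-R^2: 1 - lnL(model) / lnL(null)."""
    return 1 - ll_model / ll_null

def null_loglik(p, n):
    """Log-likelihood of an intercept-only binary model with sample rate p."""
    return n * (p * log(p) + (1 - p) * log(1 - p))

n, p = 367, 0.17           # sample size and GI-problem rate from the text
ll_null = null_loglik(p, n)
ll_model = 0.85 * ll_null  # hypothetical: model improves on the null by 15%
print(round(mcfadden_r2(ll_model, ll_null), 2))  # 0.15, within the reported 0.10-0.17 range
```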

This model’s prediction success index is 0.12. These results suggest that being exposed to a bacterium, parasite, or virus in Mexico; lacking sanitation at work; lacking refrigeration at home; other living and working conditions; and gender are the primary factors. Only two factors appear to explain respiratory problems. First, and most statistically significant, is whether the individual is a lone Mexican male worker. Nearly half of the lone Mexican male workers, who comprise 29% of the sample, reported respiratory problems, compared to 20% of the rest of the sample. The corresponding figures for GI problems are 22% versus 15%; and for muscular problems, the figures are 60% versus 47%. These lone males are the workers most likely to have recently immigrated from Mexico. They have lived in Tulare County for an average of only 3.4 years compared to 10.5 years for the rest of the sample. Controlling for other factors, a lone Mexican male has a 46.8% probability of having a respiratory problem compared to 15.4% for other males. The second factor that is statistically significant is whether the individual lives in a public camp. Compared to a worker with average characteristics, someone who lives in a public camp is 83% more likely to have respiratory problems. The sanitary work conditions variable was not a statistically significant determinant of respiratory problems, however. The pseudo-R2 measures vary between 0.11 and 0.18. The percentage of correct predictions is 73%, while McFadden’s prediction success index is 0.13. As an experiment, we added to the basic specification crop and occupation variables. The coefficient on spraying is positive with an asymptotic t-statistic of 1.86, so that it is statistically significantly different from 0 at the 0.10, but not the 0.05 level. No other occupational or crop coefficient had an asymptotic t-statistic higher than 0.9. The explanatory power of that probit was about the same as the basic specification.
Since this extended model produces results similar to the basic model, since none of the crop and occupational variables have asymptotic t-statistics that are different from zero at even the 0.10 level in the other equations, and since these variables may be endogenous, only the basic equations are reported. Respiratory problems, then, are primarily associated with lone Mexican males, but not with any particular living or working condition except, possibly, spraying and living in public camps. The factors that put lone Mexican males at greater risk of respiratory problems than others are unknown.

Muscular Problems

The results indicate that muscular problems have six statistically significant determinants. The number of deportations has an elasticity at the means of 0.05, while the number of trips to visit relatives in Mexico has an elasticity at the means of 0.08.

Presumably these variables are correlated with being a worker who changes employers frequently and who lives in rough conditions not otherwise measured. The same explanation of frequent employment changes can be applied to the lone Mexican male variable, whether one lived in Mexico six months previously, and the public camp variable as well. Finally, males are 41% less likely to have muscular problems. This variable may reflect physiological differences, since males are more likely to have jobs involving heavier lifting. Females may do jobs that involve more bending over, may suffer from muscular problems related to giving birth to and raising children, or may report problems more frequently than men. Again, the sanitary work conditions variable was included as a proxy for other dangers at the workplace. However, it did not have a statistically significant effect. The pseudo-R2 measures range between 0.10 and 0.17. The percentage correctly predicted is 64.6%, while McFadden’s prediction success index is 0.13. Apparently workers who change jobs often suffer from more muscular problems, although that factor is only indirectly measured in our sample. Presumably they work at jobs that involve more muscular strain or live in worse conditions that are not measured explicitly by the sample questions. Again, no particular crop or activity is statistically significantly related to muscular problems. Thus, individual characteristics and home and job-site conditions have statistically significant effects on the three health problems. It is possible, however, that these health problems do not have a significant impact on an individual’s ability to work productively. If these health problems are debilitating, individuals suffering from them should be more likely to be partially or totally unemployed or to be less productive on the job.
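The elasticities at the means reported above (0.05 for deportations, 0.08 for trips to Mexico) follow the standard probit formula: the elasticity of the probability with respect to regressor k, evaluated at the sample means, is phi(x̄'β)·β_k·x̄_k / Phi(x̄'β). A sketch with hypothetical coefficients and means:

```python
import math

def norm_pdf(z):
    """Standard normal density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def probit_elasticity_at_means(beta, xbar, k):
    """Elasticity of the probit probability with respect to regressor k,
    evaluated at the sample means:
        phi(xbar'beta) * beta[k] * xbar[k] / Phi(xbar'beta)."""
    index = sum(b * x for b, x in zip(beta, xbar))
    return norm_pdf(index) * beta[k] * xbar[k] / norm_cdf(index)

# Hypothetical values: intercept and a count regressor with mean 2.0.
print(probit_elasticity_at_means([-0.5, 0.1], [1.0, 2.0], 1))
```

Because the elasticity scales with both the coefficient and the regressor's mean, a variable with a small coefficient but a large mean (such as a count of trips) can still register a noticeable elasticity.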
These effects should be reflected in higher probabilities of receiving welfare or unemployment compensation, or in lower earnings. We first test the hypothesis that ill health contributes to higher participation in welfare programs; the earnings effects are considered next. Both welfare and unemployment compensation participation are modeled as functions of personal characteristics and the three health problems. The sample includes a disproportionate number of employed agricultural workers, so the following results probably underestimate the full effect of ill health for the population at large. Further, since only three health problems are studied, not all ill-health effects are captured. Indeed, severe health problems were excluded because their effects are self-evident. Since our database does not contain information about the eligibility of individuals or families for the programs, the participation rates examined in the following equations reflect the combined effects of being eligible for and applying to the programs.