Category Archives: Agriculture

The connections between equilibrium outcomes in these various scenarios are established

In contrast, when the intensity of the contest among sites is beyond a certain threshold, competition tends to rapidly increase the fragmentation of published news: as the number of competing publishers increases, more and more a priori unlikely topics are reported, resulting in a large diversity of published topics. This result is reminiscent of the emergence of “funny lists” and “heartwarming videos” in the news mentioned by the Financial Times. Our analysis extends to pure and mixed strategies and distinguishes between cases with a small or large number of competing publishers. Next, in a model with firm asymmetries, we find that when some firms have better technology to forecast the popularity of topics, then, surprisingly, the overall diversity of news published by the remaining firms declines, as these firms tend to take refuge in publishing ‘safer’ topics. When a subset of firms earn extra revenue from a published ‘hit’ thanks to loyal users, these ‘branded’ publishers tend to be conservative in their choice of topics, as their loyal customer base represents ‘insurance’ against the contest. In contrast, the diversity of news published by unbranded outlets increases, as unbranded publishers tend to avoid branded ones by putting more weight on a priori unlikely stories. These results are consistent with anecdotal evidence from the news industry, where traditional news outlets are more conservative in their reporting whereas new entrants do not shy away from controversial stories. The findings also conform to the increase in the diversity of the public agenda broadly observed by communication theorists. In a final analysis, we consider endogenous success probabilities. It is widely accepted that the media often ‘makes the news’ in the sense that a topic may become relevant simply because it got published. Interestingly, such a dynamic has an ambiguous effect on the diversity of published topics. If the contest is very strong, it results in a concentrated set of a priori likely topics. When the contest is moderate, the diversity of topics may be higher, depending on the number of competing outlets.
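To make the herding-versus-fragmentation intuition concrete, here is a minimal numerical sketch, not the article's general model: two topics with prior success probabilities p1 > p2, a winner-take-all contest in which all publishers of the successful topic split the prize equally, and a symmetric mixed strategy q (each publisher's probability of picking the likely topic). The closed form for E[1/(1+B)] with B ~ Binomial(m, x) is a standard identity; all parameter values are illustrative.

```python
# Symmetric mixed equilibrium of a two-topic publishing contest (toy sketch).
# Expected payoff of a topic = prior success probability * expected prize
# share, where the prize is split equally among everyone who picked it.

def share(m, x):
    # E[1/(1+B)] for B ~ Binomial(m, x), via the standard closed form.
    if x == 0.0:
        return 1.0
    return (1.0 - (1.0 - x) ** (m + 1)) / ((m + 1) * x)

def mixed_eq(p1, p2, n, iters=200):
    # Bisect on the indifference condition
    #   p1 * share(n-1, q) = p2 * share(n-1, 1-q);
    # converges to q = 1 (full herding on the likely topic) if no interior root.
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        q = 0.5 * (lo + hi)
        if p1 * share(n - 1, q) > p2 * share(n - 1, 1.0 - q):
            lo = q
        else:
            hi = q
    return 0.5 * (lo + hi)

for n in (2, 5, 20, 100):
    print(f"n = {n:3d}: prob. of picking the likely topic = {mixed_eq(0.7, 0.3, n):.3f}")
```

With few publishers everyone herds on the likely topic (q = 1); as n grows, q falls toward p1/(p1+p2), so an increasing share of publishers covers the a priori unlikely topic, which is the fragmentation effect described above.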

The article is organized as follows. In the next section, we summarize the relevant literature. This is followed by the description of the basic model and its analysis, where we first present a variety of results concerning symmetric competitors. Next, we extend the model to explore the impact of asymmetries across firms. Our last extension considers the case of endogenous success probabilities. The article ends with a discussion of the results and their applicability to other contexts. To facilitate reading, all proofs are relegated to the Appendix. The topic of this article is generally related to the literature on agenda setting, which studies the role of media in focusing the public on certain topics instead of others. It is broadly believed that agenda setting has a greater influence on the public than published opinion, whose explicit purpose is to influence the readers’ perspective. As the famous saying by Bernard Cohen goes: “The media may not be successful in telling people what to think but they are stunningly successful in telling their audiences what to think about”. The literature examines the mechanisms that lead to the emergence of topics and the diversity of topics across media outlets. In particular, McCombs and Zhu show that the general diversity of topics as well as their volatility have been steadily increasing over time. The general focus of our article is similar: we show that the nature of competition is an important mechanism affecting the diversity of the public agenda. Agenda setting is also addressed in the literature studying the political economy of mass media. The standard theory states that media coverage is higher for topics that are of interest to larger groups, with larger advertising potential, and when the topic is journalistically more “newsworthy” and cheaper to distribute. Although empirical evidence for some of these hypotheses is scarce, the others are generally supported, among others by Snyder and Stromberg. The last hypothesis is particularly interesting from our standpoint. Eisensee and Stromberg show that the demand for topics can vary substantially over time. For example, sensational topics of general interest may crowd out other ‘important’ topics that would be covered otherwise. This supports the general notion that media needs to constantly forecast the likely success of topics and select among them accordingly.

Our main interest is different from this literature’s, as we primarily focus on media competition as opposed to what causes variations in demand. Taking the demand as given, our goal is to understand how the competitive forces between media firms influence the selection and diversity of topics, which then has a major impact on the public agenda. As such, the article also relates to the literature on media competition where strategic behavior influences product variety. Early theoretical work by Steiner and Beebe on the “genre” selection of broadcasters explains cases of insufficient variety provision in an oligopoly. Interestingly, they show that although certain situations lead to the duplication of popular genres, other scenarios may lead to a “lowest common denominator” outcome where no consumer’s first choice of genre is ever served. A good discussion of these models and their extensions can be found in Anderson and Waldfogel. Our work is different from this literature in two important ways: we do not have consumer heterogeneity, and we do not rely on barriers to entry to explain limited variety. In fact, we study variety precisely when these factors’ importance is greatly diminished. On the empirical side, research on competition primarily focuses on how media concentration affects the diversity of news, both in terms of the issues discussed in the media and the diversity of opinion on a particular issue. For example, George and Oberholzer-Gee show that in local broadcast news, “issue diversity” grows with increased competition even though political diversity tends to decrease. Franceschelli studies the impact of the Internet on news coverage, in particular the recent decrease in the lead-time for catching up with missed breaking news. He argues that missing the breaking news has less impact, as the news outlet can catch up with rivals in less time. This might lead to a free-riding effect among media outlets, where there is less incentive to identify the breaking news. The empirical findings of both articles are consistent with our results and assumptions. In terms of the analytical model, we rely on the literature studying competitive contests among forecasters. For example, Ottaviani and Sorensen use a similar framework to model competition among financial analysts. Our model is different in that we explore in more detail the structure of the state space, we generalize the contest model by considering all possible prize-sharing structures, and we extend it in a variety of ways, most notably by analyzing asymmetries across players. This article studies competition among news providers who compete in a contest to publish on a relatively small number of topics from a large set, when these topics’ prior success probabilities differ and when their success may be correlated.

We show that the competitive dynamic generated by a strong enough contest causes firms to publish ‘isolated’ topics with relatively small prior success probabilities. The stronger the competition, the more diverse the published news is likely to be. Applied to the context of today’s news markets, characterized by increased competition between firms, new entrants, and reduced customer loyalty, we expect a more diverse set of topics covered by the news industry. Although direct evidence is scarce, there seems to be broad empirical support for the general notion that the public agenda has become more diverse over time while also exhibiting more volatility (McCombs). This general finding is consistent with our results. Although diversity of news may generally be considered a good thing, agenda setting, i.e., focusing the public on a few worthy topics, may be impaired by increased competition. In a next step, we explore differences across news providers and find that branded outlets with a loyal customer base are likely to be conservative in their choice of reporting, in the sense that they report news that is a priori agreed to be important. Facing new competitors with better forecasting ability also makes traditional media more conservative. In sum, if the public considers traditional media and not the new entrants as the key players in agenda setting, then increased competition may actually make for a more concentrated set of a priori important topics on the agenda. It is not clear, however, that traditional news outlets can maintain their privileged status in this regard forever. Some new entrants have managed to build a relatively strong ‘voice’ over the last few years. We also explore what happens when the success of news is endogenous, i.e., if the act of publishing a topic ends up increasing its likely success. Interestingly, we find that an excessively strong contest tends to concentrate reporting on topics with the highest a priori success probabilities. We also find that the number of competitors has a somewhat ambiguous effect on the outcome. If there are too few or too many competing firms then, again, agenda setting tends to remain conservative in the sense of focusing on the a priori likely topics. These results also resonate with anecdotal evidence concerning today’s industry dynamics. Our analysis did not consider social welfare. This is hard to do, as it is not clear how one measures consumer surplus in the context of news. Indeed, the model is silent as to consumers’ utility when it comes to the diversity of news. Although policy makers generally consider the diversity of news a desirable outcome, a view that often guides policy and regulatory choices, it is not entirely clear that, beyond a certain threshold, more diversity is always good for consumers. As mentioned in the introduction, the media does have an agenda-setting role, and it is hard to argue that every topic being equally represented in the news is a useful agenda to coordinate collective social decisions. Nevertheless, our goal was to identify the competitive forces that may play a role in determining the diversity of news in today’s environment, increasingly dominated by social media. Our analysis indicates that these forces do not necessarily have a straightforward impact on diversity. The generalized contest model presented has implications for other economic situations that may be well-described by contests.
In this sense, our most relevant results are those that describe the outcome as a function of the reward-sharing patterns across winners. Indeed, we characterize all such patterns with a simple parameter, r, and show that depending on r there are only three qualitatively different outcomes leading to vastly different firm behaviors. Different r’s may characterize different contexts. For our case a finite, albeit varying, r seemed appropriate, and r = ∞ is less likely. In the case of a contest describing R&D competition, r = ∞ is quite plausible. Conversely, the case of r = 0 may well apply to contests among forecasters, whose reward might be linked more closely to actually forecasting the event and less to how many other forecasters managed to do so. Our analysis of the case with a small number of firms may also be useful in particular situations; we show that this case is tractable and shares many characteristics with the case involving many players. An important insight from our analysis is that contest models need to be carefully adjusted to the particular situations studied. Our framework can be extended in a number of directions. So far, we assumed a static model, one where repeated contests are entirely independent. One could also study the industry with repeated contests between media firms, where an assumption is made on how success in a period may influence the reward or the predictive power of a medium in the next period. A similar setup is studied with a Markovian model by Ofek and Sarvary to describe industry evolution for high-tech product categories. Finally, our article generated a number of hypotheses that would be interesting to verify in future empirical research.
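As a concrete illustration of the reward-sharing parameter r discussed above, one natural parameterization (an assumption for exposition; the article's exact sharing rule is not reproduced in this excerpt) gives each of the k publishers of a successful topic the prize share

\[
\pi(k) \;=\; \frac{1}{k^{\,r}},
\qquad
\begin{cases}
r = 0: & \text{each winner receives the full prize, regardless of ties,}\\
0 < r < \infty: & \text{the prize is partially diluted by ties,}\\
r \to \infty: & \text{a winner is rewarded only when alone.}
\end{cases}
\]

Under this reading, r indexes the intensity of the contest: the larger r is, the more a publisher gains by being the sole outlet covering a successful topic.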

Three RCTs and two non-RCTs reported on culturally sensitive or culturally adapted interventions

All studies employed the same widely used and validated screening instrument, the Patient Health Questionnaire-9 (PHQ-9), to determine baseline depression diagnosis. However, there was wide variability in the measures used to define study outcomes. To determine depressive symptom improvement, six of nine studies used the PHQ-9, two studies used the Hopkins Symptom Checklist Depression Scale, and one study used the Hamilton Rating Scale for Depression, the Clinical Global Impression Severity Scale, and the Clinical Global Impression Improvement Scale. One study reported that researchers used their own translation of the PHQ-9 into Chinese, which had been validated in a prior study. Other studies did not specify whether they used validated translations or translated their own instruments. All studies adequately described the interventions and the control conditions. Two studies reported post-intervention follow-up and included outcomes a year after the intervention had ended. Not all studies reported how frequently care managers contacted patients in the intervention group during follow-up. The mean age ranged from 34.8 to 57 years across studies, and 1166 of 4859 participants were male. Among the nine studies, 2679 participants had LEP. Most studies focused on Latino immigrants living in the United States, with Spanish as the preferred language; only two studies included Chinese and Vietnamese immigrants. The majority of LEP participants spoke Spanish. One hundred and ninety-five patients with LEP spoke Mandarin, Cantonese, or Vietnamese. Two studies had poor characterization of participant languages, noting that many spoke “Asian languages,” and citing only clinic language demographics. In two studies reporting that patients preferred a non-English language, the degree of English language proficiency was not described. Three-quarters of participants were recruited from general primary care and had a variety of medical conditions. Other participants were recruited into the studies for specific comorbidities. While intervention details were not always fully described, eight of nine studies employed bilingual care managers for the delivery of care in the collaborative care model.

The ninth study did not explicitly mention how the intervention was delivered to patients with LEP. No studies reported on the use of interpreters. These five studies explicitly tailored their interventions to different cultural groups. The two RCTs and one non-RCT serving Spanish-speaking patients, all conducted by the same research group, culturally tailored the collaborative care model by adapting the intervention materials for literacy and for idiomatic and cultural content. They further included cultural competency training for staff and employed bilingual staff to conduct the intervention. The remaining studies mentioned adding a cultural component to the collaborative care model with the goal of serving Asian immigrants with traditional beliefs about mental illness. One study further adapted the psychiatric assessment for cultural sensitivity. Four of five RCTs reported on change in depressive symptoms; none reported outcomes by preferred language group. Three RCTs reported that the proportion of patients who experienced a ≥ 50% reduction in depressive symptom score was 13% to 25% greater in the intervention arm than in usual care. The last RCT, Yeung et al., reported no statistically significant difference between treatment groups at 6 months; however, the investigators noted availability and high uptake of psychiatric services in both study arms. Three of these four RCTs included cultural tailoring of their interventions. Two RCTs reported on receipt of depression treatment and treatment preferences. In one RCT, 84% of patients treated in the collaborative care intervention received depression treatment, compared to only 33% of patients in the enhanced usual care arm, over 12 months of follow-up. Another RCT focused on depression treatment preferences. Using conjoint analysis preference surveys, this study found that patients preferred counseling or counseling plus medication over antidepressants alone, and that patients preferred treatment in primary care rather than in specialty mental health care. Patients in the collaborative care intervention group were much more likely to receive their preferred treatment at 16 weeks than were patients in usual care. However, this study also found that English speakers in both groups were more likely to receive their preferred treatment modality than their Spanish-speaking counterparts.

One non-RCT study found that 49% and 48% of patients reported improved depressive symptoms at 6 and 12 months, respectively, among study participants treated with collaborative care. The two studies that reported outcomes by preferred language found significant differences between English- and Spanish-speaking patients. Bauer et al. found that Spanish language preference was associated with more rapid and greater overall improvement when compared to English preference, despite not being associated with receipt of appropriate pharmacotherapy. Similarly, Sanchez et al. found that Spanish-speaking Hispanic patients had significantly greater odds of achieving clinically meaningful improvement in depressive symptoms at 3-month follow-up than did non-Hispanic whites. In contrast, Ratzliff et al. found similar treatment process and depression outcomes at 16 weeks among three groups treated with collaborative care: Asians treated at a culturally sensitive clinic, Asians treated at a general clinic, and whites treated at a general clinic. Furthermore, that study did not have a usual care control group to enable evaluation of the intervention. Despite the existence of effective treatment, depression care for patients with LEP is challenging for both patients and clinicians, and better models of care are needed. In a systematic review of the current literature on outpatient, primary care-based collaborative care treatment of depression, we found that collaborative care delivered by bilingual providers was more effective than usual care in treating depressive symptoms among patients with LEP. The systematic review revealed important limitations in the current evidence base. The review was limited by the low number of studies, heterogeneity of study outcomes and definitions, and a lack of data on use of language access services. However, the randomized controlled studies were consistent in treatment effect size, as three of four high-quality RCTs found that 13%–25% more patients reported improved depressive symptoms when treated with collaborative care compared to usual care; the fourth had unusually high rates of treatment in the comparison arm and found no difference between groups. This is consistent with prior systematic reviews of collaborative care treatment.

Review of the two cohort studies that reported outcomes by preferred language found similar-sized improvements: 10% and 27% more Spanish-speaking patients had improved depressive symptoms during 3 months of follow-up when treated with collaborative care, indicating that patients with LEP may benefit as much as, if not more than, English-speaking patients treated with collaborative care. In short, the collaborative care model—with its emphasis on regular screening, standardized metrics, validated instruments, proactive management, and individualized care, and when adapted for care of LEP patients with depression via the use of bilingual providers—appears to improve care for this patient population. Yet while the collaborative care model has performed well in research studies, many questions remain for wider implementation and dissemination in systems caring for patients with LEP. To help guide the dissemination of an effective model of collaborative care for patients with LEP, researchers will need to be more specific in detailing the language skills of participants and any cultural tailoring and adaptations made to the model to serve specific populations, as we found that race and ethnicity are often conflated with language in these studies, and that preferred language and degree of English language proficiency are not always made explicit. Language barriers may increase the possibility of diagnostic assessment bias, diagnostic errors, and decreased engagement and retention in depression care. It is important to note that most studies employed bilingual staff; language concordance may be particularly important when dealing with mental health concerns, as it is associated with increased patient trust in providers, improved adherence to medications, and increased shared decision-making. Furthermore, the collaborative care model may have been addressing cultural barriers to care beyond linguistic barriers. While a few of the studies culturally adapted and modified their collaborative care model and their psychiatric assessments, these adaptations were not addressed in detail and may be difficult to replicate in other settings. Best practices for culturally adapting collaborative care for patients with LEP have yet to be defined. Further research is also needed to more rigorously ascertain the effectiveness of cultural versus linguistic tailoring on the effectiveness of collaborative care in LEP groups. Additionally, given the evidence that depression in racial and ethnic minorities and patients with LEP often goes unrecognized, efforts will be needed to make sure these groups are systematically screened for depressive symptoms and referred for care in culturally sensitive ways. One large implementation study in the state of Minnesota found a marked difference in enrollment into collaborative care by LEP status. Of those eligible for a non-research-oriented collaborative care model, only 18.2% of eligible LEP patients were enrolled over a 3-year period, compared to 47.2% of eligible English-speaking patients. Similarly, Asian patients were underrepresented in studies and likely in collaborative care programs. Yeung et al.
reported that the majority of Chinese immigrants with depression were under-recognized and undertreated in primary care, as evidenced by the fact that only 7% of patients who screened positive for depression were engaged in treatment in primary care clinics in Massachusetts. Referral processes for collaborative care may also need to be improved for patients with LEP.

The reasons for differences in enrollment by LEP status in collaborative care programs remain poorly elucidated and likely include patient-, provider-, and systems-based factors. However, these results suggest that without targeted efforts to screen, enroll, and engage patients with LEP, collaborative care models may only widen mental health disparities for such patients. Studies that examine implementation and sustainability of the collaborative care model are needed. This review has a number of limitations. We may have missed studies where language and participant origin were not adequately described. Additionally, as has been noted in prior systematic reviews of RCTs of collaborative care, participant and provider blinding would not have been feasible, due to the nature of the interventions. Other limitations include the variability in study duration and outcome assessment, making direct outcome comparison difficult. Finally, of the nine studies included in this review, five were conducted in Los Angeles, CA. This may limit the generalizability of our results.

Circadian rhythms arise from genetically encoded molecular clocks that originate at the cellular level and operate with an intrinsic period of about a day. The timekeeping encoded by these self-sustained biological clocks persists in constant darkness but responds acutely to changes in daily environmental cues, like light, to keep internal clocks aligned with the external environment. Therefore, circadian rhythms are used to help organisms predict changes in their environment and temporally program regular changes in their behavior and physiology. The circadian clock in mammals is driven by several interlocked transcription-translation feedback loops. The integration of these interlocked loops is a complicated process that is orchestrated by a core feedback loop in which the heterodimeric transcription factor complex, CLOCK:BMAL1, promotes the transcription of its own repressors, Cryptochrome (CRY) and Period (PER), as well as other clock-controlled genes. Notably, there is some redundancy in this system, as paralogs of both PER and CRY proteins participate in the core TTFL. In general, these proteins accumulate in the cytoplasm, interact with one another, and recruit a kinase that is essential for the clock, Casein Kinase 1 δ/ε (CK1δ/ε), eventually making their way into the nucleus as a large complex to repress CLOCK:BMAL1 transcriptional activity. Despite this relatively simple model for the core circadian feedback loop, there is growing evidence that different repressor complexes that exist throughout the evening may regulate CLOCK:BMAL1 in distinct ways. PER proteins are essential for the nucleation of large protein complexes that form early in the repressive phase by acting as stoichiometrically limiting factors that are temporally regulated through oscillations in expression. As a consequence, circadian rhythms can be disrupted by constitutively overexpressing PER proteins or established de novo with tunable periods through inducible regulation of PER oscillations. CK1δ/ε regulate PER abundance by controlling its degradation post-translationally; accordingly, mutations in the kinases or their phosphorylation sites on PER2 can induce large changes in circadian period, firmly establishing this regulatory mechanism as a central regulator of the mammalian circadian clock.
CRY proteins bind directly to CLOCK:BMAL1 and mediate the interaction of PER-CK1δ/ε complexes with CLOCK:BMAL1, leading to phosphorylation of the transcription factor and its release from DNA; CRY proteins also act as direct repressors of CLOCK:BMAL1 activity by sequestering the transcriptional activation domain of BMAL1 from coactivators like CBP/p300.
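The delayed negative feedback described above is the essential design principle of the TTFL. As a purely illustrative sketch (a generic Goodwin-type oscillator with made-up parameters, not a quantitative model of the mammalian clock), the following shows how a repressor that shuts off its own production through a sufficiently steep, delayed feedback chain produces self-sustained oscillations:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal Goodwin-type negative feedback loop:
# x ~ clock mRNA, y ~ cytoplasmic protein, z ~ nuclear repressor.
# z inhibits transcription of x, closing the loop with a delay set by the
# two intermediate steps. The minimal Goodwin model needs a steep
# nonlinearity (high Hill coefficient) to oscillate with linear decay.
n, d = 16, 0.5

def goodwin(t, s):
    x, y, z = s
    dx = 1.0 / (1.0 + z**n) - d * x   # repressible transcription
    dy = x - d * y                    # translation / modification
    dz = y - d * z                    # nuclear accumulation of repressor
    return [dx, dy, dz]

sol = solve_ivp(goodwin, (0.0, 150.0), [0.1, 0.1, 0.1], max_step=0.05)
z_late = sol.y[2][sol.t > 75.0]       # discard the initial transient

# A self-sustained rhythm keeps swinging between fixed extremes.
print(f"repressor cycles between {z_late.min():.2f} and {z_late.max():.2f}")
```

Constitutive overexpression of the repressor, as with PER in the text above, corresponds to clamping z's production term, which flattens the oscillation.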

The chirality reversal field can be almost halved when a short-pulsed field is applied

The analysis indicates that such a large reservoir acts as a potential evaporating surface that decreases the local surface temperature and cools the entire atmospheric column, decreasing upward motion and resulting in sinking air. This sinking air mass causes low-level moisture divergence, decreases cloudiness, and increases net downward radiation, which tends to increase the surface temperature. However, the evaporative cooling dominates radiative heating, resulting in a net decrease in surface and 2 m air temperature. The strong evaporation pumps moisture into the atmosphere, which suggests an increase in precipitation, but the moisture divergence moves this away from the TGD region with no net change in precipitation. The two processes, increased latent heating with surface cooling and decreased cloudiness with increased downward solar radiation, are opposing feedbacks that are dominated here by the area-mean surface cooling effect. It is not clear if this holds true for other times of the year, when the mean Tmax is lower and cloudiness may be higher. Furthermore, the impacts on the local monsoon flow, precipitation intensity, and frequency have not been studied in this initial investigation. However, these relative changes are significant and will likely have an impact on local ecosystems, agriculture, energy, and the population. Simulations at 10 km are not sufficiently fine to determine the full extent of this sensitivity and, hence, 1 km multi-year simulations will be needed.

A magnetic vortex state is a ground state of a magnetic nanostructure that consists of a perpendicularly magnetized core and in-plane curling magnetizations around the core. Because of its importance in fundamental physics, research on the vortex state is an important emerging topic in magnetism studies, and it has a high potential for application in high-density data storage devices. A magnetic vortex state is energetically fourfold degenerate, as determined by its polarity and chirality, where the polarity, p, refers to the perpendicular direction of the core magnetization and the chirality, c, refers to the curling direction of the in-plane magnetization.

Obviously, the success of a magnetic vortex device will critically depend on the question of how to control the vortex polarity and chirality effectively. Much effort has been invested recently in developing various methods for reversing the vortex polarity and chirality with a low magnetic field. While the chirality can be reversed easily with a weak field of less than 50 mT, the magnetic field required to reverse the vortex core is on the order of 500 mT, which is too large for practical use in device applications. To reduce the vortex core-reversal field, an alternative approach used a dynamic field. A promising result has also been reported for an AC oscillating magnetic field set at the vortex resonance frequency, so that the vortex excitation could assist its polarity reversal. A representative example of such an approach is the vortex gyration excitation, in which the vortex core exhibits a spiral motion as an AC magnetic field is turned on at the gyration eigenfrequency. Core switching occurs subsequently through vortex–antivortex creation and annihilation as the core’s moving speed exceeds a critical value. The core reversal field can be reduced in such a manner to values far below 10 mT. However, this method contains a fundamental problem for applications. After the core reversal and turning off the field, the core gyration exponentially decays to its initial position. The decay radius is comparable to the lateral size of the sample, and the relaxation takes a few hundred nanoseconds. This is a severe obstacle to reading the polarity. Recently, Wang and Dong and Yoo et al. found a new method of vortex core flipping through numerical simulation. They demonstrated that the vortex core polarity could be switched in a radial excitation mode by a perpendicular AC magnetic field. In contrast to the gyration mode-assisted switching, which involves vortex core motion, the radial mode-assisted core switching involves only axially symmetric oscillations, thus preserving the vortex core position. Obviously, the radial mode-assisted core switching has a completely different mechanism from the gyration mode-assisted core switching. The underlying mechanism of the radial mode-assisted core switching was not clearly shown by the simulation.

The critical field obtained by the radial mode in these studies is of the order of 20 mT, larger than for the gyration mode-assisted core reversal. In this work, we studied the underlying mechanism of the radial mode oscillation and outlined a new pathway to reduce the core switching field further, down to the mT range, which is more comparable to the critical field of the gyration-assisted core switching. In addition to micromagnetic simulations, we also established a dynamical equation for the radial mode oscillation from the Landau–Lifshitz–Gilbert equation. This equation clearly reveals the nonlinear behavior of the radial mode and the critical field reduction. For direct comparison of the critical field reduction, the simulation structure was set as described by Yoo et al. According to previous studies, the radial modes are classified by the node number n. The first mode has one node, the vortex core, which means that the magnetization does not oscillate temporally at the vortex core, but the other parts oscillate almost uniformly. The second mode has two nodes; one is the vortex core and the other a concentric circle. Yoo et al. studied the resonance frequencies of the individual radial modes and obtained the eigenfrequencies with the same sample structure as in this study: 10.7 GHz for the first mode, 15.2 GHz for the second mode, and 20.7 GHz for the third mode. They also showed vortex core polarity reversal using the first mode with an oscillating external field of 20 mT. To reduce the radial mode-induced critical field below 10 mT, we stimulated the first mode of the radial oscillation with a different method; that is, sweeping of the external field frequency. The field was sinusoidal with an amplitude of 9 mT, and the field frequency f was slowly varied from 14.0 to 6.0 GHz over 40 ns. Figure 1b shows the magnetization oscillation during frequency sweeping with time. The normalized magnetization along the thickness direction, ⟨mz⟩, and the external magnetic field, Hz, were plotted together. The term ⟨mz⟩ denotes the spatial average of mz over the entire disk. The magnetization oscillation has the same frequency as the field despite the phase difference. From this oscillation, we can get the oscillation amplitude of the magnetization, Iz, in the thickness direction, which is half the difference between the nearest maximum and minimum values of the ⟨mz⟩ oscillation.
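The derived radial-mode equation itself is not reproduced in this excerpt; for reference, the Landau–Lifshitz–Gilbert equation from which such mode equations are obtained has the standard form

\[
\frac{d\mathbf{m}}{dt} \;=\; -\gamma\,\mathbf{m}\times\mathbf{H}_{\mathrm{eff}} \;+\; \alpha\,\mathbf{m}\times\frac{d\mathbf{m}}{dt},
\]

where \(\mathbf{m}\) is the unit magnetization vector, \(\gamma\) the gyromagnetic ratio, \(\mathbf{H}_{\mathrm{eff}}\) the effective field (exchange, demagnetizing, and applied contributions), and \(\alpha\) the Gilbert damping constant.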

After reaching an external field frequency of 6.0 GHz, the frequency sweeping direction was reversed and f returned to 14.0 GHz. In Fig. 1c, Iz is shown as a function of f. It is interesting to note that an external field of 9 mT can reverse the vortex core polarity. In downward sweeping of the frequency, an almost uniform magnetization oscillation was observed on the disk, except for the core, which conserved its width. This uniform oscillation was maintained until Iz reached the maximum amplitude of 0.28 when f was 8.7 GHz. After reaching this critical amplitude, the uniform oscillation collapsed and converged into the disk center, which generated a breathing motion of the core. Such breathing generated a strong exchange field when the core was compressed, and then core polarization switching occurred. Amplitude fluctuations near 8.5 GHz and 10.5 GHz are transition effects discussed below. In contrast to downward sweeping, the upward frequency sweeping did not reach the amplitude of 0.28, so the vortex maintained its polarity. This means that one cycle of frequency sweeping generated one core reversal. It is notable that the amplitude obtained with fixed field frequencies was the same as with upward sweeping. The fixed-frequency amplitudes were determined by amplitude saturation after turning on the external oscillating field. To reverse the core polarity with the upward sweeping oscillation or fixed-frequency oscillation, a larger field was required to achieve a sufficient oscillation amplitude. From this sweeping frequency simulation, it was verified that the critical field was reduced to below 10 mT, and this reduction was only observed in downward sweeping because of the hysteresis behavior of the frequency response. We tested the scalability of the radial mode-induced core reversal. When the radius of the disk was 120 nm, the critical field obtained by the frequency sweeping method was 9.3 mT. The core of a disk with radius 250 nm reverses its polarity with a 12 mT external field. As the radius increases, the critical field also increases. This scalability is an important property for developing data storage devices. Contrary to the radial mode-induced polarity switching, the critical field for gyration-induced polarity switching exhibits an inverse radius dependence, as does the chirality reversal. Finally, we point out the chaotic behavior and the phase commensurability in the radial mode oscillation for further studies. Petit-Watelot et al. observed chaos and a phase-locking phenomenon in vortex gyration with core reversal. We observed similar behavior in the radial mode oscillation. It is expected that a nonlinear oscillator with a sufficiently large driving force will exhibit chaotic motion. We confirmed this chaotic behavior in the radial mode of the vortex. When the oscillating field strength was smaller than Hc, a plot of the variable with respect to its time derivative, for example d⟨mz⟩/dt versus ⟨mz⟩, showed a circular trajectory. But when the field was larger than Hc, this plot became complex in the phase space, which manifests its chaotic behavior. Figure 5 shows examples of the chaos in the radial mode. The frequency was fixed at 13.5 GHz. When H = 60 mT < Hc, it showed a closed circular trajectory, but when H = 90 mT > Hc, the trajectory was not closed. Further increases in the field resulted in closed trajectories. However, the trajectories were not simple circles.
To close the trajectory, 14 cycles of field oscillation were needed, and during these 14 cycles the core reversed four times. In the case of H = 120 mT, core reversal occurred twice in five field oscillations, implying that the core reversal rate was related to the chaotic behavior.
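The amplitude hysteresis that makes downward sweeping effective is characteristic of a driven nonlinear resonator. The toy sketch below, which is not the paper's micromagnetic model, reproduces the effect with a generic Duffing oscillator under a slow frequency chirp; all parameter values are invented for illustration. Note that a hardening spring (b > 0) is used here, so the high-amplitude branch is reached on the upward sweep; the vortex radial mode behaves as a softening resonator (its critical amplitude is reached below the eigenfrequency), for which the sweep directions are mirrored, as in the simulations above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Driven Duffing oscillator x'' + c x' + w0^2 x + b x^3 = F cos(phi(t)),
# where the instantaneous drive frequency phi'(t) is swept linearly.
c, w0, b, F = 0.02, 1.0, 0.05, 0.05   # illustrative parameters

def rhs(t, y, f0, f1, T):
    x, v, phi = y
    f = f0 + (f1 - f0) * t / T        # instantaneous angular frequency
    return [v, -c * v - w0**2 * x - b * x**3 + F * np.cos(phi), f]

def max_amplitude(f0, f1, T=4000.0):
    sol = solve_ivp(rhs, (0.0, T), [0.0, 0.0, 0.0],
                    args=(f0, f1, T), max_step=0.05)
    return np.abs(sol.y[0]).max()

# Sweeping through resonance in opposite directions lands on different
# branches of the folded response curve: that is the hysteresis.
print(f"upward sweep:   max amplitude = {max_amplitude(0.8, 1.4):.2f}")
print(f"downward sweep: max amplitude = {max_amplitude(1.4, 0.8):.2f}")
```

Only the sweep that tracks the resonant branch attains the large "hidden" amplitude; in the vortex simulations this extra amplitude is what compresses the core strongly enough to flip its polarity at a field below 10 mT.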

Thus, to describe the radial mode of the vortex, including its chaotic behavior, a core polarity-related term is needed in the equation of motion. In summary, we studied the nonlinear resonance of the radial mode of the vortex and found that this oscillation mode, corresponding to a Duffing-type nonlinear oscillator, exhibits hysteresis with respect to the external field frequency. Through the hysteresis effect, we can achieve a hidden amplitude that is almost double that obtained with a fixed field frequency, and this amplitude multiplication effect reduces the critical field to below 10 mT. In addition, we pointed out the chaotic behavior of the radial mode for further studies. We think that to complete the study of vortex dynamics, it is timely to start research on the nonlinear behavior in radial modes, as well as in other oscillations of the magnetic vortex.

Targeted protein degradation has emerged over the last two decades as a promising therapeutic strategy with advantages over conventional inhibition. Unlike inhibitors, which operate through occupancy-driven pharmacology, degraders can enable catalytic and durable knockdown of protein levels using event-driven pharmacology. Most degrader technologies, such as proteolysis targeting chimeras and immunomodulatory imide drugs, co-opt the ubiquitin proteasome system to degrade traditionally challenging proteins. Intracellular small molecule degraders have demonstrated success in targeting over 60 proteins, and several are currently being tried in the clinic. However, due to their intracellular mechanism of action, these approaches are limited to targeting proteins with ligandable cytosolic domains. To expand targeted degradation to the cell surface and extracellular proteome, two recent lysosomal degradation platforms have been developed. One, lysosome targeting chimeras (LYTACs), utilizes IgG-glycan bioconjugates to co-opt lysosome shuttling receptors. LYTAC production requires complex chemical synthesis and in vitro bioconjugation of large glycans, which are preferentially cleared in the liver, limiting the applicability of this platform. A second extracellular degradation platform, called antibody-based PROTACs (AbTACs), utilizes bispecific IgGs to hijack cell surface E3 ligases. Due to the dependence on intracellular ubiquitin transfer, AbTACs are limited to targeting cell surface proteins, leaving the secreted proteome undruggable. Thus, there remains a critical need to develop additional degradation technologies for extracellular proteins. Here, we have developed a novel targeted degradation platform, termed cytokine receptor targeting chimeras.

The debate on the scale range of the COI demonstrates how vague and imprecise the concept really is

Winburn and Wagner acknowledged that COIs can be equated with counties but also, and potentially even more significantly, with cities and neighborhoods. Lastly, Stephanopoulos added that “communities exist, and should be represented in the legislature, at different levels of generality,” and that more specific communities can form smaller-scale districts while broader ones can be captured by larger-scale districts like the congressional type. Thus this camp answers that the COI can take a wide range of scales. The opposing camp, however, has doubted that COIs can exist at certain scales. Chambers and Monmonier were skeptical that they hold at the smaller scales, suggesting that they are larger than neighborhoods. Chambers believed that such communities have to be large in order to command a majority in a district, but he was focusing on those relevant to the congressional type, which are almost always far larger than neighborhoods. Monmonier based his case on the improved transport and communication links that have allowed communities to form that are more fragmented and extend beyond one’s residential proximity. Gardner had trouble with the idea that there could be COIs at the larger scales, musing that a congressional district of half a million or more people could hardly be deemed a single, coherent community. May and Moncrief, in their commentary on districts in the Western United States, similarly questioned whether a meaningful COI could be tied to one of the sprawling districts in rural desert environments, though Steen suggested that the fact that such districts are so rural is enough to distinguish them as salient communities. In sum, this camp retorts that the COI exists only at a narrow range of scales and cannot be applied at the largest and smallest ends of the scale spectrum. The frequent references to the neighborhood in this literature on COIs raise the question of how related the two concepts are. They appear to be similar or at least related concepts, especially when one is focusing on the cognitive COI. But this relationship only seems to apply at a particular scale of COI; a large-scale COI made up of multiple counties is obviously not comparable to a neighborhood. Of course, one must first define what exactly a neighborhood is, which is itself an interesting and rich topic that has been approached in various ways. Scholars have given definitions deriving from more socioeconomic or demographic approaches as well as more cognitive ones.

The latter study adopted a cognitive approach by asking residents to indicate where they believe the boundaries of the Koreatown neighborhood to be. If one can define and identify a certain neighborhood as a region, either thematic or cognitive, one can then determine how well it corresponds to a particular scale of COI, whether the two greatly overlap or are even identical. COIs may well exist at different scales, but they are different varieties of COI, with different meanings for residents. One can discover the nature of each scale of COI by recognizing it as a cognitive region. Conceptualizing COIs as cognitive regions offers the greatest potential to discover their meaningful extents, precisely because meaning is a cognitive construct. In this research, I pursue this by soliciting people’s beliefs about the extent of their COI, giving them the freedom to make it as big or small as they choose. Such a survey can reveal the scales people most commonly use to think of COIs, thereby identifying as precisely as possible a range of scales for these cognitive regions. One can also conceive of a scale of “sense of place” by which people have different levels or types of place attachment at different scales. For example, an individual might identify very strongly with his or her city but feel little connection to his or her county. Similarly, some people might identify more with their state than their country, while others might feel the opposite. One can even possess a strong “sense of place” at multiple scales simultaneously. Shamai demonstrated this in a study with Canadian students, finding that they held “nested allegiances” for three different levels of place: country, province, and metropolitan area. However, these students did not feel an equal degree of attachment toward each of these three scales. Rather, they felt a stronger sense of place toward their metropolitan area, followed by their country, and lastly their province. These findings have implications for COI research, because if people can identify with multiple levels of place simultaneously, they can certainly identify with multiple COIs while feeling different levels of attachment toward each. In addition to the COI criterion, the need to respect the boundaries of already existing administrative regions has long been recognized as an important objective for good redistricting. The requirement is currently used in places ranging from Japan to the United Kingdom to California.

While respecting clearly bounded administrative regions is easier to interpret than respecting the more vaguely bounded COIs, the two criteria may in fact be closely related. Counties and cities are often considered to be “vital, legal, and familiar communities of interest”. The residents of such jurisdictions “share a history and collective sense of identity” that help foster a genuine sense of community. Gardner contended that genuine communities arise where relevant ties form, but those bonds last only in jurisdictions with fixed boundaries. He argued furthermore that “common residency in a working, functioning, self-governing locality by itself can give rise to a political and administrative community of interest entitled to recognition. As the Colorado Supreme Court recently observed, ‘counties and the cities within their boundaries are already established as communities of interest in their own right, with a functioning legal and physical local government identity on behalf of citizens that is ongoing’”. Winburn and Wagner likewise identified counties as important COIs in the redistricting context, in large part because they play such a critical role in the electoral process, from registering voters to mailing election information to administering polling places. Bowen made a similar case with cities, as “residents of the same city share much in common—the same taxation levels, the same public problems, and the same municipal government”. These findings suggest that administrative regions may well contribute to the emergence of COIs as cognitive regions, and that the boundaries of the former may also serve as the boundaries of the latter. However, some scholars have cautioned against completely equating administrative regions with COIs. Winburn and Wagner recognized that “counties are [not] the only, or even always the most relevant, political community of interest for a citizen”. Stephanopoulos argued that the two are often different, as when interests and affiliations do not follow administrative boundaries, or when administrative regions contain multiple communities or only parts of communities. He did concede, however, that “the two may sometimes be functionally identical, both because [administrative regions] tend to be inhabited by people with similar socioeconomic characteristics, and because civic ties can foster a sense of kinship”. The consensus appears to be that administrative regions are at the very least useful proxies for COIs, if not in some sense meaningful communities themselves. Whether this is more the case for counties or cities likely depends on locational context; counties are probably more meaningful entities in rural areas than in urban areas.

My dissertation seeks to investigate the effect of both scale and administrative regions on people’s conceptions of their COI. I do so by conducting two studies. The first study seeks to determine the effects of three factors on the cognitive COIs that survey respondents depict. Those factors are the extent of the map given to survey respondents, whether the boundaries of administrative regions are shown to them on the map, and whether they live in an urban or rural locale. This study is an experimental survey of residents of an urban study area and a rural study area, with the manipulated variable being the type of map that residents receive. There are six types of map, because there are three possible map extents, each in versions that have and lack boundaries. Participants in this first study respond by drawing freehand on the map three different areas representing their COI, one being the area that is definitely within their COI, another being the area that is probably within their COI, and the last one being the area that is possibly within their COI. Requiring a series of drawings enables me to achieve a secondary aim of this study—examining variation within respondents’ cognitive COIs by having them depict different levels of confidence, in the same vein as Montello et al. Another secondary aim is to explore how the cognitive COIs that respondents depict coincide with the existing electoral districts, as a function of scale. The second study seeks to determine the extent of the cognitive COIs that survey respondents depict when given free rein to make their region as large or small as they want. Participants respond to this second study by ranking predefined administrative regions on the map according to how confident they are that a given area is within their COI. They do so at three different map scales—one showing large-sized areas, one showing medium-sized areas, and one showing small-sized areas. Respondents also indicate how much they identify with the COI they define at each scale, on a five-point rating scale. This enables me to achieve a secondary aim of this study—investigating whether respondents identify with multiple nested COIs at different scales, and if they do, which ones they identify with the most. Like the first study, my second study achieves the additional secondary aim of exploring how the cognitive COIs that respondents depict coincide with the existing electoral districts, as a function of scale. Both studies together allow me to determine whether COIs exist as cognitive regions at multiple scales. If they do, then I can describe the nature of these regions at those different scales, particularly whether they reflect local districts, counties, and cities.

Focal therapy has the potential to improve management of prostate cancer by reducing side effects associated with radical treatment. While the safety and feasibility of FT strategies have been reported using cryoablation, focal laser ablation, and high-intensity focused ultrasound, long-term oncologic efficacy is unknown.
A critical barrier to robust testing of FT strategies is appropriate patient selection criteria, which are not clearly established. A recent FDA-AUA-SUO workshop on partial gland ablation highlighted this challenge, noting that “some [authors] regard [partial gland ablation] as an alternative to AS for low-risk cancers, whereas others view it as an alternative to radical therapy for selected, higher risk cancers.” Regardless of approach, there is broad agreement on the importance of assessment for FT using multi-parametric MRI followed by targeted biopsy. To clarify the impact of different patient selection criteria on FT eligibility, we retrospectively studied men who had received MRI/ultrasound fusion biopsy, incorporating both targeted and template biopsies. To confirm biopsy findings and to derive the accuracy of fusion biopsy in FT eligibility, we examined whole-organ concordance of eligibility assessment in a subset of patients who underwent radical prostatectomy. All men undergoing MRI/US fusion biopsy at UCLA between January 2010 and January 2016 were retrospectively screened for a suspicious lesion identified on mpMRI, which was found to contain CaP upon targeted biopsy. FT eligibility criteria, based on the NCCN intermediate-risk definition and recent consensus guidelines, were applied. Figure 2 shows histological profiles for FT-eligible patients based on biopsy. Three different patterns of CaP are shown, each suitable for treatment by hemi-gland ablation or less. Men with biopsy-negative ROIs were considered ineligible for FT. Similarly, men without csCaP < 4 mm were also considered ineligible, regardless of the number of positive cores. All collection of clinical data was performed prospectively within a UCLA IRB-approved registry. The fusion biopsy method, which has been previously described, was unchanged throughout the study period. Briefly, within 2 months of biopsy, patients underwent a 3T mpMRI with body coil. MRI interpretation was conducted under the direction of a dedicated uroradiologist, and suspicious lesions were assessed according to UCLA and Prostate Imaging-Reporting and Data System (PI-RADS) criteria. MRI assessment was based on the UCLA assessment system, which pre-dates PI-RADS v1, and, after PI-RADS v2 was established, by both systems, using the highest suspicion category found. At biopsy, images were registered and fused with real-time transrectal ultrasound to generate a 3D image of the prostate with delineated ROIs.

These diverse priorities will place important constraints on animal agriculture in the coming decades

Although the detailed reaction mechanism has not yet been identified, discovery of this distinct function of a methane-producing PLP-dependent enzyme could presage a breakthrough in the practical application of methanotrophs. Diversifying genetic regulatory modules can allow delicate control of synthetic pathways that are activated on demand according to host plant physiology. Fascinating potential targets for dynamic regulation are small molecules involved in plant–microbe interactions and the plant stress response. Ryu et al. recently constructed biosensors for natural and non-natural signaling molecules that enabled control of N fixation in various microbes. More recently, Herud-Sikimić et al. engineered an E. coli Trp repressor into a FRET-based auxin biosensor that undergoes conformational change in the presence of auxin-related molecules but not L-tryptophan. Because the conformational change induced by L-tryptophan is a core function in the Trp operon, the engineered Trp repressor may allow auxin-dependent biosynthesis. Developing dynamic regulatory circuits for controlling expression of PGP traits may help maintain the viability of engineered host microbes in pre-existing microbiomes and thereby facilitate their potential contributions to sustainable agriculture. In nature, plants interact with multiple PGPRs whose properties may work cooperatively to provide benefits. For example, Kumar et al. observed synergistic effects of ACC deaminase- and siderophore-producing PGPRs that enhanced sunflower growth. This result implies that layering PGP traits in a host strain under single or multiple regulatory circuits may maximize their advantages. Furthermore, microbiome engineering inspired by native PGPR colonization, for example through siderophore-utilizing ability, may open a new era for sustainable agriculture via customized PGPR consortia. Agricultural science has been enormously successful in providing an inexpensive supply of high-quality and safe foods to developed and developing nations. These advancements have largely come from the implementation of technologies that focus on efficient production and distribution systems as well as selective breeding and genetic improvement of cultured plants and animals.

Although population growth in developed nations has reached a plateau, no slowdown is predicted in the developing world until about 2050, when the population of the world is expected to reach 9 billion. Meeting the global food demand will require nearly doubling current agricultural output, and 70% of that increased output must come from existing or new technologies. The global demand for animal products is also growing substantially, driven by a combination of population growth, urbanization, and rising incomes. However, at present, nearly 1 billion people are malnourished. Animal products contain concentrated sources of protein, which have AA compositions that complement those of cereal and other vegetable proteins, and contribute calcium, iron, zinc, and several B group vitamins. In developing countries where diets are based on cereals or bulky root crops, eggs, meat, and milk are critical for supplying energy in the form of fats. In addition, animal-derived foods contain compounds that actively promote long-term health, including bioactive compounds such as taurine, l-carnitine, creatine, and endogenous antioxidants such as carnosine and anserine. Furthermore, those foods are a rich source of CLA, forms of which have anti-cancer properties, reduce the risk of cardiovascular disease, and help fight inflammation. Animal production will play a pivotal role in meeting the growing need for high-quality protein that will advance human health. Our technological prowess will be put to the test as we respond to a changing world and increasingly diverse stakeholders. Intensifying food production likely will be confounded by declining feedstock yields due to global climate change, natural resource depletion, and an increasing demand for limited water and land resources. Additionally, whereas the moral imperative to feed the malnourished people of the world is unequivocal, a well-fed, well-educated, and vocal citizenry in developed nations places a much greater emphasis on the environmental sustainability of production, the safety of food products, and animal welfare, often without regard for impact on the cost of the food. Despite these daunting challenges, the sheer magnitude of potential human suffering calls on us to assume the reins from our recently lost colleague, Norman Borlaug, to harness technological innovation within our disciplines to keep world poverty, hunger, and malnutrition at bay.

As was the case during the Green Revolution, advancements in genetics and breeding will provide a wellspring for a needed revolution in animal agriculture. Indeed, we have entered the era of the genome for most agricultural animal species. Genetic blueprints position us to refine our grasp of the relationships between genotype and phenotype and to understand the function of genes and their networks in regulating animal physiology. The tools are in hand for accelerating the improvement of agricultural animals to meet the demands of sustainability, increased productivity, and enhancement of animal welfare. The goals of animal genetic improvement are firmly grounded in the paradigm of animal production, which naturally refers to concepts of efficiency, productivity, and quality. Sustainability and animal welfare are central considerations in this paradigm; an inescapable principle is that the maximization of productivity cannot be accomplished without minimizing the levels of animal stress. Furthermore, the definition of efficiency requires sustainability. Unnecessary compromises to animal well-being or sustainability are morally reprehensible and economically detrimental to consumers and producers alike. The vast majority of outcomes from genetic selection have been beneficial for animal well-being. Geneticists try to balance the enrichment of desirable alleles with the need to maintain diversity because they are keenly aware of the vulnerability of monoculture to disease. Genetic improvement programs must always conserve genetic diversity for future challenges, both as archived germplasm and as live animals. However, unanticipated phenotypes occasionally arise from genetic selection for 2 reasons. First, every individual carries deleterious alleles that are masked in the heterozygous state but can be uncovered by selective breeding. Second, the linear organization of chromosomes leads to certain genes being closely linked to each other on the DNA molecules that are transmitted between generations. Thus, blind selection for an allele that is beneficial to 1 trait also enriches for all alleles that are closely linked to it, and either through pleiotropy or linkage disequilibrium, undesirable correlated responses in other traits may occur.

Geneticists are aware of this and closely monitor the health and well-being of populations under selection to ensure that any decrease in fitness is detected and that ameliorative actions are taken to correct problems, either by eliminating carriers from production populations, by altering the selection objective to facilitate improvement in the affected fitness traits, or by introducing beneficial alleles through crossbreeding. Increasingly precise molecular tools now allow the rapid identification of genetic variants that cause single-gene defects and facilitate the development of DNA diagnostics to serve in genetic management plans that advance the production of healthy animals. Whole-genome genotyping with high-density SNP assays will enable the rapid determination of the overall utility of parental lines in a manner that is easily incorporated into traditional quantitative genetic improvement programs. The approach is known as genomic selection (GS) and essentially allows an estimation of the genetic merit of an individual by adding together the positive or negative contributions of alleles across the genome that are responsible for the genetic influence on the trait of interest (a toy calculation follows this passage). Under GS, genetic improvement can be accelerated by reducing the need for performance testing and by permitting an estimation of the genetic merit of animals outside currently used pedigrees. Genomic selection also provides for the development of genetic diagnostics using experimental populations, which may then be translated to commercial populations, allowing, for the first time, the opportunity to select for traits such as disease resistance and feed efficiency in extensively managed species such as cattle. The presence of genotype × environment interactions will also require the development of experimental populations replicated across differing environmental conditions to enable global translation of GS. The speed with which the performance of animals can be improved by GS is determined by the generation interval, litter or family size, the frequency of desirable alleles in a population, and the proximity on chromosomes of good and bad alleles. Although predicting genetic merit using DNA diagnostics may be less precise than directly testing the performance of every animal or their offspring, the reduction in generation interval by far offsets this. For example, in dairy populations, the rate of genetic improvement is expected to double with the application of GS. Preliminary results from the poultry industry suggest that GS focused on leg health in broilers and livability in layers can rapidly and effectively improve animal welfare. Although price constraints currently limit the widespread adoption of high-density SNP genotyping assays in livestock species, low-cost, reduced-subset assays containing the most predictive 384 to 3,000 SNPs are under development in sheep, beef, and dairy cattle.
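To make the additive logic of genomic selection concrete, the minimal sketch below sums per-SNP allele effects into a genomic estimated breeding value (GEBV) and ranks candidates by predicted merit. The marker effects and genotypes are simulated placeholders, not estimates from any real training population.

```python
import numpy as np

# Sketch of genomic selection (GS): the GEBV of a candidate is the
# sum, across genotyped SNPs, of allele counts times estimated
# allele substitution effects. Effects here are random placeholders;
# in practice they come from a phenotyped training population.
rng = np.random.default_rng(0)

n_snps = 3000                                 # e.g., a reduced-subset assay
snp_effects = rng.normal(0, 0.05, n_snps)     # estimated per-allele effects

# Genotypes coded as 0, 1, or 2 copies of the reference allele.
candidate_genotypes = rng.integers(0, 3, size=(10, n_snps))

# GEBV = sum over loci of (allele count x estimated effect).
gebv = candidate_genotypes @ snp_effects

# Rank selection candidates by predicted genetic merit.
ranking = np.argsort(gebv)[::-1]
print("top candidates by GEBV:", ranking[:3], "merit:", gebv[ranking[:3]])
```

Because this ranking needs only a genotype, not a performance record, it can be computed at birth, which is the source of the generation-interval savings described above.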

These low-cost assays are expected to be rapidly adopted and will be expanded in content as the price of genotyping declines. Animal selection based on GS is also expected to reduce the loss of genetic diversity that occurs in traditional pedigree-based breeding because the ability to obtain estimates of genetic merit directly from genotypes avoids restricting selection to the currently used parental lineages. Also, despite the increase in the rate of genetic improvement, selection for complex traits involving hundreds or thousands of genes will not result in the rapid fixation of desirable alleles at all of the underlying loci. Whereas GS will accelerate animal improvement in the postgenomic era, parallel and overlapping efforts in animal improvement based on genome-informed genetic engineering (GE) must ensue to ensure that productivity increases keep pace with expanding world populations. The tools of functional genomics and the availability of genome sequences provide detailed information that can be used to engineer precise changes in traits, as well as to monitor any adverse effects of such changes on the animal. These tools are also enabling a deeper understanding of gene function and the integration of gene networks into our understanding of animal physiology. This understanding has begun to identify major-effect genes and critical nodes in genetic networks as potential targets for GE. The genomics revolution has been accompanied by a renaissance in GE technologies. Novel genes can be introduced into a genome, and existing genes can either be inactivated or have their expression tuned to desirable levels using recently developed RNA interference approaches. The specificity and efficiency of these approaches are expected to continue to improve. The technical advancements in GE are so significant that Greger advocated that scrutiny of the procedures for generating transgenic farm animals is undeserved and that discussion should focus on the welfare implications of the desired outcome instead of unintended consequences of GE. This position is also reflected by the rigorous regulatory mechanism established by the FDA for premarket approval of GE animals, which considers the risks of a given product to the environment and the potential impact on the well-being of animals and consumers. Indeed, this review mechanism was recently adopted as an international guideline by Codex Alimentarius, which has already found GE to be a safe and reliable approach to the genetic improvement of food animals. In addition, stewardship guidelines for the development and use of GE animals that promote good animal welfare, enhance credibility, and comply with current regulatory requirements have been developed. This stewardship guidance assists industry and academia in developing and adopting stewardship principles for conducting research and for developing and commercializing safe and efficacious agricultural and biomedical products from GE animals for societal benefit. Both GS and GE are viable, long-term approaches to genetic improvement, but when should one approach be employed over the other? Genes are not all equal in their effects upon changes in phenotype. The products encoded by some genes have major effects on biochemical pathways that define important characteristics or reactions in an organism. Other genes have lesser, but sometimes still important, effects.
In general, genetic modification by GE is used to add major-effect genes, whereas genetic selection is applied to all genes, including the far larger number of lesser-effect genes that appear to be responsible for about 70% of the genetic variation within a given trait. One of the most significant advantages of GE is the ability to introduce new alleles that do not currently exist within a population, in particular where the allele substitution effect would be very large. This approach can include gene supplementation and genome editing, the latter enabling the precise transfer of an alternative allele without any other changes to the genome of an animal.

The most common application of forward osmosis treatment methods is seawater desalination

The forward osmosis desalination process usually includes osmotic dilution of the draw solution and freshwater production from the diluted draw solution. There are two types of forward osmosis desalination, distinguished by the water production method. One applies a thermolytic draw solution that breaks down into volatile gases upon heating; these gases can be recycled during thermal decomposition and regenerate high osmotic pressure. The other uses forward osmosis for filtration or dilution of water. For instance, the combination of reverse osmosis and forward osmosis can be used for drinking water treatment or brine removal, and forward osmosis can also fully or partly replace ultrafiltration under certain circumstances. Recent studies in materials science have also shown that forward osmosis can be used to control drug release in the human body, as well as to control food concentration in the production phase. Regarding the semi-permeable membrane used in forward osmosis, the tubular membrane is advantageous for several reasons: it allows solution to flow on both sides of the membrane, it withstands high hydraulic pressure without deformation owing to its self-supporting structure, and it is easier to fabricate while retaining high flexibility and density. Although a substantial amount of energy is required to treat seawater using forward osmosis technology, its potential has been demonstrated through bench-scale experiments, indicating that further investigations are needed to evaluate its commercial application. Seawater desalination has provided freshwater for over 6% of the world's population. One of the commonplace models of forward osmosis seawater treatment uses a hollow fiber membrane. The key parameter in the hollow fiber membrane model is the minimum draw solution flow rate.
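As a rough illustration of the osmotic driving force that FO desalination exploits, the sketch below estimates water flux with the classical solution-diffusion form J_w = A(π_draw − π_feed) and van't Hoff osmotic pressures. The permeability A, the concentrations, and the neglect of concentration polarization are all simplifying assumptions, not parameters reported in this review.

```python
# Order-of-magnitude FO flux estimate: J_w = A * (pi_draw - pi_feed),
# with osmotic pressures from the van't Hoff relation pi = i*C*R*T.
R = 0.083145   # L*bar/(mol*K)
T = 298.15     # K

def vant_hoff_bar(molarity, ions_per_formula=2):
    """Osmotic pressure (bar) of a dilute strong electrolyte."""
    return ions_per_formula * molarity * R * T

pi_feed = vant_hoff_bar(0.60)   # seawater-like NaCl feed, ~0.6 M (assumed)
pi_draw = vant_hoff_bar(1.50)   # concentrated draw solution (assumed)

A = 1.0  # membrane water permeability, L/(m^2 h bar) (assumed)
J_w = A * (pi_draw - pi_feed)
print(f"driving force ~ {pi_draw - pi_feed:.0f} bar -> flux ~ {J_w:.0f} LMH")
```

The point of the sketch is that no hydraulic pressure appears in the flux term, which is why FO needs far less pumping energy than reverse osmosis.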

When the flow rate increases, the energy requirement increases as well. In an ideal forward osmosis process, the draw-outlet concentration (CDO) and the feed-inlet concentration (CFI) should be equal. Figure 2-3 below shows the schematic diagram of the forward osmosis membrane module. To assess the energy consumption in the FO process, the solution concentrations and flow directions of the module should be determined first. The data support the conclusion that the energy required for pumping the draw solution is less than that for pumping the feed solution (a rough estimate follows this passage). To determine the effects of the direction of hydraulic pressure in the module, different modules with various solution concentrations and flow rates were designed to compare energy efficiency. In conclusion, the results demonstrate that to reduce the energy consumption of seawater desalination, the diameters of the FO module need to be optimized. Also, the flow rates and concentrations of the draw and feed solutions play a major role in energy efficiency. The module illustrates that when a high-flow-rate feed solution is on the shell side and a low-flow-rate draw solution is on the lumen side, the system consumes less energy. Another vital application of forward osmosis is food concentration and enrichment. Multiple studies have concluded that FO is efficient for dewatering in food production. Compared to traditional concentration methods, such as pressure-driven membranes, FO requires less energy and causes less nutrition loss. Here, nutrition loss refers to the reduction of monomers such as fructose. A closed-loop feed solution and draw solution system was built as shown in Figure 2-4 below. Garcia-Castello tested two membranes in this system: a flat-sheet cellulosic membrane and an AG reverse osmosis membrane. AG refers to a membrane designation of the manufacturer Sterlitech. The results show that the AG membrane has a higher salt rejection rate. During the procedure, once the water flux reached a constant value, feed stock solution was added to the tank to reach the next feed solution concentration.
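The pumping-energy comparison above can be made concrete with a simple hydraulic power estimate (flow rate × pressure drop ÷ pump efficiency). The flows, pressure drops, and efficiency below are illustrative assumptions only, chosen to mirror the qualitative finding that the low-flow draw stream costs less to pump than the high-flow feed stream.

```python
# Hydraulic pumping power for each stream of an FO module.
# All numbers are assumed, not taken from the cited experiments.
def pumping_power_w(flow_m3_h, dp_kpa, pump_eff=0.7):
    """Power (W) = volumetric flow (m^3/s) * pressure drop (Pa) / efficiency."""
    return (flow_m3_h / 3600.0) * dp_kpa * 1e3 / pump_eff

feed_w = pumping_power_w(flow_m3_h=2.0, dp_kpa=60.0)   # shell side, high flow
draw_w = pumping_power_w(flow_m3_h=0.5, dp_kpa=40.0)   # lumen side, low flow
print(f"feed pump ~ {feed_w:.1f} W, draw pump ~ {draw_w:.1f} W")
```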

At the end of the experiment, the highest feed solution concentration was 1.65 M sucrose. Comparing the performance of the different membranes, the AG membrane yielded better results when concentrating sucrose solution owing to its thicker support structure. Temperature also has a significant impact on water flux: higher temperatures usually yield higher water fluxes. Compared to reverse osmosis, FO achieved a better concentration factor of 5 while requiring much less energy. Fertilizer-drawn forward osmosis (FDFO) applies the forward osmotic dilution of fertilizer draw solutions. This technology can be used for direct agricultural irrigation, and most fertilizers can serve as draw solutions for FDFO. Fertilizer-drawn forward osmosis shares the same principle as forward osmosis: water from the feed solution flows through the semi-permeable membrane into the fertilizer draw solution under the natural osmotic pressure difference. Additional treatments might be required to reach the water quality needed for different purposes. Regarding the nitrogen removal purpose of this review, operating conditions such as feed solution concentration, feed solution flow rate, and specific water flux can affect the effectiveness of nitrogen removal. Fertilizer-drawn forward osmosis has common applications in water recycling and fertigation. Nanofiltration is a viable solution for diluting the fertilizer draw solution for recycling purposes. Fertilizer-drawn forward osmosis technology has used brackish water, brackish groundwater, treated coal mine water, and brine water as feed solutions. In other words, water with relatively low total dissolved solids can serve as the feed solution for fertilizer-drawn forward osmosis. Moreover, fertilizer-drawn forward osmosis is also effective for biogas energy production when applied to an anaerobic membrane bioreactor as a hybrid process. In conclusion, fertilizer-drawn forward osmosis is effective for sustainable agriculture and water reuse, and its considerable recovery rate makes it suitable for supplying the hydroponics component of an anaerobic membrane bioreactor. Due to the scarcity of fresh water in arid areas, hydroponics has been used for vegetable production. In hydroponics, a subset of hydroculture, crops are cultivated in a soilless environment, with their roots exposed to mineral nutrient solutions or fertilizers. Without soil culture, this type of agricultural production avoids certain problems associated with traditional crop production, including soil pollution, low fertilizer utilization efficiency, and the spread of pathogens. This technology also allows the production of crops in arid, infertile, or densely populated areas. However, economic cost aside, this technique requires large amounts of both fresh water and fertilizer compared with soil-based crop production. This can easily cause detrimental environmental effects such as water waste and contamination, with excess nitrogen, potassium, and phosphate resulting in eutrophication. To achieve the balance between cost, efficiency, and quality, reverse osmosis and ultrafiltration are more advanced and general approaches compared to biological seawater treatments. In terms of treating seawater, hydroponic nutrient solutions perform similarly to other aqueous solutions of low-molecular-weight salts.
By utilizing certain membrane technologies, the treated effluent has a reduced pathogen load and retains the ability to be integrated into the fertigation system for direct application. The potential of the fertilizer-drawn forward osmosis process was investigated for brine removal treatment and water reuse through energy-free osmotic dilution of the fertilizer for hydroponics. Nanofiltration is a pressure-driven membrane process that removes dissolved solutes. The membrane has pores ranging from 1 to 10 nanometers, hence the name "nanofiltration". Nanofiltration works on a similar principle to reverse osmosis: it is a water purification process that requires applied pressure, but its membranes are partially permeable to ions. Nanofiltration is practical for removing organic substances from coagulated surface water, and it is also economical and environmentally sustainable. In terms of the size and mass of solutes removed, nanofiltration membranes usually operate in the range between reverse osmosis and ultrafiltration, removing organic molecules with molecular weights from 200 to 400 Da. Nanofiltration membranes can also effectively remove other pollutants, including endotoxins/pyrogens, pesticides, antibiotics, and soluble salts.

Removal rates vary with the type of salt. For salts containing divalent anions, such as magnesium sulfate, the removal rate is around 90% to 98%. However, for salts containing monovalent anions, such as sodium chloride or calcium chloride, the removal rate is lower, between 20% and 80%. The pressure across the membrane is typically 50 to 225 psi. One of the advantages of nanofiltration is that it uses lower pressure while sustaining higher water flux; it also has highly selective rejection properties. Typical applications for nanofiltration membrane systems include the removal of color and total organic carbon from surface water, the reduction of total dissolved solids, and the removal of hardness or radium from well water. In 1952, Congress passed the Saline Water Conversion Act, which aimed to resolve the shortage of freshwater and the excessive use of underground water. Two years after the act, the first desalination plant in the United States was built in 1954 at Freeport, Texas. The plant is still operating today and is undergoing improvement; the U.S. Department of Agriculture predicts that it will supply 10 million gallons of fresh water per day by 2040. The Claude "Bud" Lewis Carlsbad Desalination Plant is the largest desalination plant in the U.S., delivering almost 50 million gallons of fresh water to San Diego County daily. Owing to local conditions, desalination is prevalent in regions such as the Middle East, which hosts the largest desalination plant worldwide in terms of freshwater production. With 17 reverse osmosis units and 8 multi-stage flash units, that plant can produce more than 1,400,000 cubic meters of fresh water per day. In 1960, there were only 5 desalination plants in the world. By the mid-1970s, as the conditions of many rivers deteriorated, around 70% of the world's population could not be guaranteed sanitary and safe freshwater. As a result, water desalination has become a strategic choice commonly adopted by many countries to resolve the shortage of fresh water, and its effectiveness and reliability have been widely recognized. The limitation and uneven distribution of freshwater resources have long been among the most serious problems faced by people living in arid areas. To reduce their severity, saline water and wastewater desalination have been constantly researched and applied solutions. In many arid regions, the desalination of seawater is regarded as a promising solution. Although seawater accounts for around 96.5% of global water resources, the global-scale application of seawater desalination is hindered by cost, both financial and energetic. With the development of energy-saving technologies for seawater desalination, it is becoming viable to use saline water, such as seawater and brackish water, to produce freshwater for industries and communities. Commonly used methods require water pumping and a considerable amount of energy. As a result, forward osmosis is receiving increasing interest in this field, since the FO process requires much less energy. A research team at Monash University in Australia has demonstrated a solar-assisted FO system for saline water desalination using a novel draw agent. The team, led by Huanting Wang and George P. Simon, investigated the potential of a thermoresponsive bilayer hydrogel-driven FO process utilizing solar energy to produce fresh water from saline water.
This forward osmosis process is equipped with a new draw agent: a thermoresponsive hydrogel bilayer. Compared to one of the most commonly used draw agents, this dual-layered hydrogel, made of sodium acrylate and N-isopropylacrylamide (NIPAM), induces osmotic pressure differences without the need for regeneration. The thermoresponsive hydrogel layers generate high swelling pressure when absorbing water from highly concentrated saline. During testing, researchers used a solution of 2,000 ppm sodium chloride, a standard NaCl concentration for brackish water. Water passes through the semipermeable membrane and is drawn from the saline solution to the absorptive layer. The hydrogel can absorb water up to 20 times its regular volume. Next, the thermoresponsive hydrogel composed only of NIPAM absorbs water from the first layer. When this dewatering layer is heated to 32 °C, the lower critical solution temperature, the gel collapses and squeezes out the absorbed fresh water. Draw agents like ammonium bicarbonate must instead be heated to 60 °C and then distilled at a lower temperature for regeneration. By focusing sunlight with a Fresnel lens, the concentrated solar energy can help the dewatering flux reach 25 LMH after 10 minutes, which is similar to the water flux achieved with ammonium bicarbonate.
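A back-of-envelope van't Hoff calculation, under assumptions not stated in the study itself, shows why the 2,000 ppm brackish feed is a natural first target: its osmotic pressure is only a couple of bar, a modest hurdle for the hydrogel's swelling pressure to overcome compared with seawater.

```python
# Osmotic pressure of a 2,000 ppm NaCl solution via van't Hoff
# (pi = i*C*R*T). An assumed check, not a figure reported by the
# researchers.
MW_NACL = 58.44   # g/mol
R = 0.083145      # L*bar/(mol*K)
T = 298.15        # K

molarity = 2.0 / MW_NACL               # 2,000 ppm ~ 2 g/L
pi_bar = 2 * molarity * R * T          # i = 2 ions per NaCl formula unit
print(f"{molarity:.4f} M NaCl -> osmotic pressure ~ {pi_bar:.1f} bar")
# ~0.034 M -> ~1.7 bar, versus roughly 30 bar for seawater.
```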

Network analysis methods are used to analyze the resulting relational structure of the mental model

Furthermore, 15N-Glu-feeding experiments indicated that tea plants can absorb exogenously applied amino acids that can then be used for N assimilation. In addition, we demonstrated that CsLHT1 and CsLHT6 are involved in the uptake of amino acids from the soil in the tea plant. It has been suggested that tea plants grown in organic tea plantations are subjected to N-deficient conditions due to the absence of inorganic fertilizer. Compared with conventional tea, that produced under organic management systems contains higher levels of catechins, which are linked to the antioxidant effects of tea infusions. However, organic tea contains lower levels of amino acids, which are also important compounds in terms of tea quality. The decay of large amounts of pruned tea shoots may contribute significantly to soil amino-acid levels in organic tea plantations; the decomposition of such organic matter and nutrient recycling depends largely on soil fungi. Interestingly, the long-term application of high amounts of N fertilizer was found to reduce soil fungal diversity in tea plantations. This likely accounts for why we observed higher amino-acid contents in the organic tea plantation compared with the conventional tea plantation, and implies a more important role for soil amino acids in tea plants grown in organic tea plantations. It has been reported that, in addition to inorganic N, amino acids can support tree growth. As a perennial evergreen tree species, the tea plant can also use organic fertilizer. However, the role of soil amino acids in tea plant growth and metabolism has not yet been investigated. In this study, we observed that the tea plant could take up 15N-Glu, and Glu feeding increased the amino-acid contents in the roots. This revealed that tea plants can take up amino acids from the soil for use in the synthesis of other amino acids. In our study, nine amino acids were detected in the soil of an organic tea plantation, and the utilization of exogenous Glu was analyzed in detail. In future studies, it will be important to test the roles of various mixtures of amino acids for use as fertilizers for the growth and metabolism of the tea plant.

The molecular mechanism underlying the uptake of amino acids from the soil by trees has not been thoroughly studied. In this study, we identified seven CsLHTs that were grouped into two clusters, consistent with the LHTs in Arabidopsis. CsLHT1 and CsLHT6 in cluster I have amino-acid transport activity, which is also consistent with AtLHT1 and AtLHT6. Moreover, these two genes were highly expressed in the roots, and both encode plasma membrane-localized proteins. These findings support the hypothesis that CsLHT1 and CsLHT6 play important roles in amino-acid uptake from the soil. However, the members of cluster II, CsLHT2, CsLHT3, CsLHT4, CsLHT5, and CsLHT7, did not display amino-acid transport activity. Interestingly, apart from AtLHT1 and AtLHT6, no other AtLHTs have been shown to transport amino acids. It is possible that cluster II LHTs are involved in the transport of metabolites other than amino acids. For example, AtLHT2 was recently shown to transport 1-aminocyclopropane-1-carboxylic acid, a biosynthetic precursor of ethylene, in Arabidopsis. LHT1 has been thoroughly characterized as a high-affinity amino-acid transporter and has a major role in the uptake of amino acids from the soil in both Arabidopsis and rice. In contrast, there is only one report on the function of AtLHT6; it is highly expressed in the roots, and the atlht6 mutant presented reduced amino-acid uptake from media when supplied with a high amount of amino acids. Although the authors did not characterize the amino-acid transport kinetics of AtLHT6, their results are consistent with this protein being a low-affinity amino-acid transporter. In the present study, we characterized CsLHT1 as a high-affinity amino-acid transporter with a capacity to transport a broad spectrum of amino acids. By contrast, CsLHT6 exhibited a much lower affinity for 15N-Glu, and it also displayed higher substrate specificity. Considering that amino-acid concentrations in the soil of tea plantations are low, CsLHT1 may play a more important role than CsLHT6 in the uptake of amino acids from the soil into tea plants. However, soil amino-acid contents can be much higher locally, particularly in the vicinity of decomposing animal or vegetable matter. In such situations, CsLHT6 may play an important role in amino-acid uptake. In addition, CsLHT6 is also highly expressed in the major veins of mature leaves, suggesting a role for CsLHT6 in amino-acid transport within these tea leaves.
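The functional division of labor between the two transporters can be illustrated with textbook Michaelis–Menten kinetics. The Km and Vmax values below are hypothetical, chosen only to contrast a high-affinity (CsLHT1-like) and a low-affinity (CsLHT6-like) carrier; they are not measured parameters from this study.

```python
# Why a high-affinity transporter dominates at the low amino-acid
# concentrations typical of tea plantation soils, while a
# low-affinity carrier catches up near decomposing organic matter.
def uptake_rate(substrate_um, vmax, km_um):
    """Michaelis-Menten uptake rate v = Vmax*[S]/(Km + [S])."""
    return vmax * substrate_um / (km_um + substrate_um)

for s_um in (5.0, 50.0, 500.0):                     # soil Glu concentration, uM
    high = uptake_rate(s_um, vmax=1.0, km_um=10.0)  # high affinity (assumed Km)
    low = uptake_rate(s_um, vmax=1.0, km_um=300.0)  # low affinity (assumed Km)
    print(f"[S]={s_um:6.1f} uM  high-affinity v={high:.2f}  low-affinity v={low:.2f}")
# At 5 uM the high-affinity carrier runs ~20x faster; at 500 uM the
# two rates converge, matching the local-enrichment argument above.
```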

Given that protocols for the efficient production of transgenic tea cultivars are lacking, CsLHT1 and CsLHT6 expression cannot yet be modulated by either overexpression or CRISPR/Cas9 gene editing. However, in China, there is an abundance of tea plant germplasm resources. CsLHT1 and CsLHT6 are potential gene markers for selecting germplasms that can efficiently take up amino acids. Moreover, germplasms with high CsLHT1 or CsLHT6 expression can be used as rootstocks for grafting with elite cultivars to improve the ability of these cultivars to take up amino acids from the soil. Alternatively, these germplasms can be utilized through gene introgression. Such grafted lines or novel cultivars that can efficiently take up amino acids should be better suited for use in organic tea plantations than in conventional tea plantations. One of the core goals of sustainability science is understanding how practitioners make decisions about managing social-ecological systems. In the context of sustainable agriculture, an important research objective is quantifying the economic, environmental, and social outcomes of different farm management practices. However, it is equally important to understand how farmers conceptualize the idea of sustainability and translate it into farm management decisions. The innumerable and often vague definitions of sustainable agriculture make this a challenging task and fuel the debate about linking sustainability knowledge to action. This debate will remain largely academic without empirical analysis of how farmers think about sustainability in real-world management contexts. These questions are relevant not only to agriculture but to all social-ecological systems and the knowledge networks that are in place to support decision making. This paper addresses these issues by analyzing farmer "mental models" of sustainable agriculture. Mental models are empirical representations of an individual's or group's internally held understanding of their external world. Mental models reflect the cognitive process by which farmer views about sustainable agriculture are translated into farm management decisions and practice adoption. Our mental models were constructed from content coding of farmers' written definitions of sustainable agriculture and were analyzed using network methods to understand the relational nature of the different concepts making up a mental model.

We test three hypotheses about mental models of sustainable agriculture. First, mental models are hierarchically structured networks, with abstract goals of sustainability more central in the mental model and linked to peripheral, concrete strategies from which practitioners select to attain the goals. Second, goals are more likely to be universal across geographies, whereas strategies tend to be adapted to the specific context of different social-ecological systems. Third, practitioners who subscribe to central concepts in the mental model will more frequently exhibit sustainability-related behaviors, including participation in extension activities and adoption of sustainable practices. Our mental model data were drawn from farmers in three major American viticultural areas in California: Central Coast, Lodi, and Napa Valley. California viticulture is well suited for studying sustainability. Local extension programs have used the concept of sustainability since the 1990s, and farmer participation in sustainability programs is strong. Furthermore, viticulture is geographically entrenched, with viticultural areas established on the basis of their distinct biophysical and social characteristics. Hence, we expect wine grape growers to have well-developed mental models of sustainability, with geographic variation reflecting social-ecological context. Group mental models, which are the focus of this paper, represent the collective knowledge and understanding of a particular domain held by a specific population of individuals. Mental models are an empirical snapshot of the cognitive process that underpins human decision making and behavior. Mental models complement more traditional approaches to understanding environmental behavior by highlighting the interdependent relationships among attitudes, norms, values, and beliefs. For example, the Values-Beliefs-Norms model of environmental behavior hypothesizes a causal chain running from broad ecological values, to beliefs about environmental issues, to more specific behavioral norms. The network approach used here shows how these types of more general and specific concepts are linked together in a hierarchical and associative structure. Mental models have evolved into an important area of research in environmental policy, risk perception, and decision making. A growing number of researchers are using mental models to better understand decision making in the context of social-ecological systems. Two approaches that are especially relevant to this paper are Actors, Resources, Dynamics, and Interactions (ARDI) and Consensus Analysis (CA). The ARDI approach uses participatory research methods to construct a group mental model of the interactions among stakeholders, resources, and ecological processes. The final product is a graphic conceptualization of how the group perceives the social-ecological system, its components, and their place in it, which can be used to inform management strategies. The CA approach relies on similar data-collection techniques to elicit a group mental model that captures stakeholders' beliefs and values pertaining to how the social-ecological system should be managed and for what purpose. The mental models are then analyzed using quantitative methods to assess agreement among individuals and identify points of consensus.
Along with addressing research questions about practitioner knowledge and decision making, both approaches have been used to facilitate multi-stakeholder management of social-ecological systems. This paper conceptualizes group mental models as "concept networks" composed of nodes representing unique concepts and ties representing associations among concepts. The concept network approach differs from ARDI and CA in that network analysis methods are used to analyze the structure of mental models and measure the importance of individual concepts based on their position in the concept network. This approach follows from Carley's work, which is founded in the theoretical argument that human cognition operates in an associative manner.

When a given concept is presented to the individual, memory is searched for that concept, ties between the concept and associated concepts are activated, and associated concepts are retrieved. The more associations a given concept has, the more likely the concept is to be recalled. Highly connected concepts serve as cognitive entry points for accessing a constellation of associated ideas. We elicited our mental models from the written text of farmers' definitions of sustainable agriculture, and follow Carley in arguing that written language can be taken as a symbolic expression of human knowledge. It is important to note that our mental models deviate from Carley's in that the associations among concepts are nondirectional and do not represent causality between concepts. Ties in our concept network represent concept co-occurrence, where two concepts occurred together in a single definition of sustainable agriculture (a minimal sketch of this coding appears below). See Methods for more details. Hypothesis 1 is that mental models are hierarchically structured, with abstract concepts constraining the cognitive associations among more concrete concepts. For example, practitioners who define sustainability primarily as environmental responsibility versus economic viability may evaluate the benefits and costs of management practices with different criteria. This perspective is related to models of political belief systems, where specific attitudes on public policy issues are predicted by general beliefs about policies and core values. Construal-level theory also suggests that hierarchical belief systems contain abstract, superordinate goals related to subordinate beliefs about the actions needed to achieve them. The hierarchical structure reflects a basic principle of cognitive efficiency in taxonomic categorization, where more abstract concepts provide cognitive shortcuts to retrieve specific linked attributes. The concepts making up mental models of sustainability can be divided into two basic types, each with a different level of abstraction: goals and strategies. Abstract goals are desirable properties, attributes, and characteristics of a sustainable system to be realized. Examples taken from this study include environmental responsibility, economic viability of the farm enterprise, continuation into the future, and soil health and fertility. Strategies are more concrete and include practices or approaches that are thought to contribute to the realization of abstract goals.
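The sketch referenced above illustrates the co-occurrence coding in miniature: invented definitions are turned into a weighted concept network, and degree centrality flags the concepts that act as cognitive entry points. The definitions and concept labels are fabricated for illustration and are not data from the study.

```python
import itertools
import networkx as nx

# Each coded definition is a set of concepts; a tie is created
# whenever two concepts co-occur in one definition, weighted by
# how many definitions link them.
coded_definitions = [
    {"environmental responsibility", "soil health", "cover cropping"},
    {"economic viability", "environmental responsibility", "continuation into future"},
    {"soil health", "reduced inputs", "environmental responsibility"},
    {"economic viability", "reduced inputs"},
]

G = nx.Graph()
for definition in coded_definitions:
    for a, b in itertools.combinations(sorted(definition), 2):
        w = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

# Degree centrality as a simple indicator of a concept's position.
centrality = nx.degree_centrality(G)
for concept, c in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{concept:32s} centrality={c:.2f}")
# If Hypothesis 1 holds, abstract goals (e.g., environmental
# responsibility) should rank above concrete strategies.
```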

It was found that the rate of cortical death was faster in hexaploid wheat and positively associated with root age

The present study was conducted to address the dosage effect of the 1RS translocation in bread wheat. We used wheat genotypes that differ in their number of 1RS translocations in a spring bread wheat 'Pavon 76' genetic background. For generating F1 seeds, Pavon 1RS.1AL was the preferred choice due to its better performance for root biomass than other 1RS lines. Here, we report the dosage effect of the 1RS chromosome arm on the morphology and anatomy of wheat roots. The results from this study validate previous reports of the presence of genes for rooting ability on the 1RS chromosome arm. This study also provides evidence for the presence of genes affecting root anatomy on 1RS. From previous chapters of this dissertation and earlier studies, it was clear that there was a gene present on the 1RS chromosome arm that affects root traits in bread wheat. But there was no report on the chromosomal localization of any root anatomical trait in bread wheat. The purpose of this study was to look for variation in root morphology and anatomy among different wheat genotypes and then determine how these differences relate to different dosages of 1RS in bread wheat. During this study, we came to some very interesting conclusions: 1) F1 hybrids showed a heterotic effect for root biomass, and there was an additive effect of 1RS arm number on the root morphology of bread wheat; 2) there was a specific developmental pattern in the root vasculature from top to tip in wheat roots, and 1RS dosage tended to affect root anatomy differently in different regions of the seminal root. Further, the differences in root morphology, and especially anatomy, of the different genotypes have specific bearing on their ability to tolerate water and heat stress. The effect of the number of 1RS translocation arms in bread wheat was clearly evident from the averaged mean values for root biomass: RA1 and RAD4 ranked highest while R0 ranked at the bottom.

These results supported previous studies on the performance of wheat genotypes with the 1RS translocation, in which 1RS wheats performed better in grain yield but similarly for shoot biomass. Genotype RD2 performed only slightly better than R0 for root biomass because of its poor performance in one season; it showed better rooting ability in the other three seasons. Here, all the genotypes with 1RS translocations showed higher root biomass than R0, which carries a normal 1BS chromosome arm. Data in this study suggested two types of effects of 1RS on wheat roots. First, an additive effect: root biomass increased as 1RS dosage increased from zero to two and then to four. Second, a heterotic effect of 1RS on root biomass and shoot biomass. The mid-parent heterosis (MPH) and high-parent heterosis (HPH) of the F1 hybrid were higher for root biomass than for shoot biomass, which further reflects the more pronounced effect of 1RS on root biomass than on shoot biomass (a worked example of these heterosis measures appears below). Significant positive heterosis has been observed for root traits among wheat F1 hybrids, and twenty-seven percent of genes were differentially expressed between hybrids and their parents. Differential gene expression has been suggested to play a role in the root heterosis of wheat and other cereal crops. In a recent molecular study of heterosis, it was speculated that upregulation of TaARF, an open reading frame encoding a putative wheat ARF protein, might contribute to the heterosis observed in wheat root and leaf growth. There is a large void in root research involving the study of root anatomy in wheat as well as other cereal crops. Most of the anatomical literature is limited to root anatomy either near the base of the root or near the root tip in young seedlings. There is still a general lack of knowledge about the overall structure and pattern of the whole root vasculature during later stages of growth in cereals, especially wheat. In the present study, root anatomical traits were studied in the primary seminal root of different wheat genotypes containing different dosages of 1RS translocation arms at the mid-tillering stage.
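For readers unfamiliar with the heterosis measures referenced above, the short sketch below computes mid-parent and high-parent heterosis as percentages. The root-biomass values are placeholders, not data from this study.

```python
# Mid-parent heterosis (MPH) and high-parent heterosis (HPH):
# the F1's percentage advantage over the parental mean and over
# the better parent, respectively.
def heterosis(f1, parent1, parent2):
    mid_parent = (parent1 + parent2) / 2
    high_parent = max(parent1, parent2)
    mph = 100 * (f1 - mid_parent) / mid_parent
    hph = 100 * (f1 - high_parent) / high_parent
    return mph, hph

# Hypothetical dry root biomass values (g) for illustration only.
mph, hph = heterosis(f1=1.9, parent1=1.2, parent2=1.5)
print(f"MPH = {mph:.1f}%, HPH = {hph:.1f}%")   # MPH ~ 40.7%, HPH ~ 26.7%
```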

Root sections were made from three regions along the length of the root, viz. the top of the root, the middle of the root, and the root tip, to obtain an overview of the complete structure and pattern of root histology relative to differences in 1RS dosage. Comparison of the different regions of the root of a genotype showed a transition in metaxylem vessel number and CMX area from higher values in the top region of the root to a single central metaxylem vessel in the root tip. The diameter of the stele also became narrower towards the root tip as the roots grew into deeper layers of soil. In the root tip, only the central metaxylem vessel diameter and area were traceable, as other cell types were still differentiating. This developmental pattern was consistent across the different wheat genotypes used in this study. Interestingly, there was variation among genotypes in the timing of the transitions in root histology, and this variation was explained by the dosage of the 1RS arm in bread wheat. RD2 and RAD4 transitioned earlier from having multiple metaxylem vessels and a larger stele to a single, central metaxylem vessel and a smaller stele than did R0 and RA1. In the top region, all the root traits were significantly different among genotypes except average CMX vessel diameter and CMX vessel number. Here, the average CMX diameter was calculated from the diameters of all the CMX vessels of a given genotype; because the number of CMX vessels differed among genotypes, so did the total CMX vessel area. Interestingly, all the root traits in the top region showed a negative slope in regression analysis, and most of the slopes were significant, especially for stele diameter, total CMX vessel area, and peripheral xylem pole number. Variation in all the traits was explained by 1RS dosage, with root traits becoming smaller at higher 1RS dosage. Significant positive correlations among almost all the root traits from the top region and mid-region of the roots suggested their interdependence in growth and development. Root diameter could not be measured for all the replicates of each genotype because of degeneration and mechanical damage to the cortex and epidermis.

Earlier, the rate of cortical death in seminal roots was investigated in different cereals. In the root tip, only two traits, CMX vessel area and CMX vessel diameter, were traceable because of the state of root tip development. A negative slope and a significant R² value in regression analysis demonstrated the effect of 1RS dosage on CMX vessel area and CMX vessel diameter, suggesting narrower metaxylem vessels with increasing 1RS dosage. In roots, the central metaxylem vessel is the first vascular element to be determined and to differentiate. Here, serial cross sections of the root tips also confirmed it as the first differentiated vascular element in wheat. The other vascular components differentiate thereafter in relation to the first-formed metaxylem vessel. Feldman first reported that not all the metaxylem vessels are initiated at the same level. Root morphology and root architecture are responsible for water and nutrient uptake, while within root anatomy, the xylem vessels are essential for transporting water and nutrients to the shoots to allow continued photosynthesis. Variations in xylem anatomy and hydraulic properties occur at interspecific, intraspecific, and intraplant levels. Variations in xylem vessel diameter can drastically affect axial flow because of the fourth-power relationship between radius and flow rate through a capillary tube, as described by the Hagen–Poiseuille law (see the short calculation below). Thus, even a small increase in mean vessel diameter has exponential effects on specific hydraulic conductivity for the same pressure difference across a segment. Xylem diameters tend to be narrower in drought-tolerant genotypes and at higher temperatures. Smaller xylem diameters pose higher flow resistance and slower water flow, which helps the wheat plant survive water-stressed conditions. Richards and Passioura increased the grain yield of two Australian wheat cultivars by selecting for narrow xylem vessels in seminal roots. The results of this study showed that the presence of 1RS in bread wheat increased root biomass and reduced the dimensions of some root parameters, especially the central metaxylem vessel area and diameter in the root tip as well as in the top of the root. Manske and Vlek also reported that wheat genotypes with the 1RS translocated chromosome arm had thinner roots and higher root-length density compared with normal wheat carrying the 1BS chromosome arm under field conditions. These results might suggest higher root numbers or more extensive root branching in 1RS translocation wheats. Among 1RS translocation wheats, a significant association was observed between root biomass and grain yield under well-watered and droughted environments. Narrow metaxylem vessels and higher root biomass provide 1RS translocation wheats with better adaptability to water stress and make them better performers for grain yield.
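The short calculation promised above illustrates the fourth-power sensitivity of the Hagen–Poiseuille law. The radii, pressure gradient, and viscosity are illustrative values, not measurements from these genotypes.

```python
import math

# Hagen-Poiseuille: Q = pi * r^4 * dP / (8 * mu * L), so flow through
# a capillary scales with the fourth power of its radius.
def poiseuille_flow(radius_m, dp_pa=1.0e5, mu_pa_s=1.0e-3, length_m=0.1):
    return math.pi * radius_m**4 * dp_pa / (8 * mu_pa_s * length_m)

wide = poiseuille_flow(30e-6)    # 30 um metaxylem radius (assumed)
narrow = poiseuille_flow(25e-6)  # ~17% narrower vessel
print(f"flow ratio narrow/wide = {narrow / wide:.2f}")
# (25/30)^4 ~ 0.48: a 17% reduction in radius roughly halves the
# axial flow, which is the hydraulic basis of the drought argument.
```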

Plant development is particularly sensitive to light, which is both the energy source for photosynthesis and a regulatory signal. Upon germination in the dark, a seedling undergoes a developmental program named skotomorphogenesis, characterized by an elongated hypocotyl, closed cotyledons, an apical hook, and a short root. Exposure to light promotes photomorphogenesis, characterized by a short hypocotyl, open cotyledons, chloroplast development, and pigment accumulation. In addition to light, photomorphogenesis is also regulated by several hormones, including brassinosteroid (BR), auxin, gibberellin (GA), and strigolactone (SL). The molecular mechanisms that integrate the light and hormonal signals are not fully understood. The light signal is perceived by photoreceptors, which regulate gene expression through several classes of transcription factors. Downstream of the photoreceptors, the E3 ubiquitin ligase COP1 acts as a central repressor of photomorphogenesis. COP1 targets several transcription factors for proteasome-mediated degradation in the dark. Light-activated photoreceptors directly inhibit COP1's activity, leading to the accumulation of COP1-interacting transcription factors, such as HY5, BZS1, and GATA2, which positively regulate photomorphogenesis. Recent studies have uncovered mechanisms of signal crosstalk that integrate light signaling pathways with the BR, GA, and auxin pathways. The transcription factors of these signaling pathways directly interact with each other in cooperative or antagonistic manners to regulate overlapping sets of target genes. BR has been shown to repress, through the transcription factor BZR1, the expression of positive regulators of photomorphogenesis, including the light-stabilized transcription factors GATA2 and BZS1. BZS1 is a member of the B-box zinc finger protein family, which has two B-box domains at its N terminus without any known DNA-binding domain. It is unclear how BZS1 regulates gene expression. Recent studies have shown that SL inhibits hypocotyl elongation and promotes HY5 accumulation in Arabidopsis plants grown under light, but the molecular mechanisms through which SL signaling integrates with light and other hormone pathways remain largely unknown. Immunoprecipitation of protein complexes followed by mass spectrometry analysis (IP-MS) is a powerful method for identifying interacting partners and post-translational modifications of a protein of interest. In particular, research in animal systems has shown that combining stable isotope labeling with IP-MS can quantitatively distinguish specific interacting proteins from non-specific background proteins (a schematic example follows this passage). Stable isotope labeling in Arabidopsis (SILIA) has been established as an effective method of quantitative mass spectrometry; however, the combination of SILIA with IP-MS had yet to be established. To further characterize the molecular function of BZS1, we performed SILIA-IP-MS analysis of the BZS1 protein complex and identified several BZS1-associated proteins, among them COP1, HY5, and BZS1's homologs STH2/BBX21 and STO/BBX24. We further showed that BZS1 directly interacts with HY5 and positively regulates HY5 RNA and protein levels. Genetic analysis indicated that HY5 is required for BZS1 to inhibit hypocotyl elongation and promote anthocyanin accumulation. In addition, BZS1 is positively regulated by SL at both the transcriptional and translational levels. Plants overexpressing a dominant-negative form of BZS1 show an elongated-hypocotyl phenotype and reduced sensitivity to SL, similar to the hy5 mutant. Our results demonstrate that BZS1 acts through HY5 to promote photomorphogenesis and is a crosstalk junction of light, BR, and SL signals. This study further advances our understanding of the complex network that integrates multiple hormonal and environmental signals.
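The schematic example below sketches the quantitative filtering step behind a SILIA-IP-MS experiment. The protein names follow the text, but the heavy/light ratios and the enrichment threshold are hypothetical, meant only to show the logic by which specific interactors are separated from background.

```python
import math

# In a labeled IP-MS design, proteins enriched in the bait pull-down
# show high heavy/light ratios, while non-specific binders sit near 1:1.
heavy_to_light = {                     # H/L ratios: invented for illustration
    "COP1": 6.2,
    "HY5": 5.1,
    "STH2/BBX21": 4.4,
    "STO/BBX24": 3.9,
    "RuBisCO large subunit": 1.1,      # abundant background protein
    "HSP70": 0.9,
}

THRESHOLD_LOG2 = 1.0   # call proteins specific above 2-fold enrichment (assumed)
specific = {p: r for p, r in heavy_to_light.items()
            if math.log2(r) >= THRESHOLD_LOG2}
print("candidate bait-associated proteins:", sorted(specific))
```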

The pH dependence of bulk nanobubble formation can also be analysed using this equation

However, as recently reported by Ushikubo, nanobubbles of inert gases do possess similar lifetimes and are formed from helium, neon, and argon. Since the only intermolecular forces of note they experience are van der Waals forces of attraction, Lifshitz forces, and dipole-dipole interactions, it can be assumed that these are also strong enough, and the gases sufficiently inert, for the same mechanism, together with the steric hindrance of the hydroxide ions, to apply in this case as well. Considering the formation of a 1 μm micro-bubble which eventually shrinks into a nanobubble, the number of ions available to it for stabilisation from the water it displaces upon formation, at pH 7, is approximately 33 ions (a short calculation reproducing this figure appears below). Even if all of these ions were adsorbed, this does not agree with the zeta potentials reported by Takahashi et al. for micro-bubbles of comparable size, which by equation correspond to approximately 495 ions. It follows that the adsorbed ions diffuse toward the nanobubble surface from the surrounding bulk fluid, which can explain the apparent generation of free radicals observed by Takahashi et al., since there is now a minuscule concentration difference present to drive the diffusion. The availability of hydroxide ions also depends on the pH, and at pH 7 it is thus possible for stable nanobubbles to form, as reported by Ushikubo, as well as providing a mathematical treatment for their stabilization and the calculation of their surface charge. At lower pH, in the absence of other ions, the concentration of stabilizing ions would be lower due to the lower availability of hydroxide ions and the increased time needed for them to diffuse to the surface of the nanobubble, allowing it more time to shrink. The dependence of the size of the bulk nanobubble on external pressure is given by equation. The external pressure comprises the atmospheric pressure plus the pressure exerted by the fluid. However, the major component of the force contributing to the shrinkage of the nanobubble is the surface tension, which also increases with the size of the nanobubble. Thus, for higher external pressures, and given that a limited amount of gas is dissolved in the fluid, the equation gives a trend of increasing nanobubble size with increasing external pressure.
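The ~33-ion figure quoted above can be reproduced in a few lines: count the hydroxide ions, at pH 7, contained in the volume of water displaced by a 1 μm diameter bubble.

```python
import math

# Hydroxide ions in the water displaced by a 1 um diameter bubble
# at pH 7, where [OH-] = 1e-7 mol/L.
AVOGADRO = 6.022e23
diameter_m = 1.0e-6
radius_m = diameter_m / 2

volume_m3 = (4 / 3) * math.pi * radius_m**3
volume_l = volume_m3 * 1000.0          # 1 m^3 = 1000 L

oh_molarity = 1.0e-7                   # pH 7
ions = oh_molarity * volume_l * AVOGADRO
print(f"~{ions:.0f} hydroxide ions available on formation")
# ~32, consistent with the ~33 quoted in the text, and far short of
# the ~495 implied by the reported zeta potentials.
```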

However, due to the limited amount of gas available, it is expected that the number of nanobubbles formed, i.e., their concentration, will decrease, while still giving a larger particle size. This is confirmed by Tuziuti and co-workers through their observations of air nanobubbles in water. The temperature appears only in the term that describes the internal pressure, causing a linear increase with temperature; this does not take into account the increase in molecular motion due to heat, as well as the increased energy of the surface ions. Thus, it also shows that the internal pressure will increase with an increase in temperature. This will, in turn, cause a reduction in the radius if all other terms are kept the same. Thus, we can say that given a limited amount of gas dissolved in the solvent, an increase in temperature will give smaller nanobubbles but will also cause an increase in the concentration of the nanobubbles in the solvent. It is also possible that zeta potentials may decrease, as thermally agitated hydroxide ions may be more susceptible to de-adsorption and may return to solution more easily. Conversely, at lower temperatures, larger bubbles may form, especially by the method of collapsing micro-bubbles, and larger numbers of hydroxide ions may be adsorbed on the surface of the nanobubble, giving longer lifetimes. Bulk nanobubbles are, in essence, minuscule voids of gas carried in a fluid medium, with the ability to carry objects of the appropriate nature, that is, positively charged objects, for a length of time that is significant if the nanobubble is left alone, yet is also controllable, since the bubbles can be made to collapse with ultrasonic vibration or magnetic fields. The applications, then, seem to be limited only by how we can manipulate and design systems that make use of these properties for new technology in several fields. As mentioned before, thus far technology has made use of the uncontrolled collapse and generation of bulk nanobubbles in the fields of hydroponics, pisciculture, shrimp breeding, and algal growth, while the property of emission of hydroxide ions during collapse has been applied to wastewater treatment.
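To see why surface tension dominates the pressure balance discussed above, a standard Young–Laplace estimate (an assumption of classical bubble theory, not an equation quoted from this text) is sketched below for bubbles of decreasing radius.

```python
import math

# Young-Laplace: internal pressure = external pressure + 2*gamma/r.
GAMMA = 0.072      # N/m, water-air surface tension near 25 C
P_ATM = 101325.0   # Pa, hydrostatic contribution neglected here

for radius_nm in (1000, 100, 50):
    r = radius_nm * 1e-9
    laplace = 2 * GAMMA / r
    total = P_ATM + laplace
    print(f"r={radius_nm:5d} nm: Laplace term {laplace/1e5:6.1f} bar, "
          f"total ~ {total/1e5:6.1f} bar")
# At r = 50 nm the Laplace term (~29 bar) dwarfs atmospheric
# pressure, which is why adsorbed ion layers are invoked to explain
# the stability of bulk nanobubbles.
```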

Here and there, there are indications of greater possibilities, as evidenced by research into their ability to remove microbial films from metals, to remove calcium carbonate and ferrous deposits from corroded metal, the use of hydrogen nanobubbles in gasoline to improve fuel efficiency, and their potential to serve as nucleation sites for crystals of dissolved salts. The following sections elaborate on further applications which are possible in the near future. Proton exchange membrane fuel cells are finding wide application in several fields due to the ease of their deployment, their low start-up times, and the convenience of their size and operating temperatures. However, significant limitations exist for their wider application, which can broadly be classed under the headings of catalysis, ohmic losses, activation losses, and mass transfer losses. The first of these is due to the rate of catalysis of the splitting of hydrogen, which cannot be pushed beyond a certain limit due to the constraints of temperature. But the larger issue is the cost of the catalyst itself, which is a combination of platinum nanoparticles and graphite powder, the latter providing the electrical conductivity. The inclusion of platinum presents a significant cost disadvantage, and while efforts are ongoing to reduce or replace platinum as a catalyst, these are still experimental and much research is ongoing in this field. The second limitation is due to ohmic losses, which accumulate in the proton exchange membrane, also termed the electrolyte, and can only be reduced by reducing the thickness of the membrane. Currently popular membranes are usually made of Nafion, a sulphonate-grafted derivative of polytetrafluoroethylene marketed by DuPont, but experimental membranes include the use of graphene, aromatic polymers, and other similar materials which possess a high selective conductivity toward protons [ref]. However, below a certain thickness the membranes are unable to mechanically support themselves, and mechanical failure of the membrane will often cause a break in operations. The third limitation is due to the start-up conditions of the fuel cell, and is a matter of the mechanics of operation of the fuel cell itself. The last limitation is due to the transport of hydrogen and oxygen to the triple phase boundaries around the catalyst and the transport of water away from them, and is a significant concern for the operation and efficiency of PEMFCs.

However, current PEMFCs depend on gaseous hydrogen and oxygen, released from a compressed source and derived from air, respectively. This necessitates a mechanically strong membrane and construction to resist the operating pressures. The inclusion of the gas as nanobubbles dissolved in water, used in combination with microfluidic technology, presents new possibilities. It becomes possible to replace both membranes and catalysts with materials hitherto discarded for being too mechanically weak, such as graphene: graphene could serve as a combined catalyst and proton exchange membrane, with nanobubbles of hydrogen and air, dispersed in water, acting as the reservoirs for the fuel and oxidant. Such a system would operate on the basis that nanobubbles are negatively charged and would hence be attracted to the graphene, through which current would be passed in order to activate the process. Air and hydrogen nanobubbles would be separated by the graphene membrane and adsorbed to opposite sides of it. The graphene membrane would also have a potential difference applied across it in the plane of the graphene layer. This would, in turn, permit the hydrogen to be catalyzed to protons [ref] and hence conducted across the graphene [ref], allowing it to react with the oxygen to form more water, which would be carried away with the flow. Microfluidic bipolar plates would enable the construction of such a device, and such fuel cells could become a future source of energy for several applications. The advantages of such a system would be numerous. Firstly, graphene is far cheaper than platinum and can serve as a catalyst of almost comparable quality, in addition to being the conductor for the removal of electrons released during catalysis. Secondly, the thickness of a graphene sheet is in the range of nanometers, which means that ohmic losses would, quite possibly, be nearly eliminated. Additionally, due to the flow of water as a solvent, the losses due to the mass transport of water away from the triple phase boundary, and those due to the transport of hydrogen and oxygen to it, would also be significantly reduced. The last, but not least, advantage would be the reduction in the size of a single fuel cell. The voltage generated by a fuel cell is independent of its size, so a much larger number of cells could fit in the footprint of currently deployed fuel cells, providing a much larger total voltage.

Polymeric foams have been a staple of several products since their inception, and pore size is one of the key properties determining a foam's performance. In general, the larger the pore size and the higher the pore count, the lighter the foam. However, both can come at the cost of reduced pore wall thickness, which makes the whole foam less able to deform elastically and more susceptible to tearing and heat damage, while substantially reducing fatigue resistance and creep resistance. Standard practice is therefore to strike a balance between pore size and pore count, measured in pores per unit volume, so as to achieve the desired properties. However, the voids rarely go below one micron in size, and this in turn places a limit on the number of pores per unit volume, thus limiting the number of pores it is possible to introduce, as well as the amount of gas that can be introduced into the foam system.
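The pore size versus pore count trade-off can be quantified with a simple geometric estimate: for monodisperse spherical pores, the gas volume fraction equals the pore number density times the single-pore volume, so shrinking the pore radius tenfold at a fixed void fraction requires a thousandfold higher pore count. A minimal sketch, assuming idealized non-overlapping spherical pores:

    import math

    def pores_per_cm3(void_fraction, pore_radius_m):
        """Number density of spherical pores needed for a target gas volume fraction."""
        pore_volume_m3 = (4.0 / 3.0) * math.pi * pore_radius_m**3
        return void_fraction / pore_volume_m3 * 1e-6  # per m^3 -> per cm^3

    # Same 50% void fraction with micron-scale vs nanobubble-scale pores
    for r in (1e-6, 1e-7):
        print(f"r = {r * 1e9:6.0f} nm -> {pores_per_cm3(0.5, r):.2e} pores/cm^3")

This is why a nanobubble-templated foam needs a far higher bubble concentration than a microporous foam to reach comparable gas content, as discussed next.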
While there are several methods of foam manufacturing, including in-situ foam molding and pre-mixed foam molding, none of these reliably and controllably offer pore sizes below a few microns. Furthermore, many of the polymers used in these foams can be dispersed or dissolved in water, including polyamides, polystyrene, polyesters, and polyurethanes. This offers a unique opportunity to introduce nanobubbles into the system: first dispersing the gas into solution by means of a micro-bubble generator, then dispersing the polymer, either in dilute solution form or as a monomer, and finally either coagulating the dispersion, polymerizing it, or cross-linking the chains in solution to create a foam with pore sizes in the nanometer range. At standard pore counts this would yield very thick pore walls, so the concentration of nanobubbles introduced must be increased substantially to return the wall thickness to the levels of a microporous foam. The pores can then be opened, if so desired, by a microneedle array or by other methods such as guided bursts of ultrasound, creating structures such as channels only nanometers in width through the foam, and presenting new possibilities for water filtration and purification, water quality testing, and similar applications. There are already several applications for such open-celled foams, such as the production of nanopure water, which is currently expensive due to the filtration equipment required. Thus, open-celled polymeric foams have direct application in these areas, whereas closed-celled foams are potentially lighter, stronger, and tougher than foams with larger pores and lower pore counts. It is therefore reasonable to suggest that nanobubble technology will find widespread use in this particular application, especially when the cost factor is taken into account.

CB has been deemed persistent in the environment but with a low potential for bioaccumulation and toxicity

Root exudation may also be altered after nanomaterial exposure. In addition, adsorption of nanomaterials to bacterial cell surfaces has been reported to disperse nanomaterial agglomerates. Such processes and other soil characteristics could cause temporal variations in CNM behavior within the natural soil environment, including differentially over the course of plant growth. The results of the CNM concentration-dependent agglomeration in aqueous soil extracts qualitatively explained the observed inverse dose–response trends, which deviate from the typical sigmoidal dose–response relationships reported for toxicants that dissolve in soil water, but quantitative tests are not possible because of the complex soil characteristics and dynamic processes described above. In this study, with nondissolving but agglomerating CNMs, small amounts of CNMs in moist soil did not agglomerate but rather remained suspended in soil water, where they were more bioavailable and impactful to soil microbes and plant roots. With larger amounts of CNMs in moist soil, large agglomerates formed, which led to a sharp decrease in their bioavailability and observed impacts. Although the inverse dose–response patterns were mostly shared across CNMs, the relationships were linear for CB and fit a power function for MWCNTs. Differences in agglomeration, and possibly differing toxicity mechanisms, could explain the differing model fits. Our results demonstrate that not only the mass concentration and primary particle size but also the level of agglomeration may play critical roles in determining CNM effects on plants and their root symbioses in soils. In prior microbial toxicity and hydroponic phytotoxicity studies, it was recognized that nanomaterial effects would increase as nanomaterial size decreases but would decrease as nanomaterials agglomerate. For instance, antimicrobial activity was found to be higher for smaller versus larger graphene oxide sheets, while debundled, short, and dispersed MWCNTs were demonstrated to have relatively higher bacterial cytotoxicity due to enhanced MWCNT–cell contact. Depicted as “nano darts”, individually dispersed single-walled carbon nanotubes were reported to induce more bacterial death than SWCNT aggregates, as dispersed SWCNTs directly damaged bacterial cell membranes.
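For illustration, the two dose–response model forms mentioned above (linear for CB, power function for MWCNTs) can be fit generically as in the sketch below; the dose and effect values are hypothetical placeholders, not data from this study:

    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical inverse dose-response data: effect magnitude *decreases*
    # as concentration rises and agglomeration reduces bioavailability.
    dose = np.array([0.1, 100.0, 1000.0])   # mg/kg (placeholder doses)
    effect = np.array([0.8, 0.4, 0.1])      # relative inhibition (placeholders)

    def linear(x, a, b):
        return a * x + b

    def power(x, a, b):
        return a * np.power(x, b)

    popt_lin, _ = curve_fit(linear, dose, effect)
    popt_pow, _ = curve_fit(power, dose, effect, p0=(1.0, -0.3))
    print("linear a, b:", popt_lin)
    print("power  a, b:", popt_pow)

A negative fitted slope (or exponent) captures the inverse trend; comparing residuals between the two forms is what distinguishes the CB-like from the MWCNT-like behavior.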

In hydroponic studies, dispersed MWCNTs were found to have stronger effects on tomato plants than MWCNT agglomerates. Even when comparing among agglomerates, small MWCNT agglomerates exerted stronger impacts on Arabidopsis T87 cells than large agglomerates. Still, the dose–response relationship across the herein unstudied low-concentration range of 0 to 0.1 mg kg−1 is uncertain. It is possible that the whole-plant N2 fixation potential decreased continuously with CB concentration up to 0.1 mg kg−1. Alternatively, there could be a threshold concentration somewhere between 0 and 0.1 mg kg−1, possibly close to the lowest studied dose, above which the inhibition of the whole-plant N2 fixation potential occurred but below which it did not. There is uncertainty in such untested low-concentration regimes. Such uncertainty reinforces the challenges in extrapolating toxicological results from studies using only high nanomaterial concentrations to low-concentration exposure scenarios, owing to the influential effects of nanomaterial physicochemical structuring. We chose multi-walled carbon nanotubes and graphene nanoplatelets as two representative engineered CNMs, with industrial carbon black for comparison. CB has been commercialized for decades in the rubber and pigment manufacturing industries, with annual production of over 10 million metric tons. However, there is evidence that CB may have similar or higher toxic effects on soil bacterial communities and amphipods compared with other CNMs. Therefore, assessing whether CB affects soybean and its N2-fixing symbioses, and comparing how the effects differ from those of MWCNTs and GNPs, is important from an environmental regulatory standpoint. MWCNTs and GNPs were purchased from Cheap Tubes Inc.; carbon black was purchased from Dorsett & Jackson Inc. Besides the manufacturer-reported properties, CNMs were characterized by transmission electron microscopy, thermogravimetric analysis, and inductively coupled plasma optical emission spectroscopy for material morphology, thermal stability, overall purity, and metal composition, following previously reported methods. The CNMs were used as received without further purification.

Three concentrations each of MWCNTs, GNPs, and CB were evaluated in this study. A sequential 10-fold dilution method accompanied by mechanical mixing was used to prepare homogenized soil and CNM mixtures, as reported previously. The mixing was performed using a hand-held kitchen mixer, from the low to the high CNM concentration treatments, with the mixer cleaned between different CNMs to avoid contamination. The cleaning procedure followed guidelines recommended by the National Institute for Occupational Safety and Health for cleaning surfaces contaminated with carbon nanotubes. CNM dry powder was weighed and amended directly into soil at concentrations of 0.01, 10, and 100 g kg−1. Each mixture was blended thoroughly using the mixer for at least 10 min. These CNM–soil stocks were then diluted ten times by the addition of unamended soil, with similar mixing as above, resulting in concentrations of 0.001, 1, and 10 g kg−1. The dilution and mixing were repeated once more to achieve the final CNM working concentrations of 0.1, 100, and 1000 mg kg−1. The CNM–soil mixtures were stored prior to planting.

Bradyrhizobium japonicum USDA 110 was initially streaked from a frozen glycerol stock onto solid modified arabinose gluconate (MAG) medium with 1.8% agar in a Petri dish, then cultivated in the dark. Following incubation, several discrete colonies were dispersed into 4 mL of liquid MAG medium. An aliquot was inoculated into a 500 mL glass flask containing 100 mL of liquid MAG medium and incubated in the dark for 5 d until stationary growth phase. Aliquots of the culture were dispensed into centrifuge tubes and centrifuged, and the supernatant was discarded. Cell pellets were resuspended in a 1 M MgSO4 solution to an optical density at 600 nm of 1.0 to serve as the inoculum during seed planting.

Soybean seeds were purchased from Park Seed Co. Seeds were inoculated with B. japonicum following the method of Priester et al. Specifically, seeds were soaked in the B. japonicum inoculum for 10 min and deposited into rehydrated peat-filled seed starter pellets at 1/4-in. depth using forceps. An aliquot of the B. japonicum inoculum was dispensed into the pellet holes over the planted seed; the seed plus additional inoculum were then covered with a thin layer of the peat pellet substrate. The pellets were watered daily and incubated on a heating mat.
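Stepping back to the soil preparation, the sequential dilution arithmetic works out as follows; this sketch simply reproduces the sequence of concentrations described above:

    # Sequential 10-fold dilutions of the CNM-soil stocks described above.
    stocks_g_per_kg = [0.01, 10.0, 100.0]  # directly amended stock concentrations

    for stock in stocks_g_per_kg:
        first = stock / 10.0       # after the first dilution, g/kg
        working = first / 10.0     # after the second dilution, g/kg
        print(f"{stock:6.2f} g/kg -> {first:6.3f} g/kg -> {working * 1000:6.1f} mg/kg")

Running this confirms the final working concentrations of 0.1, 100, and 1000 mg kg−1.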

Each planting pot comprised a 3 qt high density polyethylene container with bottom perforations, lined with polyethylene WeedBlock fabric at the bottom and overlain by 400 g of washed gravel to allow water drainage. A polyethylene bag punched with 40 evenly spaced 5 mm holes was placed over the gravel, and 2.3 kg of soil was weighed into each bag. Perforation of the bags allowed for water drainage, thereby preventing root rot within the soil-filled bags. Overall, there were 10 treatments, comprising three concentrations each of CB, MWCNTs, and GNPs, plus a control soil without nanomaterial amendment. There were eight replicate pots per treatment. Ten days after seed sowing, 80 VC-stage seedlings were transplanted into the potted soils. Prior to transplanting, the outside mesh of the starter pellets was removed carefully to minimally disturb the seedling roots. A central planting hole was formed in the soil, into which B. japonicum inoculum was dispensed. One seedling was inserted into the hole, and another aliquot of B. japonicum inoculum was dispensed onto the surface. Both inoculation steps were deemed necessary for adequate contact between B. japonicum and the soybean roots and thus effective inoculation. The filled transplanting hole was covered by a thin layer of soil, and the potted soil surface was covered by a layer of WeedBlock fabric to minimize soil surface crusting and weed growth. A wooden support stake was inserted against the inside wall of each pot for later plant support by tying, as needed. After transplanting, the plants were grown for another 39 d to the R6 stage in the Schuyler Greenhouse at the University of California at Santa Barbara. The greenhouse climate was controlled using VersiSTEP automation under full sunlight. The indoor air temperature ranged from 15 to 34 °C, and the indoor photosynthetically active radiation fluctuated between 21 and 930 μmol m−2 s−1 from nighttime to daytime. Soil moisture sensors were inserted to a depth of 13 cm into the soil of seven pots to monitor soil volumetric water content, electrical conductivity, and temperature. Data were recorded at least twice daily using a ProCheck data display. Pots were watered to retain an average soil volumetric water content of 0.25 m3 m−3.

Midori Giant is a determinate soybean variety, which stops vegetative growth soon after flowering initiates. Also, N2 fixation accelerates when plants initiate pod development. Therefore, plants were harvested at each of two stages, intermediate or final, aimed at capturing CNM effects on plant vegetative growth with early nodule formation, and then on reproductive development at the highest N2 fixation potential. Three replicate plants from each of the ten treatments were sacrificed at the intermediate harvest, and five replicates were sacrificed at the final harvest, when plants reached stage R6. At harvest, plants were separated, above ground from below ground, by cutting the stem at the soil surface using a single edge razor blade. The above ground part was further divided into stem, leaves, and pods. Leaves and pods were counted and arranged according to their sizes, then photographed. Total leaf area and pod size were further quantified by analyzing the images using Adobe Photoshop software. Sub-samples of fresh leaves and pods were weighed and then stored for future analyses.
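As context for the image-based measurements just described (the study used Adobe Photoshop), a scripted equivalent might look like the following sketch; the green-dominance threshold, the pixel scale, and the synthetic test image are all hypothetical stand-ins:

    import numpy as np

    def leaf_area_cm2(rgb, px_per_cm=40.0):
        """Estimate projected leaf area from a photo taken with a scale reference.

        A crude green-dominance threshold stands in for the manual selection
        step; rgb is an (H, W, 3) uint8 array.
        """
        r, g, b = (rgb[..., 0].astype(int), rgb[..., 1].astype(int),
                   rgb[..., 2].astype(int))
        leaf_mask = (g > r) & (g > b) & (g > 60)  # pixels dominated by green
        return leaf_mask.sum() / px_per_cm**2

    # Tiny synthetic example: a 100 x 100 px image with a 50 x 50 px "leaf"
    img = np.zeros((100, 100, 3), dtype=np.uint8)
    img[25:75, 25:75] = (30, 160, 40)             # green patch
    print(f"~{leaf_area_cm2(img):.2f} cm^2")      # 2500 px / 1600 px/cm^2 = 1.56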

The remaining tissues were transferred to separate paper bags, then weighed before and after drying to determine wet and dry biomass plus gravimetric moisture content. The below ground plant parts were removed from the pot within the surrounding polyethylene bag. The soil in the bag was gently loosened from around the roots and nodules using a metal Scoopula, while minimizing root system disturbance. The relatively intact below ground parts, including roots and nodules, were rinsed thoroughly in deionized water to remove remaining attached soil, then air-dried. The nodules were carefully excised from the roots using a single edge razor blade and forceps, as reported previously. Nodules were counted; sub-samples were weighed and refrigerated for later TEM analysis. The remaining nodules were weighed and then analyzed immediately for N2 fixation potential. Roots were dried and massed as above to determine gravimetric moisture content and dry biomass. After the N2 fixation potential measurements, nodules were also similarly dried and massed. After acquiring dry masses, all dried plant parts were archived for future analyses. Sub-samples of soil from each pot were collected and stored for future analyses.

The N2 fixation potentials of root nodules were measured as nitrogenase activity by the acetylene reduction assay, according to standard methods with some modifications. Pure acetylene gas was generated by the reaction of calcium carbide and deionized water in a 1 L Erlenmeyer flask, with the C2H2 collected into a 1 L Tedlar bag. Intact nodules freshly excised from cleaned plant roots were placed into a 60 mL syringe with a LuerLok Tip and incubated with 10% C2H2. At 0, 15, 30, 45, and 60 min, 10 mL of the gas sample in the syringe was injected into an SRI 8610C gas chromatograph with a sample loop to measure the C2H2 reduction to ethylene over time. The GC was equipped with a flame ionization detector (FID) and a 3 ft × 1/8 in. silica gel packed column. Helium was used as the carrier gas at a pressure of 15 psi. Hydrogen gas and air were supplied for FID combustion at 25 and 250 mL min−1, respectively. The oven temperature was held constant. The C2H4 peak area and retention time were recorded using PeakSimple Chromatography Software. Chemically pure C2H4 gas was diluted with air and measured to establish a C2H4 standard curve. The C2H4 peak area values were converted to C2H4 concentrations against the standard curve, and further to moles of C2H4 using the ideal gas law, assuming ambient temperature and pressure. For each analysis, the moles of C2H4 produced were plotted over time, and the relationship was evaluated for linearity, then fitted by a linear regression model to calculate the C2H4 production rate. The N2 fixation potential was calculated as the C2H4 production rate normalized to the assayed dry nodule biomass.
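Putting the final calculation steps together, a minimal sketch of the rate computation might look as follows. All readings, the ambient conditions, and the nodule mass are hypothetical placeholders, and the sketch assumes the standard-curve step has already produced concentrations in ppm:

    import numpy as np

    # Placeholder assay readings: C2H4 concentration (ppm by volume) over time,
    # already converted from GC peak areas via the standard curve.
    time_min = np.array([0, 15, 30, 45, 60])
    c2h4_ppm = np.array([0.0, 2.1, 4.0, 6.2, 8.1])  # hypothetical values

    # Ideal gas law at assumed ambient conditions: n = P*V / (R*T)
    syringe_volume_L = 0.060   # 60 mL incubation volume
    P_atm = 101325.0           # Pa
    T_K = 298.15               # assumed ambient temperature, K
    R = 8.314                  # J/(mol*K)
    total_gas_mol = P_atm * syringe_volume_L * 1e-3 / (R * T_K)

    c2h4_mol = c2h4_ppm * 1e-6 * total_gas_mol

    # Linear regression of moles vs time gives the C2H4 production rate.
    slope_mol_per_min, _ = np.polyfit(time_min, c2h4_mol, 1)

    nodule_dry_mass_g = 0.35   # hypothetical dry nodule biomass
    rate = slope_mol_per_min * 1e9 / nodule_dry_mass_g
    print(f"N2 fixation potential ~ {rate:.2f} nmol C2H4 min^-1 g^-1 dry nodule")

Checking the regression for linearity before accepting the slope, as described above, guards against incubation artifacts such as gas leakage late in the assay.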