
New Chinatown still exists as a tourist attraction and remains a center of local Chinese American life

Representations of Chinatown defined the cultural possibilities of citizenship for Chinese Americans in the same way the law defined the possibilities of legal citizenship. During the Chinese Exclusion Act era, there remained real political and material stakes to the way Chinatown was popularly portrayed. For at least half a century, media elites and civic leaders in Los Angeles had portrayed Old Chinatown as a site of tong violence, illicit drug use, and prostitution. These stereotypes of Chinatown were rooted not just in ideas of race, but also in perceived differences of gender and sexuality. Images of vice and corruption were a direct result of popular representations that depicted Chinatown as a community of bachelors living together in an all-male social world. The few women in the community were usually portrayed as prostitutes. Thus, Chinatown was popularly linked with a deviant form of sexuality that challenged the normative ideas of the white middle-class family united in Christian marriage.3 Furthermore, many white residents of Los Angeles believed that the built environment of Chinatown contributed to this vice. Stories of an underground network of lairs and secret tunnels fostered the idea that Chinatown lay outside the vision and control of white authorities. New Chinatown in Los Angeles built on prior efforts by the Chinese American merchant class throughout North America to redefine the place of Chinatown in the popular imagination. Beginning with the Chinese Village at the 1893 World's Columbian Exposition and continuing through the reconstruction of San Francisco's Chinatown following the 1906 earthquake and fire, Chinese American merchants challenged notions of Chinatowns as disease-ridden slums and refashioned them into spaces of commerce that catered to white tourists.4 During this period, Chinese American merchants served as cultural brokers whose position between white tourists and the vast majority of working-class Chinese Americans allowed them to consciously transform these segregated ethnic communities into sites that presented their own vision of Asia to the outside world. This was done in a way that challenged notions of Chinatowns as manifestations of the Yellow Peril while monetizing these sites so that Chinese American entrepreneurs could make a living.

In New Chinatown, local Chinese American merchants took concepts pioneered in San Francisco's Chinatown and in world's fair expositions and saw them through to their logical end. In fact, New Chinatown was not a neighborhood at all but a corporation, the stock of which was privately held by a select group in the city's emerging Chinese American middle class.5 These merchants and restaurant owners maintained complete control over their new Chinatown. From the land on which the business district was built, to the architectural style of the area's businesses, to the advertisements that publicized the district in the city's papers, New Chinatown reflected the desires of its owners both to attract tourists and to challenge the conceptions that had come to dominate Old Chinatown. The opening day festivities of New Chinatown featured appearances by local Chinese American actors who had made a name for themselves in the China-themed films of the 1930s.6 Following the Japanese invasion of Manchuria in 1931, Hollywood began producing a series of Chinese-themed films, many of which featured Chinese American performers from the Los Angeles area. The most high-profile of these films was MGM's The Good Earth, based on Pearl S. Buck's award-winning 1931 novel. Present at the opening of New Chinatown were Keye Luke and Soo Yung, Chinese American actors with supporting roles in The Good Earth. Also present was Anna May Wong, the most recognizable Chinese American star of the period. Despite being passed over for a role in The Good Earth, Wong had already appeared in a number of high-profile films including The Thief of Bagdad, Piccadilly, and Shanghai Express. New Chinatown would soon feature a willow tree dedicated to Ms. Wong. To complete the Hollywood connection, the New Chinatown opening featured an art exhibit by Tyrus Wong, a Hollywood animator who would later work on the classic animated film Bambi. Despite these connections to Hollywood, in many ways New Chinatown attempted to cast itself as the modern Chinese American alternative to the representation of China seen in films like The Good Earth. The opening gala included flags of both the Republic of China and the United States spread around the district.

The parade featured four hundred members of the Federation of Chinese Clubs, local Chinese American youth, most of them American-born, who had banded together to raise financial support for China following the outbreak of the Sino-Japanese War in 1937.7 At the same time, a number of prominent state and local officials participated in the festivities, including Governor Merriam, who was then locked in a difficult reelection campaign and who hoped that his participation would solidify the small but not insignificant Chinese American vote. In these complex and hybrid ways, the founders positioned New Chinatown as a distinctly Chinese American business district, one that reflected the increasingly U.S.-born demographics of the nation's Chinese American community. New Chinatown was not the only Chinatown to open in Los Angeles in the summer of 1938. Two weekends earlier, less than a mile away, a group of white business leaders headed by philanthropist Christine Sterling had opened their own competing Chinatown, which they dubbed China City.8 If New Chinatown was defined by the ethos of the American-born generation, China City was defined by Hollywood. This was to be a Chinatown that embodied the images film audiences saw when they entered theaters to watch the Chinese- and Chinatown-themed films so popular in the 1930s. New Chinatown may have drawn on Hollywood actors to publicize its existence, but China City was in many senses a Hollywood production. Like New Chinatown, this was a business district, not a neighborhood; but unlike New Chinatown, China City adhered much more closely to the Orientalist images of China produced by Hollywood cinema. In China City, visitors could attend the Bamboo Theater, which featured continuously running films about China. They could walk through a recreation of the set for the House of Wang from The Good Earth. Many of the Chinese Americans employed in China City had also worked as extras on the MGM film.

And so tourists might encounter some of the very people they had seen in the background shots of the film. In China City, tourists could pay to be drawn around by rickshaw. According to the Los Angeles Times, visitors to China City could purchase "coolie hats, fans, idols, miniature temples, and images."9 One of the shops was owned by Tom Gubbins, a local resident of Chinatown who supplied Hollywood with costumes and props for Chinese-themed films and connected local residents with jobs as extras. In both New Chinatown and China City, Chinese Americans utilized Chinatown to mediate dominant ideas about race, gender, and nation.10 These two Chinatowns were more than physical sites for members of an ethnic enclave to make a living. They also represented the apparatus through which the local Chinese American community performed its own cultural representations of China and Chinese people for crowds of largely white visitors. In more ways than one, Chinese American performances in these two districts were the culmination of a fifty-year process through which the Chinese American merchant class challenged Yellow Peril stereotypes by transforming China and Chinese culture into a nonthreatening commodity that could be sold to white tourists. Examining a period of national debate over immigration and U.S. citizenship, this dissertation, "Performing Chinatown: Hollywood Cinema, Tourism, and the Making of a Los Angeles Community, 1882-1943," foregrounds the social, economic, and political contexts through which representations of Chinatown in Los Angeles were produced and consumed. Across five chapters the dissertation asks: To what extent did popular representations and economic opportunities in Hollywood inform life in Los Angeles Chinatown? How did Chinese Americans in Los Angeles create, negotiate, and critically engage representations of Chinatown? And in what ways were the rights of citizenship and national belonging related to popular representations of Chinatown? To answer these questions, the project examines four different "Chinatowns" in Los Angeles—Old Chinatown, New Chinatown, the MGM set for The Good Earth, and China City—between the passage of the Chinese Exclusion Act in 1882 and its repeal in 1943 during the Second World War. The relationship between film and Chinatown stretches back to the 1890s, to a moment when both featured as "urban amusements" for a newly developing white urban public in places like New York, and yet the connection between Chinatown and film reached its zenith in Los Angeles in the 1930s during the height of the Hollywood studio system.

San Francisco's and New York's Chinatowns may have been larger and attracted more tourists, but Los Angeles Chinatown and the Chinese American residents of the city played a more influential role in defining Hollywood representations of China and Chinese people than any other community in the United States. Long before the outbreak of World War II, the residents of Los Angeles Chinatown developed a distinct relationship with the American film industry, one that was not replicated anywhere else during this period. Despite this distinct relationship, there have been no dissertations or academic books published about Los Angeles Chinatown and its relationship to Hollywood cinema. Asian American historians who work on Los Angeles have for the most part focused on the city's Japanese American population.11 Sociologists of the region have focused on Asian Americans in the ethnoburbs of the San Gabriel Valley.12 Film studies scholars who examine Asian American representations have focused primarily on the films themselves or on biographies of a few well-known Hollywood performers such as Anna May Wong, Philip Ahn, and Sessue Hayakawa.13 With professional academics focused on different but related topics, nearly all of the research on the history of Chinese Americans in Los Angeles and their relationship to Hollywood film has been completed by community historians at organizations like the Chinese Historical Society of Southern California and the Chinese American Museum of Los Angeles.14 Most of these community historians are volunteers who research and write out of passion for the subject matter. Many also have family ties to this history. This familial link is the case with the most popular retelling of this history, Lisa See's novel Shanghai Girls; Lisa See is a descendant of the Chinese Americans who lived in Los Angeles before World War II.15 Professional academics, for their part, have all but ignored this history. What accounts for the relative absence of scholarship on the relationship between the Chinese American community of Los Angeles and the Hollywood film industry? Certainly, the topic of Chinatown remains one of the most thoroughly studied aspects of the Asian American experience. Alongside scholarship examining the political and legal apparatuses used to exclude Asian people from the United States, Chinatown is one of the few topics in Asian American studies that elicited significant scholarly consideration before the birth of the field in the late 1960s.16 More than a dozen monographs have examined various aspects of Chinatowns from the fields of sociology and history. In the popular realm, interest in Chinatown as a site of tourism and as a cultural representation also remains strong. In addition to the long-standing interest in Chinatown as an academic topic, the material traces of this history remain highly visible. Films like Shanghai Express, Lost Horizon, and The Good Earth, which all employed Chinese American background performers, are available for home viewing. Photographs of Chinatown performances from this period, including those of the Mei Wah Drum Corps, have been digitized and are available online through archives such as those of the Los Angeles Public Library and its Shades of L.A. project.
And yet the distinct theoretical, methodological, and disciplinary tenets of sociology, social history, and film studies have limited the types of questions scholars have asked about Chinatown and film, and by extension the types of conclusions these scholars have drawn.

Diabetic treatment under the new initiative had objectives similar to those of the asthma component

Utilizing the collaborative technique enabled the primary care practice teams to make many changes in the way they cared for patients with chronic illness. The evidence suggested that improvements in patient outcomes resulted from this intervention. After the late 1990s, more evidence in support of the model appeared. Owing to the general popularity of the model, in 2001 ICIC's three-year Targeted Research Grants Program provided funding for peer-reviewed, applied research that focused on addressing critical questions about the organization and delivery of chronic illness care within health systems. Nineteen projects were selected, with grants totaling approximately $6 million backed by the Robert Wood Johnson Foundation. The research included evaluations of interventions such as group visits or care managers, observational studies of effective practices, and the development of new measures of chronic care. The settings for these studies were primarily community or private health care. Identifying the types of organizations that fare better at improving outcomes for particular disease states continues to be an open question in the literature. The not-for-profit and private sectors continue to embrace the CCM, and organizations like the ICIC continue to devote resources to its development and its ability to improve patient health outcomes. In 2001, the Institute of Medicine published what is now considered a seminal report in the field: Crossing the Quality Chasm: A New Health System for the 21st Century. In the report, the Institute of Medicine outlines six goals for the transformation of health care in the United States. The report specifically references the work of ICIC and calls upon lawmakers at the federal level to make chronic disease care quality improvement a priority issue. Following suit, the National Committee for Quality Assurance and the Joint Commission, two nationally recognized not-for-profit entities that set standards for care in the United States, developed accreditation and certification programs for chronic disease management based on the CCM.

At the same time, both the Joint Commission and the National Committee for Quality Assurance have released additional accreditations built around the patient-centeredness approach of the patient-centered medical home. These new certifications continue the elements proposed by the CCM and advance the work of these pioneers. The Joint Commission's Primary Care Medical Home certification looks at organizations that provide primary care medical services and bases its certification on elements that enable coordination of care and increase patient self-management. This is a model of care based directly on the foundational work provided by the CCM. Additionally, the CCM currently serves as a foundation for new models of primary care advanced by the American College of Physicians and the American Academy of Family Physicians. In 2003, the ICIC program administrators convened a small panel of chronic care expert advisors and updated the CCM to reflect advances in the field of chronic care drawn from both the research literature and the experiences of health care systems that implemented the model in their quality improvement activities. These programs were phased in during early June 2009. The asthma component sought to improve asthma care and asthma outcomes. The objectives of the diabetes component of the program differed from the asthma module in that the program did not focus on the reduction of diabetes-related deaths. Practice reviews did not identify diabetics as having an abnormally high mortality rate; however, improvements were sought in the numbers of hospitalizations and specialist treatment visits. While both chronic conditions were intriguing areas of study for the program's implementation, this paper focuses on the diabetic portion of the implementation because the earlier phase of asthma treatment did not yield sufficient data to enable proper analysis. During the preparatory stage of the Chronic Care Initiative, a not-for-profit consulting organization with correctional health care and learning collaborative experience was selected to assist the California Prison Health Care Services project team.

A statewide system assessment was conducted between January and April 2008. Given the small window of opportunity under the federal receivership to accomplish the turnaround plan of action's objectives, a very aggressive work plan and timeline were developed. To develop the work plan and identify potential problem areas, the team first established a list of limiting factors relevant to the operational environment. It was believed that in developing this list, the institutionalized nature of the organization and its key players could be catalogued. The factors could then be used to address areas in which proactive focus and intervention efforts would be required to enable successful change on the part of long-tenured civil servants. The long-tenured employees were not capable of seeing all the flaws of their own routinized behavior because they had known no other way. The theory under which the team operated was adapted from the above and related research on organizational change. Fernandez and Rainey discuss managing change once the change plan has been implemented and tasks are underway. To be innovative, the CCI team sought ways to stay ahead of the change curve and thus looked to capture variables of interest related to places where proposed change could get stuck by administrators unable to see how their usual behaviors and actions prevented successful change management. As a result, the plan that was developed included tasks specific to the implementation of the chronic care model in the health care setting. The team, in its proactive approach to implementation, identified aspects of organizational behavior that were important to track on the management side and designed methods to track and trend this behavior. Once tracked and trended, these data were used to develop interventions intended to motivate managers to behave in ways the team felt would enable the long-term success and sustainability of the changes at hand. Further, the catalog of behaviors and environmental factors known to have likely deleterious effects on the proposed changes was used to redevelop the private-sector chronic care model itself.

Revisions to the private-sector version of the chronic care model were necessary to fit the model to a custodial setting. With health care needs subordinated to those of security, the program architects found it necessary to modify and enhance aspects of the model's elements. The first and perhaps least profound change was to rename the program the "Chronic Disease Management Program," to avoid the perception that the inmate population would receive a higher level of care than that enjoyed by the community at large; the program actually aimed to reduce the cost of care while maintaining the clinical efficacy of delivery and treatment. A purely political move, it nonetheless set the stage for the alterations required to the rest of the model. Following the discussions concerning the program's name, each of the model's standard elements was analyzed and repackaged to fit the correctional environment. Because the correctional health care literature offers little information on learning collaboratives and quality improvement, an innovative two-phase approach to implementation was developed. Phase 1 focused on piloting the learning collaborative strategy, developing a diabetes change package modified for a correctional environment, and establishing the pilot sites to test the model. Phase 2 had the objectives of statewide implementation of the tested and approved approach from the pilot, while additionally moving on to the next chronic condition for the initial six pilot sites. After identifying the pilot sites, the initiative began with intensive, multidisciplinary work sessions. Subsequent work sessions used an enhanced learning collaborative strategy. Collaborative sessions were planned quarterly for the first year, with teams from different sites attending four two-day learning sessions separated by action periods. An intensive skills-based course on quality improvement was embedded into the learning sessions. Additionally, virtual learning workshops were inserted between the learning sessions to enable each collaborative to build workforce competencies in quality improvement technical skills. At the end of the learning sessions, pilot site teams folded into three regional learning collaboratives involving all 33 prisons to commence Phase 2 activities. The pilot-site champions served as presenters or mentors to the new sites during Phase 2, in a "train-the-trainer" approach. This approach required an initial round of training, and those trained during the first round were then deployed to train the rest of the staff. Figure 3 shows the culturally embedded barriers to implementing the CCM, as determined by the team. These obstacles are described in greater detail in the following section. They represent the targeted aspects of the model which, because of their private-sector origins, would not fit into the custodial setting without modification. The re-adaptation of the model to fit the public sector, and more specifically the custodial environment within a public agency, was designed over several months, and its output was the subject of lively debate. The cost of implementation failure would have been greater than the sum of the time and resources invested. Because many of the receiver-level clinical managers were brought into the receivership organization as employees of a new entity, results were expected. Because those expectations were high, the preparation for program implementation was carefully planned.
It was understood that the receiver’s efforts were focused on remolding institutionalized patterns of action.

Initial efforts began with breaking down the six CCM elements into digestible tasks and deliverables within a project plan. A discussion then ensued concerning the parts of the CCM that would not fit into the existing organization because of cultural barriers. Part of the debate mentioned earlier included discussion among administrative staff with extensive CDCR experience, which provided insight into the barriers to a successful implementation in the custodial setting. A successful adoption of the CCM depends on visible support at all levels of the health care organization, starting with the senior managers. The federal receivership was established to provide the highest level of executive leadership support. The fiscal constraints of the state of California during the period when the program was implemented precluded the full adoption of the CCM in the prison health care system. Clinical management that would otherwise have been dedicated to the coordinated care team was reduced. To increase managers' visibility in relation to this program, attention was placed on coordination-of-care activities. This occurred at all levels, with headquarters-based administrative staff taking the lead in establishing the importance of the program by providing in-service trainings as well as on-site follow-up support. In support of the learning collaboratives, clinical administrators and supervisory staff were brought to headquarters facilities to participate in interactive sessions. It was felt that the overall change in organizational behavior would occur once staff worked in collaboration to define new processes. To create visible leaders, managers were given a role in shaping CCM implementation in a manner that was personally meaningful to them and would thus empower them. Because the prison health care system is a single-payer, closed health care system, the potential to adopt evidence-based quality improvement strategies and practice guidelines is somewhat greater than in other health delivery settings. Because staff in a closed system are internal to the organization, the establishment of guidelines for these staff is an enabling factor for the full adoption of CCM policies, with accountability for adherence to the model and results. The extent to which continuous, internally based labor learns and buys into the new policies and procedures determines the extent to which sustainability of new methods can be achieved. In open health-delivery systems, clinical staff members are treated more as vendors than as internal staff. Because vendor relationships are managed differently than internal staff, adherence to internal policies and procedures is more difficult to achieve. Some prisons institutionalized the use of temporary staff because of the relative ease with which these labor resources can be procured. Though temporary personnel typically cost one and a half to two times as much as a full-time employee, given the remote location of some facilities, temporary staffing was preferred. This practice became institutionalized; as a supervisor declared during interviews, "it was just the thing to do, because who has time to recruit and interview when using [temporary staff] was what everyone did." She went on to note that "we certainly planned our staffing needs and secured the positions but look where we are . . . doctors can go [to the institution literally next door] and earn almost 25 percent more."

The connections between equilibrium outcomes in these various scenarios are established

In contrast, when the intensity of the contest among sites is beyond a certain threshold, competition tends to rapidly increase the fragmentation of published news: as the number of competing publishers increases, more and more a priori unlikely topics are reported, resulting in a large diversity of published topics. This result is reminiscent of the emergence of "funny lists" and "heartwarming videos" in the news mentioned by the Financial Times. Our analysis extends to pure and mixed strategies and distinguishes between cases with small or large numbers of competing publishers. Next, in a model with firm asymmetries, we find that when some firms have better technology to forecast the popularity of topics, then, surprisingly, the overall diversity of news published by the remaining firms declines, as these firms tend to take refuge in publishing 'safer' topics. When a subset of firms earn extra revenue from loyal users for a published 'hit', these 'branded' publishers tend to be conservative in their choice of topics, as their loyal customer base represents 'insurance' against the contest. In contrast, the diversity of news published by unbranded outlets increases, as unbranded publishers tend to avoid branded ones by putting more weight on a priori unlikely stories. These results are consistent with anecdotal evidence from the news industry, where traditional news outlets are more conservative in their reporting whereas new entrants do not shy away from controversial stories. The findings also conform to the increase in diversity of the public agenda broadly observed by communication theorists. In a final analysis, we consider endogenous success probabilities. It is widely accepted that the media often 'makes the news' in the sense that a topic may become relevant simply because it got published. Interestingly, such a dynamic has an ambiguous effect on the diversity of published topics. If the contest is very strong, then it results in a concentrated set of a priori likely topics. When the contest is moderate, the diversity of topics may be higher, depending on the number of competing outlets.

The article is organized as follows. In the next section, we summarize the relevant literature. This is followed by the description of the basic model and its analysis, where we first present a variety of results concerning symmetric competitors. Next, we extend the model to explore the impact of asymmetries across firms. Our last extension considers the case of endogenous success probabilities. The article ends with a discussion of the results and their applicability to other contexts. To facilitate reading, all proofs are relegated to the Appendix. The topic of this article is generally related to the literature on agenda setting, which studies the role of media in focusing the public on certain topics instead of others. It is broadly believed that agenda setting has a greater influence on the public than published opinion, whose explicit purpose is to influence the readers' perspective. As the famous saying by Bernard Cohen goes: "The media may not be successful in telling people what to think, but they are stunningly successful in telling their audiences what to think about." The literature examines the mechanisms that lead to the emergence of topics and the diversity of topics across media outlets. In particular, McCombs and Zhu show that the general diversity of topics as well as their volatility have been steadily increasing over time. The general focus of our article is similar: we show that the nature of competition is an important mechanism affecting the diversity of the public agenda. Agenda setting is also addressed in the literature studying the political economy of mass media. The standard theory states that media coverage is higher for topics that are of interest to larger groups, that have larger advertising potential, and that are journalistically more "newsworthy" and cheaper to distribute. Although there is little empirical evidence to support some of these hypotheses, the others are generally supported by Snyder and Stromberg, among others. One of these hypotheses is particularly interesting from our standpoint. Eisensee and Stromberg show that the demand for topics can vary substantially over time. For example, sensational topics of general interest may crowd out other 'important' topics that would be covered otherwise. This supports the general notion that media need to constantly forecast the likely success of topics and select among them accordingly.

Our main interest is different from this literature's, as we primarily focus on media competition as opposed to what causes variations in demand. Taking the demand as given, our goal is to understand how the competitive forces between media firms influence the selection and diversity of topics, which then has a major impact on the public agenda. As such, the article also relates to the literature on media competition, where strategic behavior influences product variety. Early theoretical work by Steiner and Beebe on the "genre" selection of broadcasters explains cases of insufficient variety provision in an oligopoly. Interestingly, they show that although certain situations lead to the duplication of popular genres, other scenarios may lead to a "lowest common denominator" outcome where no consumer's first choice of genre is ever served. A good discussion of these models and their extensions can be found in Anderson and Waldfogel. Our work is different from this literature in two important ways: we do not have consumer heterogeneity, and we do not rely on barriers to entry to explain limited variety. In fact, we study variety precisely when these factors' importance is greatly diminished. On the empirical side, research on competition primarily focuses on how media concentration affects the diversity of news, both in terms of the issues discussed in the media and the diversity of opinion on a particular issue. For example, George and Oberholzer-Gee show that in local broadcast news, "issue diversity" grows with increased competition even though political diversity tends to decrease. Franceschelli studies the impact of the Internet on news coverage, in particular the recent decrease in the lead time for catching up with missed breaking news. He argues that missing the breaking news has less impact, as the news outlet can catch up with rivals in less time. This might lead to a free-riding effect among media outlets, where there is less incentive to identify the breaking news. Both of these articles report empirical findings consistent with our results and assumptions. In terms of the analytical model, we rely on the literature studying competitive contests among forecasters. For example, Ottaviani and Sorensen use a similar framework to model competition among financial analysts. Our model is different in that we explore in more detail the structure of the state space, we generalize the contest model by considering all possible prize-sharing structures, and we extend it in a variety of ways, most notably by analyzing asymmetries across players. This article studies competition among news providers who compete in a contest to publish on a relatively small number of topics from a large set, when these topics' prior success probabilities differ and when their success may be correlated.

We show that the competitive dynamic generated by a strong enough contest causes firms to publish 'isolated' topics with relatively small prior success probabilities. The stronger the competition, the more diverse the published news is likely to be. Applied to the context of today's news markets, characterized by increased competition between firms, new entrants, and reduced customer loyalty, we expect a more diverse set of topics covered by the news industry. Although direct evidence is scarce, there seems to be strong empirical support for the general notion that the public agenda has become more diverse over time while also exhibiting more volatility (McCombs). This general finding is consistent with our results. Although diversity of news may generally be considered a good thing, agenda setting, i.e. focusing the public on a few worthy topics, may be impaired by increased competition. In a next step, we explore differences across news providers and find that branded outlets with a loyal customer base are likely to be conservative in their choice of reporting, in the sense that they report news that is a priori agreed to be important. Facing new competitors with better forecasting ability also makes traditional media more conservative. In sum, if the public considers traditional media and not the new entrants as the key players in agenda setting, then increased competition may actually make for a more concentrated set of a priori important topics on the agenda. It is not clear, however, that traditional news outlets can maintain their privileged status in this regard forever. Some new entrants have managed to build a relatively strong 'voice' over the last few years. We also explore what happens when the success of news is endogenous, i.e. if the act of publishing a topic ends up increasing its likely success. Interestingly, we find that an excessively strong contest tends to concentrate reporting on topics with the highest a priori success probabilities. We also find that the number of competitors has a somewhat ambiguous effect on the outcome. If there are too few or too many competing firms, then agenda setting again tends to remain conservative in the sense of focusing on the a priori likely topics. These results also resonate with anecdotal evidence concerning today's industry dynamics. Our analysis did not consider social welfare. This is hard to do, as it is not clear how one measures consumer surplus in the context of news. Indeed, the model is silent as to consumers' utility when it comes to the diversity of news. Although policy makers generally consider the diversity of news a desirable outcome, a view that often guides policy and regulatory choices, it is not entirely clear that, beyond a certain threshold, more diversity is always good for consumers. As mentioned in the introduction, the media does have an agenda-setting role, and it is hard to argue that an agenda in which every topic is equally represented in the news is useful for coordinating collective social decisions. Nevertheless, our goal was to identify the competitive forces that may play a role in determining the diversity of news in today's environment, increasingly dominated by social media. Our analysis indicates that these forces do not necessarily have a straightforward impact on diversity. The generalized contest model presented has implications for other economic situations that may be well described by contests.
In this sense, our most relevant results are those that describe the outcome as a function of the reward-sharing patterns across winners. Indeed, we characterize all such patterns with a simple parameter, r, and show that depending on r there are only three qualitatively different outcomes leading to vastly different firm behaviors. Different r's may characterize different contexts. For our case a finite, albeit varying, r seemed appropriate and r = is less likely. In the case of a contest describing R&D competition, r = is quite plausible. Conversely, the case of r = 0 may well apply to contests among forecasters, whose reward might be linked more closely to actually forecasting the event and less to how many other forecasters managed to do so. Our analysis of the case with a small number of firms may also be useful in particular situations; we show that this case is tractable and shares many characteristics with the case involving many players. An important insight from our analysis is that contest models need to be carefully adjusted to the particular situations studied. Our framework can be extended in a number of directions. So far, we assumed a static model, one where repeated contests are entirely independent. One could also study the industry with repeated contests between media firms, where an assumption is made on how success in a period may influence the reward or the predictive power of a medium in the next period. A similar setup is studied with a Markovian model by Ofek and Sarvary to describe industry evolution for high-tech product categories. Finally, our article generated a number of hypotheses that would be interesting to verify in future empirical research.
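To make the contest mechanics concrete, the sketch below simulates a symmetric topic-choice game of the kind described above: each of n firms picks one topic whose prior success probability is p_k, and a successful topic's reward is diluted among the firms that chose it by a factor 1/(1+m)^r, where m counts rivals on the same topic. This payoff rule, the topic probabilities, and the number of firms are illustrative assumptions rather than the article's exact specification, and the replicator-style iteration is only a rough way to approximate a symmetric mixed strategy. With r = 0 (no dilution) the choices concentrate on the most likely topic, while larger r spreads weight onto a priori unlikely topics, mirroring the diversity result.

```python
import numpy as np
from math import comb

def expected_share(q_k, n_rivals, r):
    """E[1 / (1 + M)^r], where M ~ Binomial(n_rivals, q_k) counts rivals on the same topic."""
    m = np.arange(n_rivals + 1)
    pmf = np.array([comb(n_rivals, int(i)) for i in m]) * q_k**m * (1.0 - q_k)**(n_rivals - m)
    return float(np.sum(pmf / (1.0 + m) ** r))

def symmetric_mixed_strategy(p, n_firms, r, iters=5000, step=0.2):
    """Replicator-style iteration toward a symmetric mixed strategy q over topics."""
    q = np.ones_like(p) / len(p)
    for _ in range(iters):
        payoff = np.array([p[k] * expected_share(q[k], n_firms - 1, r) for k in range(len(p))])
        avg = float(q @ payoff)
        q = (1.0 - step) * q + step * q * payoff / avg   # topics with above-average payoff gain weight
        q = np.clip(q, 1e-12, None)
        q /= q.sum()
    return q

def entropy(q):
    """Shannon entropy of the topic-choice distribution, used here as a simple diversity measure."""
    return float(-np.sum(q * np.log(q + 1e-15)))

if __name__ == "__main__":
    # Hypothetical prior success probabilities: three 'likely' topics and many unlikely ones.
    p = np.array([0.5, 0.4, 0.3] + [0.05] * 17)
    for r in (0.0, 0.5, 1.0, 2.0):   # r = 0: no prize dilution; larger r: stronger contest
        q = symmetric_mixed_strategy(p, n_firms=10, r=r)
        print(f"r = {r:3.1f}   diversity of published topics (entropy) = {entropy(q):.2f}")
```

Running the script prints the entropy of the equilibrium topic choice for each sharing intensity; under these assumptions the entropy rises with r, which is the qualitative pattern the article attributes to stronger contests.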

Three RCTs and two non-RCTs reported on culturally sensitive or culturally adapted interventions

All studies employed the same widely used and validated screening instrument, the Patient Health Questionnaire-9 (PHQ-9), to determine baseline depression diagnosis. However, there was wide variability in the measures used to define study outcomes. To determine depressive symptom improvement, six of nine studies used the PHQ-9, two studies used the Hopkins Symptom Checklist Depression Scale, and one study used the Hamilton Rating Scale for Depression, the Clinical Global Impression Severity Scale, and the Clinical Global Impression Improvement Scale. One study reported that researchers used their own translation of the PHQ-9 into Chinese, which had been validated in a prior study. Other studies did not specify whether they used validated translations or translated their own instruments. All studies adequately described the interventions and the control conditions. Two studies reported post-intervention follow-up and included outcomes a year after the intervention had ended. Not all studies reported how frequently care managers contacted patients in the intervention group during follow-up. The mean age ranged from 34.8 to 57 years across studies, and 1166 of 4859 participants were male. Among the nine studies, 2679 participants had LEP. Most studies focused on Latino immigrants living in the United States, with Spanish as the preferred language;3 only two studies included Chinese and Vietnamese immigrants. The majority of LEP participants spoke Spanish. One hundred and ninety-five patients with LEP spoke Mandarin, Cantonese, or Vietnamese. Two studies had poor characterization of participant languages, noting that many spoke "Asian languages," and citing only clinic language demographics. In two studies reporting that patients preferred a non-English language, the degree of English language proficiency was not described. Three-quarters of participants were recruited from general primary care and had a variety of medical conditions. Other participants were recruited into the studies for specific comorbidities. While intervention details were not always fully described, eight of nine studies employed bilingual care managers for the delivery of care in the collaborative care model.

The ninth study did not explicitly mention how the intervention was delivered to patients with LEP. No studies reported on the use of interpreters. These five studies explicitly tailored their interventions to different cultural groups. The two RCTs and one non-RCT serving Spanish-speaking patients, all conducted by the same research group, culturally tailored the collaborative care model by adapting the intervention materials for literacy and for idiomatic and cultural content. They further included cultural competency training for staff and employed bilingual staff to conduct the intervention. The remaining studies mentioned adding a cultural component to the collaborative care model with the goal of serving Asian immigrants with traditional beliefs about mental illness. One study further adapted the psychiatric assessment for cultural sensitivity. Four of five RCTs reported on change in depressive symptoms; none reported outcomes by preferred language group. Three RCTs reported that the proportion of patients who experienced a ≥ 50% reduction in depressive symptom score was 13% to 25% greater in the intervention arm than in usual care. The last RCT, Yeung et al., reported no statistically significant difference between treatment groups at 6 months;44 however, the investigators noted availability and high uptake of psychiatric services in both study arms. Three of these four RCTs included cultural tailoring of their interventions. Two RCTs reported on receipt of depression treatment and treatment preferences. In one RCT, 84% of patients treated in the collaborative care intervention received depression treatment, compared to only 33% of patients in the enhanced usual care arm, over 12 months of follow-up. Another RCT focused on depression treatment preferences. Using conjoint analysis preference surveys, this study found that patients preferred counseling or counseling plus medication over antidepressants alone, and that patients preferred treatment in primary care rather than in specialty mental health care. Patients in the collaborative care intervention group were much more likely to receive their preferred treatment at 16 weeks than were patients in usual care. However, this study also found that English speakers in both groups were more likely to receive their preferred treatment modality than their Spanish-speaking counterparts.
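For readers unfamiliar with this outcome definition, the short sketch below computes the proportion of patients with at least a 50% reduction in PHQ-9 score from baseline ("responders") in each study arm and the absolute difference between arms. The scores are invented for illustration only; they are not data from the reviewed trials.

```python
import numpy as np

def responder_rate(baseline, followup, threshold=0.5):
    """Proportion of patients with at least a 50% reduction in PHQ-9 score from baseline."""
    baseline = np.asarray(baseline, dtype=float)
    followup = np.asarray(followup, dtype=float)
    reduction = (baseline - followup) / baseline
    return float(np.mean(reduction >= threshold))

# Hypothetical PHQ-9 scores, for illustration only (not data from the reviewed studies).
baseline_cc = [18, 15, 20, 12, 16, 14, 19, 17]   # collaborative care arm, baseline
followup_cc = [ 7,  6, 12,  5,  9, 10,  8,  9]   # collaborative care arm, follow-up
baseline_uc = [17, 16, 19, 13, 15, 14, 18, 16]   # usual care arm, baseline
followup_uc = [12, 13, 11,  9, 12, 11, 15,  8]   # usual care arm, follow-up

rr_cc = responder_rate(baseline_cc, followup_cc)
rr_uc = responder_rate(baseline_uc, followup_uc)
print(f"Collaborative care responders: {rr_cc:.0%}")
print(f"Usual care responders:         {rr_uc:.0%}")
print(f"Absolute difference:           {rr_cc - rr_uc:.0%}")
```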

One non-RCT study46 found that 49% and 48% of patients reported improved depressive symptoms at 6 and 12 months, respectively, among study participants treated with collaborative care. The two studies that reported outcomes by preferred language found significant differences between English- and Spanish-speaking patients. Bauer et al. found that Spanish language preference was associated with more rapid and greater overall improvement, when compared to English preference, despite not being associated with receipt of appropriate pharmacotherapy. Similarly, Sanchez et al. found that Spanish-speaking Hispanic patients had significantly greater odds of achieving clinically meaningful improvement in depressive symptoms at 3-month follow-up than did non-Hispanic whites. In contrast, Ratzliff et al. found similar treatment process and depression outcomes at 16 weeks among three groups treated with collaborative care: Asians treated at a culturally sensitive clinic, Asians treated at a general clinic, and whites treated at a general clinic. Furthermore, the study did not have a usual care control group to enable evaluation of the intervention. Despite the existence of effective treatment, depression care for patients with LEP is challenging for both patients and clinicians, and better models of care are needed. In a systematic review of the current literature on outpatient, primary care based collaborative care treatment of depression, we found that collaborative care delivered by bilingual providers was more effective than usual care in treating depressive symptoms among patients with LEP. The systematic review revealed important limitations in the current evidence base. The review was limited by the low number of studies, heterogeneity of study outcomes and definitions, and a lack of data on use of language access services. However, the randomized controlled studies were consistent in treatment effect size, as three of four high-quality RCTs found that 13%–25% more patients reported improved depressive symptoms when treated with collaborative care compared to usual care; the fourth had unusually high rates of treatment in the comparison arm and found no difference between groups. This is consistent with prior systematic reviews of collaborative care treatment.

Review of two cohort studies that reported outcomes by preferred language found similar-sized improvements, as 10% and 27% more Spanish-speaking patients had improved depressive symptoms during 3 months of follow-up when treated with collaborative care, indicating that patients with LEP may benefit as much as, if not more than, English-speaking patients treated with collaborative care. In short, the collaborative care model—with its emphasis on regular screening, standardized metrics, validated instruments, proactive management, and individualized care, and when adapted for care of LEP patients with depression via the use of bilingual providers—appears to improve care for this patient population. Yet while the collaborative care model has performed well in research studies, many questions remain for wider implementation and dissemination in systems caring for patients with LEP. To help guide the dissemination of an effective model of collaborative care for patients with LEP, researchers will need to be more specific in detailing the language skills of participants and any cultural tailoring and adaptations made to the model to serve specific populations, as we found that race and ethnicity are often conflated with language in these studies, and that preferred language and degree of English language proficiency are not always made explicit. Language barriers may increase the possibility of diagnostic assessment bias, diagnostic errors, and decreased engagement and retention in depression care. It is important to note that most studies employed bilingual staff; language concordance may be particularly important when dealing with mental health concerns, as it is associated with increased patient trust in providers, improved adherence to medications, and increased shared decision-making. Furthermore, the collaborative care model may have been addressing cultural barriers to care beyond linguistic barriers. While a few of the studies culturally adapted and modified their collaborative care model and their psychiatric assessments, these adaptations were not addressed in detail and may be difficult to replicate in other settings. Best practices for culturally adapting collaborative care for patients with LEP have yet to be defined. Further research is also needed to more rigorously ascertain the effect of cultural versus linguistic tailoring on the effectiveness of collaborative care in LEP groups. Additionally, given the evidence that depression in racial and ethnic minorities and patients with LEP often goes unrecognized, efforts will be needed to make sure these groups are systematically screened for depressive symptoms and referred for care in culturally sensitive ways. One large implementation study in the state of Minnesota found a marked difference in enrollment into collaborative care by LEP status. Of those eligible for a non-research-oriented collaborative care model, only 18.2% of eligible LEP patients were enrolled over a 3-year period, compared to 47.2% of eligible English-speaking patients. Similarly, Asian patients were underrepresented in studies and likely in collaborative care programs. Yeung et al.
reported that the majority of Chinese immigrants with depression were under-recognized and under-treated in primary care, as evidenced by the fact that only 7% of patients who screened positive for depression were engaged in treatment in primary care clinics in Massachusetts. Referral processes for collaborative care may also need to be improved for patients with LEP.

The reasons for differences in enrollment by LEP status in collaborative care programs remain poorly elucidated and likely include patient-, provider-, and systems-based factors. However, these results suggest that without targeted efforts to screen, enroll, and engage patients with LEP, collaborative care models may only widen mental health disparities for such patients. Studies that examine implementation and sustainability of the collaborative care model are needed. This review has a number of limitations. We may have missed studies where language and participant origin were not adequately described. Additionally, as has been noted in prior systematic reviews of RCTs of collaborative care, participant and provider blinding would not have been feasible, due to the nature of the interventions. Other limitations include the variability in study duration and outcome assessment, making direct outcome comparison difficult. Finally, of the nine studies included in this review, five were conducted in Los Angeles, CA. This may limit the generalizability of our results.

Circadian rhythms arise from genetically encoded molecular clocks that originate at the cellular level and operate with an intrinsic period of about a day. The timekeeping encoded by these self-sustained biological clocks persists in constant darkness but responds acutely to changes in daily environmental cues, like light, to keep internal clocks aligned with the external environment. Therefore, circadian rhythms help organisms predict changes in their environment and temporally program regular changes in their behavior and physiology. The circadian clock in mammals is driven by several interlocked transcription-translation feedback loops (TTFLs). The integration of these interlocked loops is a complicated process that is orchestrated by a core feedback loop in which the heterodimeric transcription factor complex, CLOCK:BMAL1, promotes the transcription of its own repressors, Cryptochrome and Period, as well as other clock-controlled genes. Notably, there is some redundancy in this system, as paralogs of both PER and CRY proteins participate in the core TTFL. In general, these proteins accumulate in the cytoplasm, interact with one another, and recruit a kinase that is essential for the clock, Casein Kinase 1 δ/ε (CK1δ/ε), eventually making their way into the nucleus as a large complex to repress CLOCK:BMAL1 transcriptional activity. Despite this relatively simple model for the core circadian feedback loop, there is growing evidence that the different repressor complexes that exist throughout the evening may regulate CLOCK:BMAL1 in distinct ways. PER proteins are essential for the nucleation of large protein complexes that form early in the repressive phase, acting as stoichiometrically limiting factors that are temporally regulated through oscillations in expression. As a consequence, circadian rhythms can be disrupted by constitutively overexpressing PER proteins or established de novo with tunable periods through inducible regulation of PER oscillations. CK1δ/ε regulate PER abundance by controlling its degradation post-translationally; accordingly, mutations in the kinases or their phosphorylation sites on PER2 can induce large changes in circadian period, firmly establishing this regulatory mechanism as a central regulator of the mammalian circadian clock.
CRY proteins bind directly to CLOCK:BMAL1 and mediate the interaction of PER-CK1δ/ε complexes with CLOCK:BMAL1, leading to phosphorylation of the transcription factor and its release from DNA; they also act as direct repressors of CLOCK:BMAL1 activity by sequestering the transcriptional activation domain of BMAL1 from coactivators such as CBP/p300.
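As a generic illustration of how a delayed negative transcription-translation feedback loop of this kind can produce self-sustained, roughly day-long oscillations, the sketch below integrates a classic Goodwin-type three-variable model. It is not a calibrated model of the CLOCK:BMAL1/PER/CRY loop; the variables and parameter values are arbitrary choices made only so that the loop oscillates on a circadian-like timescale.

```python
# Goodwin-type negative feedback loop: mRNA -> protein -> nuclear repressor -| transcription.
# Generic illustration of the TTFL principle only; not a calibrated CLOCK:BMAL1/PER/CRY model.
import numpy as np
from scipy.integrate import solve_ivp

def goodwin(t, y, v=1.0, k=1.0, n=10, d=0.15):
    m, p, r = y                               # mRNA, cytoplasmic protein, nuclear repressor
    dm = v / (1.0 + (r / k) ** n) - d * m     # transcription, repressed by the nuclear species
    dp = m - d * p                            # translation
    dr = p - d * r                            # nuclear accumulation of the repressor
    return [dm, dp, dr]

sol = solve_ivp(goodwin, (0.0, 480.0), [0.1, 0.1, 0.1], max_step=0.1)  # time in hours
m = sol.y[0]
# Crude period estimate from successive peaks of the mRNA trace (later peaks avoid the transient).
peaks = np.where((m[1:-1] > m[:-2]) & (m[1:-1] > m[2:]))[0] + 1
peak_times = sol.t[peaks]
print("approximate period (h):", float(np.mean(np.diff(peak_times[-5:]))))
```

With these arbitrary parameters the loop settles into limit-cycle oscillations with a period of roughly a day, illustrating why a sufficiently delayed and nonlinear negative feedback is enough to generate rhythmicity.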

The chirality reversal field can be almost halved when a short-pulsed field is applied

The analysis indicates that such a large reservoir acts as a potential evaporating surface that decreases the local surface temperature and cools the entire atmospheric column, decreasing upward motion and resulting in sinking air. This sinking air mass causes low-level moisture divergence, decreases cloudiness, and increases net downward radiation, which tends to increase the surface temperature. However, the evaporative cooling dominates the radiative heating, resulting in a net decrease in surface and 2 m air temperature. The strong evaporation pumps moisture into the atmosphere, which suggests an increase in precipitation, but the moisture divergence moves this moisture away from the TGD region, with no net change in precipitation. The two processes, increased latent heating with surface cooling and decreased cloudiness with increased downward solar radiation, are opposing feedbacks that are dominated here by the area-mean surface cooling effect. It is not clear whether this holds true for other times of the year, when the mean Tmax is lower and cloudiness may be higher. Furthermore, the impacts on the local monsoon flow and on precipitation intensity and frequency have not been studied in this initial investigation. However, these relative changes are significant and will likely have an impact on local ecosystems, agriculture, energy, and the population. Simulations at 10 km are not sufficiently fine to determine the full extent of this sensitivity and, hence, 1 km multi-year simulations will be needed.

A magnetic vortex state1,2 is a ground state of a magnetic nanostructure that consists of a perpendicularly magnetized core and in-plane curling magnetizations around the core. Because of its importance in fundamental physics, research on the vortex state is an important emerging topic in magnetism studies, and it has high potential for application in high-density data storage devices. A magnetic vortex state is energetically fourfold degenerate, being determined by its polarity and chirality, where the polarity, p, refers to the perpendicular direction of the core magnetization and the chirality, c, refers to the curling direction of the in-plane magnetization.

Obviously, the success of a magnetic vortex device will critically depend on the question of how to control the vortex polarity and chirality effectively. Much effort has been invested recently in developing various methods for reversing the vortex polarity and chirality with a low magnetic field. While the chirality can be reversed easily with a weak field of <50 mT, the magnetic field required to reverse the vortex core is on the order of 500 mT, which is too large for practical use in device applications. To reduce the vortex core-reversal field, an alternative approach used a dynamic field. A promising result has also been reported for an AC oscillating magnetic field set at the vortex resonance frequency, so that the vortex excitation could assist its polarity reversal. A representative example of such an approach is the vortex gyration excitation, in which the vortex core exhibits a spiral motion as an AC magnetic field is turned on at the gyration eigenfrequency. Core switching occurs subsequently through vortex–antivortex creation and annihilation6 as the core's moving speed exceeds a critical value. The core reversal field can be reduced in such a manner to values far below 10 mT. However, this method contains a fundamental problem for applications. After the core reversal and turning off the field, the core gyration exponentially decays to its initial position. The decay radius is comparable to the lateral size of the sample, and the relaxation takes a few hundred nanoseconds. This is a severe obstacle to reading the polarity. Recently, Wang and Dong and Yoo et al. found a new method of vortex core flip from numerical simulation. They demonstrated that the vortex core polarity could be switched in a radial excitation mode by a perpendicular AC magnetic field. In contrast to the gyration mode-assisted switching, which involves vortex core motion, the radial mode-assisted core switching involves only axially symmetric oscillations, thus preserving the vortex core position. Obviously, the radial mode-assisted core switching has a completely different mechanism from the gyration mode-assisted core switching. The underlying mechanism of the radial mode-assisted core switching was not clearly shown by the simulation.

The critical field obtained with the radial mode in these studies is on the order of 20 mT, larger than that for gyration mode-assisted core reversal. In this work, we studied the underlying mechanism of the radial mode oscillation and outlined a new pathway to reduce the core switching field further, down to the mT range, making it comparable to the critical field of the gyration-assisted core switching. In addition to micromagnetic simulations, we also established a dynamical equation for the radial mode oscillation from the Landau–Lifshitz–Gilbert equation. This equation clearly reveals the nonlinear behavior of the radial mode and the critical field reduction. For direct comparison of the critical field reduction, the simulation structure was set as described by Yoo et al. According to previous studies, the radial modes are classified by the node number n. The first mode has one node, the vortex core, which means that the magnetization does not oscillate temporally at the vortex core, while the other parts oscillate almost uniformly. The second mode has two nodes; one is the vortex core and the other is a concentric circle. Yoo et al. studied the resonance frequency of each individual radial mode and obtained the eigenfrequencies with the same sample structure as in this study: 10.7 GHz for the first mode, 15.2 GHz for the second mode, and 20.7 GHz for the third mode. They also showed vortex core polarity reversal using the first mode with an oscillating external field of 20 mT. To reduce the radial mode-induced critical field below 10 mT, we excited the first mode of the radial oscillation with a different method, namely sweeping of the external field frequency. The field was sinusoidal with an amplitude of 9 mT, and the field frequency f was slowly varied from 14.0 to 6.0 GHz over 40 ns. Figure 1b shows the magnetization oscillation during the frequency sweep. The normalized magnetization along the thickness direction, mz, and the external magnetic field, Hz, were plotted together. The term ⟨mz⟩ denotes the spatial average over the entire disk. The magnetization oscillates at the same frequency as the field, apart from a phase difference. From this oscillation, we can obtain the oscillation amplitude of the magnetization in the thickness direction, Iz, which is half the difference between the nearest maximum and minimum values of the ⟨mz⟩ oscillation.

After reaching an external field frequency of 6.0 GHz, the frequency sweeping direction was reversed and f returned to 14.0 GHz. In Fig. 1c, Iz is shown as a function of f. It is interesting to note that an external field of 9 mT can reverse the vortex core polarity. In downward sweeping of the frequency, an almost uniform magnetization oscillation was observed across the disk, except for the core, which conserved its width. This uniform oscillation was maintained until Iz reached the maximum amplitude of 0.28 at f = 8.7 GHz. After reaching this critical amplitude, the uniform oscillation collapsed and converged toward the disk center, generating a breathing motion of the core. This breathing generated a strong exchange field when the core was compressed, and core polarity switching then occurred. Amplitude fluctuations near 8.5 GHz and 10.5 GHz are transition effects discussed below. In contrast to downward sweeping, the upward frequency sweeping did not reach the amplitude of 0.28, so the vortex maintained its polarity. This means that one cycle of frequency sweeping generated one core reversal. It is notable that the amplitude obtained with fixed field frequencies was the same as that obtained with upward sweeping. The fixed-frequency amplitudes were determined by amplitude saturation after turning on the external oscillating field. To reverse the core polarity with upward sweeping or with a fixed-frequency oscillation, a larger field was required to achieve sufficient oscillation amplitude. From this frequency-sweeping simulation, it was verified that the critical field was reduced to below 10 mT, and this reduction was observed only in downward sweeping because of the hysteresis behavior of the frequency response. We tested the scalability of the radial mode-induced core reversal. When the radius of the disk was 120 nm, the critical field obtained by the frequency sweeping method was 9.3 mT. The core of a disk with a radius of 250 nm reverses its polarity with a 12 mT external field. As the radius increases, the critical field also increases. This scalability is an important property for developing data storage devices. In contrast to the radial mode-induced polarity switching, the critical field for gyration-induced polarity switching exhibits an inverse dependence on radius [19], as does the chirality reversal [13]. Finally, we point out the chaotic behavior and the phase commensurability in the radial mode oscillation for further studies. Petit-Watelot et al. observed chaos and phase locking in vortex gyration with core reversal [31]. We observed similar behavior in the radial mode oscillation. A nonlinear oscillator with a sufficiently large driving force is expected to exhibit chaotic motion. We confirmed this chaotic behavior in the radial mode of the vortex. When the oscillating field strength was smaller than Hc, a plot of a variable against its time derivative, for example d⟨mz⟩/dt versus ⟨mz⟩, showed a circular trajectory. But when the field was larger than Hc, this plot became complex in phase space, manifesting chaotic behavior. Figure 5 shows examples of the chaos in the radial mode. The frequency was fixed at 13.5 GHz. When H = 60 mT < Hc, the trajectory was a closed circle, but when H = 90 mT > Hc, the trajectory was not closed. Further increases in the field resulted in closed trajectories. However, the trajectories were not simple circles.
To close the trajectory, 14 cycles of the field oscillation were needed, and during these 14 cycles the core reversed four times. In the case of H = 120 mT, core reversal occurred twice in five field oscillations, implying that the core-reversal rate is related to the chaotic behavior.
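The amplitude hysteresis described for the frequency sweep is characteristic of a driven Duffing-type nonlinear oscillator, the analogy the summary below makes explicit. The following is a minimal sketch under that analogy, not a micromagnetic simulation: it integrates a generic damped Duffing oscillator while quasi-statically stepping the drive frequency down and then up and records the steady oscillation amplitude at each step. All parameters and units are illustrative, chosen only so the bistable (hysteretic) regime appears.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Damped, driven Duffing oscillator (dimensionless units):
#   x'' + 2*GAMMA*x' + W0^2*x + BETA*x^3 = H*cos(2*pi*f*t)
# With a softening nonlinearity (BETA < 0) the steady-state amplitude versus
# drive frequency is multivalued, so downward and upward sweeps settle on
# different branches, analogous to the radial-mode hysteresis in the text.
GAMMA, W0, BETA, H = 0.01, 1.0, -0.5, 0.01   # illustrative values only

def rhs(t, y, f):
    x, v = y
    return [v, -2*GAMMA*v - W0**2*x - BETA*x**3 + H*np.cos(2*np.pi*f*t)]

def sweep(freqs, cycles_per_step=60):
    """Quasi-static sweep: at each frequency, continue from the previous end
    state and record the oscillation amplitude after the transient decays."""
    y = [0.0, 0.0]
    amps = []
    for f in freqs:
        t_end = cycles_per_step / f
        sol = solve_ivp(rhs, (0.0, t_end), y, args=(f,), max_step=0.05 / f)
        y = sol.y[:, -1]
        tail = sol.y[0, sol.t > 0.7 * t_end]          # discard the transient
        amps.append(0.5 * (tail.max() - tail.min()))  # half peak-to-peak
    return np.array(amps)

freqs = np.linspace(1.1, 0.85, 120) * W0 / (2 * np.pi)
amp_down = sweep(freqs)              # downward sweep reaches the upper branch
amp_up = sweep(freqs[::-1])[::-1]    # upward sweep stays on the lower branch

jump = np.argmax(np.abs(amp_down - amp_up))
print(f"largest up/down amplitude difference near f = {freqs[jump]:.4f}")
```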

Thus, to describe the radial mode of the vortex, including its chaotic behavior, a core polarity-related term [32] is needed in the equation of motion. In summary, we studied the nonlinear resonance of the radial mode of the vortex and found that the oscillation mode, corresponding to a Duffing-type nonlinear oscillator, exhibits hysteresis with respect to the external field frequency. Through the hysteresis effect, we can access a hidden amplitude that is almost double that obtained with a fixed field frequency, and this amplitude-multiplication effect reduces the critical field to below 10 mT. In addition, we pointed out the chaotic behavior of the radial mode for further studies. We think that, to complete the study of vortex dynamics, it is timely to start research on the nonlinear behavior of radial modes, as well as of other oscillations of the magnetic vortex.

Targeted protein degradation has emerged over the last two decades as a promising therapeutic strategy with advantages over conventional inhibition. Unlike inhibitors, which operate through occupancy-driven pharmacology, degraders can enable catalytic and durable knockdown of protein levels using event-driven pharmacology. Most degrader technologies, such as proteolysis targeting chimeras and immunomodulatory imide drugs, co-opt the ubiquitin-proteasome system to degrade traditionally challenging proteins. Intracellular small-molecule degraders have demonstrated success in targeting over 60 proteins, and several are currently in clinical trials. However, due to their intracellular mechanism of action, these approaches are limited to targeting proteins with ligandable cytosolic domains. To expand targeted degradation to the cell surface and extracellular proteome, two lysosomal degradation platforms have recently been developed. One, lysosome targeting chimeras (LYTACs), utilizes IgG-glycan bioconjugates to co-opt lysosome shuttling receptors. LYTAC production requires complex chemical synthesis and in vitro bioconjugation of large glycans, which are preferentially cleared in the liver, limiting the applicability of this platform. A second extracellular degradation platform, called antibody-based PROTACs (AbTACs), utilizes bispecific IgGs to hijack cell surface E3 ligases. Due to the dependence on intracellular ubiquitin transfer, AbTACs are limited to targeting cell surface proteins, leaving the secreted proteome undruggable. Thus, there remains a critical need to develop additional degradation technologies for extracellular proteins. Here, we have developed a novel targeted degradation platform, termed cytokine receptor targeting chimeras.

The debate on the scale range of the COI demonstrates how vague and imprecise the concept really is

Winburn and Wagner acknowledged that COIs can be equated with counties but also, and potentially even more significantly, with cities and neighborhoods. Lastly, Stephanopoulos added that “communities exist, and should be represented in the legislature, at different levels of generality,” and that more specific communities can form smaller-scale districts while broader ones can be captured by larger-scale districts like the congressional type. Thus this camp answers that the COI can take a wide range of scales. The opposing camp, however, has doubted that COIs can exist at certain scales. Chambers and Monmonier were skeptical that they hold at the smaller scales, suggesting that they are larger than neighborhoods. Chambers believed that such communities have to be large in order to command a majority in a district, but he was focusing on those relevant to the congressional type, which are almost always far larger than neighborhoods. Monmonier based his case on the improved transport and communication links that have allowed communities to form that are more fragmented and extend beyond one’s residential proximity. Gardner had trouble with the idea that there could be COIs at the larger scales, musing that a congressional district of half a million or more people could hardly be deemed a single, coherent community. May and Moncrief, in their commentary on districts in the Western United States, similarly questioned whether a meaningful COI could be tied to one of the sprawling districts in rural desert environments, though Steen suggested that the fact that such districts are so rural is enough to distinguish them as salient communities. In sum, this camp retorts that the COI exists only at a narrow range of scales and cannot be applied at the largest and smallest ends of the scale spectrum. The frequent references to the neighborhood in this literature on COIs raise the question of how related the two concepts are. They appear to be similar or at least related concepts, especially when one is focusing on the cognitive COI. But this relationship only seems to apply at a particular scale of COI; a large-scale COI made up of multiple counties is obviously not comparable to a neighborhood. Of course, one must first define what exactly a neighborhood is, which is itself an interesting and rich topic that has been approached in various ways. Scholars have given definitions ranging from more socioeconomic or demographic approaches to more cognitive ones.

The latter study adopted a cognitive approach by asking residents to indicate where they believed the boundaries of the Koreatown neighborhood to be. If one can define and identify a certain neighborhood as a region, either thematic or cognitive, one can then determine how well it corresponds to a particular scale of COI, and whether the two greatly overlap or are even identical. COIs may well exist at different scales, but they are different varieties of COI, with different meanings for residents. One can discover the nature of each scale of COI by recognizing it as a cognitive region. Conceptualizing COIs as cognitive regions offers the greatest potential to discover their meaningful extents, precisely because meaning is a cognitive construct. In this research, I pursue this by soliciting people’s beliefs about the extent of their COI, giving them the freedom to make it as big or small as they choose. Such a survey can reveal the scales people most commonly use to think of COIs, thereby identifying as precisely as possible a range of scales for these cognitive regions. One can also conceive of a scale of “sense of place” by which people have different levels or types of place attachment at different scales. For example, an individual might identify very strongly with his or her city but feel little connection to his or her county. Similarly, some people might identify more with their state than their country, while others might feel the opposite. One can even possess a strong “sense of place” at multiple scales simultaneously. Shamai demonstrated this in a study of Canadian students, finding that they held “nested allegiances” to three different levels of place: country, province, and metropolitan area. However, these students did not feel an equal degree of attachment toward each of these three scales. Rather, they felt a stronger sense of place toward their metropolitan area, followed by their country, and lastly their province. These findings have implications for COI research, because if people can identify with multiple levels of place simultaneously, they can certainly identify with multiple COIs while feeling different levels of attachment toward each. In addition to the COI criterion, the need to respect the boundaries of already existing administrative regions has long been recognized as an important objective for good redistricting. The requirement is currently used in places ranging from Japan to the United Kingdom to California.

While respecting clearly bounded administrative regions is easier to interpret than respecting the more vaguely bounded COIs, the two criteria may in fact be closely related. Counties and cities are often considered to be “vital, legal, and familiar communities of interest”. The residents of such jurisdictions “share a history and collective sense of identity” that help foster a genuine sense of community. Gardner contended that genuine communities arise where relevant ties form, but those bonds last only in jurisdictions with fixed boundaries. He argued furthermore that “common residency in a working, functioning, self-governing locality by itself can give rise to a political and administrative community of interest entitled to recognition. As the Colorado Supreme Court recently observed, ‘counties and the cities within their boundaries are already established as communities of interest in their own right, with a functioning legal and physical local government identity on behalf of citizens that is ongoing’”. Winburn and Wagner likewise identified counties as important COIs in the redistricting context, in large part because they play such a critical role in the electoral process, from registering voters to mailing election information to administering polling places. Bowen made a similar case with cities, as “residents of the same city share much in common—the same taxation levels, the same public problems, and the same municipal government”. These findings suggest that administrative regions may well contribute to the emergence of COIs as cognitive regions, and that the boundaries of the former may also serve as the boundaries of the latter. However, some scholars have cautioned against completely equating administrative regions with COIs. Winburn and Wagner recognized that “counties are [not] the only, or even always the most relevant, political community of interest for a citizen”. Stephanopoulos argued that the two are often different, as when interests and affiliations do not follow administrative boundaries, or when administrative regions contain multiple communities or only parts of communities. He did concede, however, that “the two may sometimes be functionally identical, both because [administrative regions] tend to be inhabited by people with similar socioeconomic characteristics, and because civic ties can foster a sense of kinship”. The consensus appears to be that administrative regions are at the very least useful proxies for COIs, if not in some sense meaningful communities themselves. Whether this is more the case for counties or cities likely depends on locational context; counties are probably more meaningful entities in rural areas than in urban areas.

My dissertation seeks to investigate the effect of both scale and administrative regions on people’s conceptions of their COI. I do so by conducting two studies. The first study seeks to determine the effects of three factors on the cognitive COIs that survey respondents depict: the extent of the map given to survey respondents, whether the boundaries of administrative regions are shown to them on the map, and whether they live in an urban or rural locale. This study is an experimental survey of residents of an urban study area and a rural study area, with the manipulated variable being the type of map that residents receive. There are six types of map, because there are three possible map extents, each in a version that shows administrative boundaries and one that does not. Participants in this first study respond by drawing freehand on the map three different areas representing their COI: the area that is definitely within their COI, the area that is probably within their COI, and the area that is possibly within their COI. Requiring a series of drawings enables me to achieve a secondary aim of this study—examining variation within respondents’ cognitive COIs by having them depict different levels of confidence, in the same vein as Montello et al. Another secondary aim is to explore how the cognitive COIs that respondents depict coincide with the existing electoral districts, as a function of scale. The second study seeks to determine the extent of the cognitive COIs that survey respondents depict when given free rein to make their region as large or small as they want. Participants respond to this second study by ranking predefined administrative regions on the map according to how confident they are that a given area is within their COI. They do so at three different map scales—one showing large-sized areas, one showing medium-sized areas, and one showing small-sized areas. Respondents also indicate how much they identify with the COI they define at each scale, on a five-point rating scale. This enables me to achieve a secondary aim of this study—investigating whether respondents identify with multiple nested COIs at different scales, and if they do, which ones they identify with the most. Like the first study, my second study achieves the additional secondary aim of exploring how the cognitive COIs that respondents depict coincide with the existing electoral districts, as a function of scale. Both studies together allow me to determine whether COIs exist as cognitive regions at multiple scales. If they do, then I can describe the nature of these regions at those different scales, particularly whether they reflect local districts, counties, and cities.

Focal therapy has the potential to improve the management of prostate cancer by reducing the side effects associated with radical treatment. While the safety and feasibility of FT strategies have been reported using cryoablation, focal laser ablation, and high-intensity focused ultrasound, long-term oncologic efficacy is unknown.
A critical barrier to robust testing of FT strategies is appropriate patient selection criteria, which are not clearly established. A recent FDA-AUA-SUO workshop on partial gland ablation highlighted this challenge, noting that “some [authors] regard [partial gland ablation] as an alternative to AS for low-risk cancers, whereas others view it as an alternative to radical therapy for selected, higher risk cancers.” Regardless of approach, there is broad agreement on the importance of assessment for FT using multi-parametric MRI followed by targeted biopsy. To clarify the impact of different patient selection criteria on FT eligibility, we retrospectively studied men who had received MRI/ultrasound fusion biopsy, incorporating both targeted and template biopsies. To confirm biopsy findings and to derive the accuracy of fusion biopsy in FT eligibility, we examined whole-organ concordance of eligibility assessment in a subset of patients who underwent radical prostatectomy. All men undergoing MRI/US fusion biopsy at UCLA between January 2010 and January 2016 were retrospectively screened for a suspicious lesion identified on mpMRI, which was found to contain CaP upon targeted biopsy. FT eligibility criteria, based on the NCCN intermediate-risk definition [8] and recent consensus guidelines, were applied. Figure 2 shows histological profiles for FT-eligible patients based on biopsy. Three different patterns of CaP are shown, each suitable for treatment by hemi-gland ablation or less. Men with biopsy-negative ROIs were considered ineligible for FT. Similarly, men without csCaP < 4 mm were also considered ineligible, regardless of the number of positive cores. All collection of clinical data was performed prospectively within a UCLA IRB-approved registry. The fusion biopsy method, which has been previously described, was unchanged throughout the study period. Briefly, within 2 months of biopsy, patients underwent 3T mpMRI with a body coil. MRI interpretation was conducted under the direction of a dedicated uroradiologist, and suspicious lesions were assessed according to UCLA and Prostate Imaging-Reporting and Data System criteria. MRI assessment was based on the UCLA assessment system, which pre-dates PI-RADS v1, and, after PI-RADS v2 was established, by both systems using the highest suspicion category found. At biopsy, images were registered and fused with real-time transrectal ultrasound to generate a 3D image of the prostate with delineated ROIs.

These diverse priorities will place important constraints on animal agriculture in the coming decades

Although the detailed reaction mechanism has not yet been identified, discovery of this distinct function of a methane-producing PLP-dependent enzyme could presage a breakthrough in the practical application of methanotrophs. Diversifying genetic regulatory modules can allow delicate control of synthetic pathways that are activated on demand according to host plant physiology. Fascinating potential targets for dynamic regulation are small molecules involved in plant–microbe interactions and the plant stress response. Ryu et al. recently constructed biosensors for natural and non-natural signaling molecules that enabled control of N fixation in various microbes. More recently, Herud-Sikimić et al. engineered an E. coli Trp repressor into a FRET-based auxin biosensor that undergoes a conformational change in the presence of auxin-related molecules but not L-tryptophan. Because the conformational change induced by L-tryptophan is a core function in the Trp operon, the engineered Trp repressor may allow auxin-dependent biosynthesis. Developing dynamic regulatory circuits for controlling the expression of PGP traits may help maintain the viability of engineered host microbes in pre-existing microbiomes and thereby facilitate their potential contributions to sustainable agriculture. In nature, plants interact with multiple PGPRs whose properties may work cooperatively to provide benefits. For example, Kumar et al. observed synergistic effects of ACC deaminase- and siderophore-producing PGPRs that enhanced sunflower growth. This result implies that layering PGP traits in a host strain under single or multiple regulatory circuits may maximize their advantages. Furthermore, microbiome engineering inspired by native PGPR colonization, for example through siderophore-utilizing ability, may open a new era for sustainable agriculture via customized PGPR consortia.

Agricultural science has been enormously successful in providing an inexpensive supply of high-quality and safe foods to developed and developing nations. These advancements have largely come from the implementation of technologies that focus on efficient production and distribution systems as well as selective breeding and genetic improvement of cultured plants and animals.

Although population growth in developed nations has reached a plateau, no slowdown is predicted in the developing world until about 2050, when the population of the world is expected to reach 9 billion. Meeting the global food demand will require nearly double the current agricultural output, and 70% of that increased output must come from existing or new technologies. The global demand for animal products is also growing substantially, driven by a combination of population growth, urbanization, and rising incomes. However, at present, nearly 1 billion people are malnourished. Animal products contain concentrated sources of protein, which have AA compositions that complement those of cereal and other vegetable proteins, and contribute calcium, iron, zinc, and several B group vitamins. In developing countries where diets are based on cereals or bulky root crops, eggs, meat, and milk are critical for supplying energy in the form of fats. In addition, animal-derived foods contain compounds that actively promote long-term health, including bioactive compounds such as taurine, l-carnitine, and creatine and endogenous antioxidants such as carnosine and anserine. Furthermore, those foods are a rich source of CLA, forms of which have anti-cancer properties, reduce the risk of cardiovascular disease, and help fight inflammation. Animal production will play a pivotal role in meeting the growing need for high-quality protein that will advance human health. Our technological prowess will be put to the test as we respond to a changing world and increasingly diverse stakeholders. Intensifying food production will likely be confounded by declining feedstock yields due to global climate change, natural resource depletion, and an increasing demand for limited water and land resources. Additionally, whereas the moral imperative to feed the malnourished people of the world is unequivocal, a well-fed, well-educated, and vocal citizenry in developed nations places a much greater emphasis on the environmental sustainability of production, the safety of food products, and animal welfare, often without regard for the impact on the cost of food. Despite these daunting challenges, the sheer magnitude of potential human suffering calls on us to assume the reins from our recently lost colleague, Norman Borlaug, and harness technological innovation within our disciplines to keep world poverty, hunger, and malnutrition at bay.

As was the case during the Green Revolution, advancements in genetics and breeding will provide a wellspring for a needed revolution in animal agriculture. Indeed, we have entered the era of the genome for most agricultural animal species. Genetic blueprints position us to refine our grasp of the relationships between genotype and phenotype and to understand the function of genes and their networks in regulating animal physiology. The tools are in hand for accelerating the improvement of agricultural animals to meet the demands of sustainability, increased productivity, and enhancement of animal welfare. The goals of animal genetic improvement are firmly grounded in the paradigm of animal production, which naturally refers to concepts of efficiency, productivity, and quality. Sustainability and animal welfare are central considerations in this paradigm; an inescapable principle is that the maximization of productivity cannot be accomplished without minimizing levels of animal stress. Furthermore, the definition of efficiency requires sustainability. Unnecessary compromises to animal well-being or sustainability are morally reprehensible and economically detrimental to consumers and producers alike. The vast majority of outcomes from genetic selection have been beneficial for animal well-being. Geneticists try to balance the enrichment of desirable alleles with the need to maintain diversity because they are keenly aware of the vulnerability of monoculture to disease. Genetic improvement programs must always conserve genetic diversity for future challenges, both as archived germplasm and as live animals. However, unanticipated phenotypes occasionally arise from genetic selection for 2 reasons. First, every individual carries deleterious alleles that are masked in the heterozygous state but can be uncovered by selective breeding. Second, the linear organization of chromosomes leads to certain genes being closely linked to each other on the DNA molecules that are transmitted between generations. Thus, blind selection for an allele that is beneficial to 1 trait also enriches for all alleles that are closely linked to it, and, either through pleiotropy or linkage disequilibrium, undesirable correlated responses in other traits may occur.

Geneticists are aware of this and closely monitor the health and well-being of populations that are under selection to ensure that any decrease in fitness is detected and that ameliorative actions are taken to correct problems, either by eliminating carriers from production populations, altering the selection objective to facilitate improvement in the affected fitness traits, or introducing beneficial alleles by crossbreeding. Increasingly precise molecular tools now allow the rapid identification of genetic variants that cause single-gene defects and facilitate the development of DNA diagnostics to serve in genetic management plans that advance the production of healthy animals. Whole-genome genotyping with high-density SNP assays will enable the rapid determination of the overall utility of parental lines in a manner that is easily incorporated into traditional quantitative genetic improvement programs. The approach is known as genomic selection and essentially allows an estimation of the genetic merit of an individual by adding together the positive or negative contributions of alleles across the genome that are responsible for the genetic influence on the trait of interest. Under GS, genetic improvement can be accelerated by reducing the need for performance testing and by permitting an estimation of the genetic merit of animals outside currently used pedigrees. Genomic selection also provides for the development of genetic diagnostics using experimental populations, which may then be translated to commercial populations, allowing, for the first time, the opportunity to select for traits such as disease resistance and feed efficiency in extensively managed species such as cattle. The presence of genotype × environment interactions will also require the development of experimental populations replicated across differing environmental conditions to enable global translation of GS. The speed with which the performance of animals can be improved by GS is determined by the generation interval, litter or family size, the frequency of desirable alleles in a population, and the proximity on chromosomes of good and bad alleles. Although predicting genetic merit using DNA diagnostics may be less precise than directly testing the performance of every animal or their offspring, the reduction in generation interval by far offsets this. For example, in dairy populations, the rate of genetic improvement is expected to double with the application of GS. Preliminary results from the poultry industry suggest that GS focused on leg health in broilers and livability in layers can rapidly and effectively improve animal welfare. Although price constraints currently limit the widespread adoption of high-density SNP genotyping assays in livestock species, low-cost, reduced-subset assays containing the most predictive 384 to 3,000 SNP are under development in sheep, beef, and dairy cattle.
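To make the additive logic of genomic selection concrete, the sketch below computes a genomic estimated breeding value (GEBV) as the sum of SNP allele counts weighted by previously estimated marker effects and ranks candidate animals by it. The panel size, marker effects, and genotypes are entirely hypothetical stand-ins; in practice the marker effects are estimated from a large training population (for example, with GBLUP or Bayesian regression), which is not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

n_snps, n_candidates = 1_000, 5          # hypothetical reduced-subset panel
# Marker effects would come from a training population; here they are simulated.
marker_effects = rng.normal(0.0, 0.05, n_snps)

# Genotypes coded as 0, 1, or 2 copies of the reference allele at each SNP.
genotypes = rng.integers(0, 3, size=(n_candidates, n_snps))

# GEBV: sum over the genome of allele count times estimated marker effect.
gebv = genotypes @ marker_effects

ranking = np.argsort(gebv)[::-1]         # rank candidates by genetic merit
for rank, idx in enumerate(ranking, start=1):
    print(f"rank {rank}: candidate {idx}, GEBV = {gebv[idx]:+.3f}")
```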

These low-cost assays are expected to be rapidly adopted and will be expanded in content as the price of genotyping declines. Animal selection based on GS is also expected to reduce the loss of genetic diversity that occurs in traditional pedigree-based breeding, because the ability to obtain estimates of genetic merit directly from genotypes avoids restricting selection to the currently used parental lineages. Also, despite the increase in the rate of genetic improvement, selection for complex traits involving hundreds or thousands of genes will not result in the rapid fixation of desirable alleles at all of the underlying loci. Whereas GS will accelerate animal improvement in the post-genomic era, parallel and overlapping efforts in animal improvement based on genome-informed genetic engineering must ensue to ensure that productivity increases keep pace with the expanding world population. The tools of functional genomics and the availability of genome sequences provide detailed information that can be used to engineer precise changes in traits, as well as to monitor any adverse effects of such changes on the animal. These tools are also enabling a deeper understanding of gene function and the integration of gene networks into our understanding of animal physiology. This understanding has begun to identify major-effect genes and critical nodes in genetic networks as potential targets for GE. The genomics revolution has been accompanied by a renaissance in GE technologies. Novel genes can be introduced into a genome, and existing genes can either be inactivated or have their expression tuned to desirable levels using recently developed RNA interference. The specificity and efficiency of these approaches are expected to continue to improve. The technical advancements in GE are so significant that Greger advocated that scrutiny of the procedures for generating transgenic farm animals is undeserved and that discussion should focus on the welfare implications of the desired outcome instead of unintended consequences of GE. This position is also reflected by the rigorous regulatory mechanism established by the FDA for premarket approval of GE animals, which considers the risks of a given product to the environment and the potential impact on the well-being of animals and consumers. Indeed, this review mechanism was recently adopted as an international guideline by Codex Alimentarius, which has already found GE to be a safe and reliable approach to the genetic improvement of food animals. In addition, guidelines for the development and use of GE animals that promote good animal welfare, enhance credibility, and comply with current regulatory requirements have been developed as stewardship guidance. The stewardship guidance assists industry and academia in developing and adopting stewardship principles for conducting research and for developing and commercializing safe and efficacious agricultural and biomedical products from GE animals for societal benefit. Both GS and GE are viable, long-term approaches to genetic improvement, but when should one approach be employed over the other? Genes are not all equal in their effects upon changes in phenotype. The products encoded by some genes have major effects on biochemical pathways that define important characteristics or reactions in an organism. Other genes have lesser, but sometimes still important, effects.
In general, genetic modification by GE is used to add major-effect genes, whereas genetic selection is applied to all genes, including the far larger number of lesser-effect genes that appear to be responsible for about 70% of the genetic variation within a given trait. One of the most significant advantages of GE is the ability to introduce new alleles that do not currently exist within a population, in particular where the allele substitution effect would be very large. This approach can include gene supplementation and genome editing, the latter enabling the precise transfer of an alternative allele without any other changes to the genome of an animal.

The most common application of forward osmosis treatment methods is seawater desalination

The forward osmosis desalination process usually includes osmotic dilution of the draw solution and freshwater production from the diluted draw solution. There are two types of forward osmosis desalination, distinguished by the water production method. One uses a thermally decomposable draw solution that breaks down into volatile gases upon heating; these gases can be recycled during the thermal decomposition to regenerate a draw solution with high osmotic pressure. The other uses forward osmosis for filtration or dilution of water. For instance, the combination of reverse osmosis and forward osmosis can be used for drinking water treatment or brine removal, and forward osmosis can also fully or partly replace ultrafiltration under certain circumstances. Recent studies in materials science have also shown that forward osmosis can be used to control drug release in the human body, and it can control food concentration in the production phase. Regarding the semi-permeable membrane used in forward osmosis, the tubular membrane is more functional for several reasons: it allows solution to flow on both sides of the membrane, it maintains high hydraulic pressure without deformation because it is self-supporting, and it is easier to fabricate while retaining high flexibility and density. Although a substantial amount of energy is required to treat seawater using forward osmosis technology, its potential has been demonstrated through bench-scale experiments, indicating that further investigation is needed to evaluate its commercial application. Seawater desalination has provided freshwater for over 6% of the world's population. One common model of forward osmosis seawater treatment uses a hollow-fiber membrane. The key parameter in the hollow-fiber membrane model is the minimum draw solution flow rate.
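As a rough illustration of the driving force in forward osmosis, the sketch below uses the van 't Hoff approximation, pi = iMRT, to compare the osmotic pressure of a seawater-like NaCl feed with that of a more concentrated draw solution. The concentrations are illustrative assumptions only, and the approximation ignores the non-ideal behavior of concentrated brines.

```python
# Van 't Hoff estimate of osmotic pressure: pi = i * M * R * T
# i: van 't Hoff factor (2 for fully dissociated NaCl), M: molarity (mol/L),
# R: gas constant, T: absolute temperature.
R = 0.083145   # L*bar/(mol*K)
T = 298.15     # K

def osmotic_pressure_bar(molarity, i=2):
    return i * molarity * R * T

MW_NACL = 58.44   # g/mol

feed_gL = 35.0    # seawater-like NaCl content, g/L (illustrative)
draw_gL = 70.0    # hypothetical, more concentrated draw solution, g/L

pi_feed = osmotic_pressure_bar(feed_gL / MW_NACL)
pi_draw = osmotic_pressure_bar(draw_gL / MW_NACL)

print(f"feed osmotic pressure : {pi_feed:6.1f} bar")
print(f"draw osmotic pressure : {pi_draw:6.1f} bar")
print(f"net driving force     : {pi_draw - pi_feed:6.1f} bar "
      "(water moves from feed to draw)")
```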

When the flow rate increases, the energy requirement increases as well. In an ideal forward osmosis process, CDO and CFI should be equal. Figure 2-3 below shows the schematic diagram of the forward osmosis membrane module. To assess the energy consumption in the FO process, the solution concentrations and flow directions of the module should be determined first. The data support that the energy required for pumping the draw solution is less than that for pumping the feed solution. To determine the effects of the direction of hydraulic pressure in the module, different modules with various solution concentrations and flow rates are designed to compare energy efficiency. In conclusion, the results demonstrate that to reduce the energy consumption of seawater desalination, the diameters of the FO module need to be optimized. Also, the flow rates and concentrations of the draw and feed solutions play a major role in energy efficiency. The module results illustrate that when a high-flow-rate feed solution is on the shell side and a low-flow-rate draw solution is on the lumen side, the system consumes less energy. Another vital application of forward osmosis is food concentration and enrichment. Multiple studies have concluded that FO is efficient for dewatering in food production. Compared with traditional concentration methods, such as pressure-driven membranes, FO requires less energy and causes less nutrient loss. Nutrient loss here refers to the loss of monomers such as fructose. A closed-loop feed solution and draw solution system is built as shown in Figure 2-4 below. Garcia-Castello tested two membranes in the system above: a flat-sheet cellulosic membrane and an AG reverse osmosis membrane. AG membrane refers to a membrane designation manufactured by Sterlitech. The results show that the AG membrane has a higher salt rejection rate. During the procedure, once the water flux reaches a constant value, a stock feed solution is added to the tank to reach the next feed solution concentration.

At the end of the experiment, the highest feed solution concentration was 1.65 M sucrose. Comparing the performance of the different membranes, the AG membrane yields better results when concentrating sucrose solution, owing to its thicker support structure. Temperature also has a significant impact on water flux; usually, a higher temperature yields higher water fluxes. Compared with the concentration factor of RO, FO achieves a better concentration factor of 5 while requiring much less energy.

Fertilizer-drawn forward osmosis applies the forward osmotic dilution of fertilizer draw solutions. This technology can be used for direct agricultural irrigation. Fortunately, most fertilizers can be used as draw solutions for FDFO. Fertilizer-drawn forward osmosis shares the same principle as forward osmosis: water from the feed solution flows through the semi-permeable membrane into the fertilizer draw solution under the natural osmotic pressure difference. Additional treatments might be required to reach the water quality needed for different purposes. Regarding the nitrogen-removal purpose of this review, operating conditions such as feed solution concentration, feed solution flow rate, and specific water flux can affect the effectiveness of nitrogen removal. Fertilizer-drawn forward osmosis has common applications in water recycling and fertigation. Nanofiltration is a viable solution for diluting the fertilizer draw solution for recycling purposes. Fertilizer-drawn forward osmosis technology has used brackish water, brackish groundwater, treated coal mine water, and brine water as feed solutions. In other words, water with relatively low total dissolved solids can serve as the feed solution for fertilizer-drawn forward osmosis. Moreover, fertilizer-drawn forward osmosis is also effective for biogas energy production when it is applied to an anaerobic membrane bioreactor as a hybrid process. In conclusion, fertilizer-drawn forward osmosis is effective for sustainable agriculture and water reuse. Its considerable recovery rate allows it to supply the hydroponics component of an anaerobic membrane bioreactor system.

Due to the scarcity of fresh water in arid areas, hydroponics has been used for vegetable production. In hydroponics, a subset of hydroculture, crops are cultivated in a soilless environment, with their roots exposed to mineral nutrient solutions or fertilizers. Without soil culture, this type of agricultural production avoids certain problems associated with traditional crop production, including soil pollution, low fertilizer-utilization efficiency, and the spread of pathogens. This technology also allows the production of crops in arid, infertile, or densely populated areas. However, economic cost aside, this technique requires large amounts of both fresh water and fertilizer compared with soil-based crop production. This can easily cause detrimental effects to the environment, such as water waste and contamination, with excess nitrogen, potassium, and phosphate resulting in eutrophication. To balance cost, efficiency, and quality, reverse osmosis and ultrafiltration are more advanced and general approaches compared with biological seawater treatments. In terms of treating seawater, hydroponic nutrient solutions demonstrate performance similar to that of other aqueous solutions of lower-molecular-weight salts.
By utilizing certain membrane technologies, treated effluent has a reduced pathogen load and retains the ability to be integrated into the fertigation system for direct application. The potential of the fertilizer-drawn forward osmosis process has been investigated for brine removal treatment and water reuse through energy-free osmotic dilution of the fertilizer for hydroponics.

Nanofiltration is a pressure-driven membrane process that removes dissolved solutes. The membrane has pores ranging from 1 to 10 nanometers, hence the name “nanofiltration.” Nanofiltration uses a principle similar to that of reverse osmosis: it is a water purification process that requires pressure, and its membranes are permeable to ions. Nanofiltration is practical for removing organic substances from coagulated surface water, and it is also economical and environmentally sustainable. In terms of the size and mass of the solutes removed, nanofiltration membranes usually operate in the range between reverse osmosis and ultrafiltration, removing organic molecules with molecular weights from 200 to 400. Nanofiltration membranes can also effectively remove other pollutants, including endotoxins/pyrogens, pesticides, antibiotics, soluble salts, etc.
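The salt and solute removal rates discussed here and in the next paragraph are usually reported as the observed rejection, R = 1 − Cp/Cf, where Cf and Cp are the feed and permeate concentrations. The sketch below computes it for a few hypothetical feed/permeate pairs; the solutes and concentrations are invented examples chosen to fall within the typical divalent and monovalent ranges quoted below, not measured data.

```python
# Observed rejection of a nanofiltration membrane: R = 1 - Cp / Cf,
# where Cf is the feed concentration and Cp the permeate concentration.
# Hypothetical example values chosen to land in the typical divalent
# (~90-98%) and monovalent (~20-80%) rejection ranges.
samples = [
    # (solute, feed mg/L, permeate mg/L)
    ("MgSO4 (divalent anion)",   2000.0,  80.0),
    ("NaCl (monovalent anion)",  2000.0, 900.0),
    ("CaCl2 (monovalent anion)", 2000.0, 600.0),
]

def observed_rejection(feed, permeate):
    return 1.0 - permeate / feed

for solute, cf, cp in samples:
    r = observed_rejection(cf, cp)
    print(f"{solute:26s} rejection = {100 * r:5.1f} %")
```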

The removal rate varies with the type of salt. For salts containing divalent anions, such as magnesium sulfate, the removal rate is around 90% to 98%. However, for salts containing monovalent anions, such as sodium chloride or calcium chloride, the removal rate is lower, between 20% and 80%. The pressure applied across the membrane is typically 50 to 225 psi. One of the advantages of nanofiltration is that it uses lower pressure and sustains higher water flux. In addition, it has highly selective rejection properties. Typical applications for nanofiltration membrane systems include the removal of color and total organic carbon from surface water, the reduction of total dissolved solids, and the removal of hardness or radium from well water.

In 1952, Congress passed the Saline Water Conversion Act, which aimed at resolving the shortage of freshwater and the excessive use of underground water. Two years after the act, the first desalination plant in the United States was built in 1954 at Freeport, Texas. The plant is still operating today and is undergoing improvement. The U.S. Department of Agriculture predicts a supply of 10 million gallons of fresh water per day by 2040. The Claude “Bud” Lewis Carlsbad Desalination Plant is the largest desalination plant in the U.S.; it delivers almost 50 million gallons of fresh water to San Diego County daily. Due to local conditions, desalination is prevalent in regions such as the Middle East, home to the largest desalination plant worldwide in terms of freshwater production. With 17 reverse osmosis units and 8 multi-stage flash units, that plant can produce more than 1,400,000 cubic meters of fresh water per day. In 1960, there were only 5 desalination plants in the world. By the mid-1970s, as the conditions of many rivers deteriorated, around 70% of the world's population could not be guaranteed sanitary and safe freshwater. As a result, water desalination has become a strategic choice commonly adopted by many countries to resolve the shortage of fresh water, and its effectiveness and reliability have been widely recognized. The limited supply and uneven distribution of freshwater resources have been among the most prevalent and serious problems faced by people living in arid areas. To reduce their severity, desalination of saline water or wastewater has been a constantly researched and applied solution. In many arid regions, the desalination of seawater is regarded as a promising solution. Although seawater accounts for around 96.5% of global water resources, the global-scale application of seawater desalination is hindered by cost, both financially and in terms of energy. With the development of energy-saving technologies for seawater desalination, it is viable to use saline water, such as seawater and brackish water, to produce freshwater for industries and communities. Commonly used methods require water pumping and a considerable amount of energy. As a result, forward osmosis is receiving increasing interest in this field, since the FO process requires much less energy. A research team at Monash University in Australia has demonstrated a solar-assisted FO system for saline water desalination using a novel draw agent. The team, led by Huanting Wang and George P. Simon, investigated the potential of a thermoresponsive bilayer hydrogel-driven FO process that utilizes solar energy to produce fresh water from saline water.
This forward osmosis process is equipped with a new draw agent: a thermoresponsive hydrogel bilayer. Compared with one of the most commonly used draw agents, this dual-layered hydrogel, made of sodium acrylate and N-isopropylacrylamide (NIPAM), induces osmotic pressure differences without the need for regeneration. The thermoresponsive hydrogel layers generate a high swelling pressure when absorbing water from highly concentrated saline. During testing, the researchers used a solution of 2,000 ppm sodium chloride, which is a standard NaCl concentration for brackish water. Water passes through the semipermeable membrane and is drawn from the saline solution into the absorptive layer. The hydrogel can absorb up to 20 times its regular volume of water. Next, the thermoresponsive hydrogel composed only of NIPAM absorbs water from the first layer. When this dewatering layer is heated to 32 °C, the lower critical solution temperature, the gel collapses and squeezes out the absorbed fresh water. Draw agents like ammonium bicarbonate must be heated to 60 °C and then distilled at a lower temperature for regeneration. By focusing sunlight with a Fresnel lens, the concentrated solar energy can help the dewatering flux reach 25 LMH after 10 minutes, which is similar to the water flux obtained with ammonium bicarbonate.

Network analysis methods are used to analyze the resulting relational structure of the mental model

Furthermore, 15N-Glu feeding experiments indicated that tea plants can absorb exogenously applied amino acids, which can then be used for N assimilation. In addition, we demonstrated that CsLHT1 and CsLHT6 are involved in the uptake of amino acids from the soil by the tea plant. It has been suggested that tea plants grown in organic tea plantations are subjected to N-deficient conditions due to the absence of inorganic fertilizer. Compared with conventional tea, tea produced under organic management systems contains higher levels of catechins, which are linked to the antioxidant effects of tea infusions. However, organic tea contains lower levels of amino acids, which are also important compounds in terms of tea quality. The decay of large amounts of pruned tea shoots may contribute significantly to soil amino-acid levels in organic tea plantations; the decomposition of such organic matter and the recycling of nutrients depend largely on soil fungi. Interestingly, the long-term application of high amounts of N fertilizer was found to reduce soil fungal diversity in tea plantations. This could likely account for why we observed higher amino-acid contents in the organic tea plantation compared with the conventional tea plantation. This implies a more important role for soil amino acids in tea plants grown in organic tea plantations. It has been reported that, in addition to inorganic N, amino acids can support tree growth. As a perennial evergreen tree species, the tea plant can also use organic fertilizer. However, the role of soil amino acids in tea plant growth and metabolism has not yet been investigated. In this study, we observed that the tea plant could take up 15N-Glu, and Glu feeding increased the amino-acid contents in the roots. This revealed that tea plants can take up amino acids from the soil for use in the synthesis of other amino acids. In our study, nine amino acids were detected in the soil of an organic tea plantation, and the utilization of exogenous Glu was analyzed in detail. In future studies, it will be important to test the roles of various mixtures of amino acids for use as fertilizers for the growth and metabolism of the tea plant.

The molecular mechanism underlying the uptake of amino acids from the soil by trees has not been thoroughly studied. In this study, we identified seven CsLHTs that were grouped into two clusters, which is consistent with the LHTs in Arabidopsis. CsLHT1 and CsLHT6 in cluster I have amino-acid transport activity, which is also consistent with AtLHT1 and AtLHT6. Moreover, these two genes were highly expressed in the roots, and both encode plasma membrane-localized proteins. These findings support the hypothesis that CsLHT1 and CsLHT6 play important roles in amino-acid uptake from the soil. However, the members of cluster II, CsLHT2, CsLHT3, CsLHT4, CsLHT5, and CsLHT7, did not display amino-acid transport activity. Interestingly, except for AtLHT1 and AtLHT6, no other AtLHTs have been shown to transport amino acids. It is possible that cluster II LHTs are involved in the transport of metabolites other than amino acids. For example, AtLHT2 was recently shown to transport 1-aminocyclopropane-1-carboxylic acid, a biosynthetic precursor of ethylene, in Arabidopsis. LHT1 has been thoroughly characterized as a high-affinity amino-acid transporter and has a major role in the uptake of amino acids from the soil in both Arabidopsis and rice. In contrast, there is only one report on the function of AtLHT6; it is highly expressed in the roots, and the atlht6 mutant presented reduced amino-acid uptake from media when supplied with a high amount of amino acids. Although the authors did not characterize the amino-acid transport kinetics of AtLHT6, their results are consistent with this protein being a low-affinity amino-acid transporter. In the present study, we characterized CsLHT1 as a high-affinity amino-acid transporter with a capacity to transport a broad spectrum of amino acids. By contrast, CsLHT6 exhibited a much lower affinity for 15N-Glu, and it also displayed higher substrate specificity. Considering that amino-acid concentrations in the soil of tea plantations are low, CsLHT1 may play a more important role than CsLHT6 in the uptake of amino acids from the soil into tea plants. However, in soils, amino-acid contents can be much higher locally, particularly in the vicinity of decomposing animal or vegetable matter. In this situation, CsLHT6 may play an important role in the uptake of amino acids. In addition, CsLHT6 is also highly expressed in the major veins of mature leaves, suggesting a role for CsLHT6 in amino-acid transport within tea leaves.
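The affinity argument in this paragraph can be made concrete with Michaelis–Menten uptake kinetics, v = Vmax·S/(Km + S): at substrate concentrations well below Km, a low-Km (high-affinity) transporter dominates uptake even if its Vmax is smaller, while at high concentrations a low-affinity, high-capacity transporter can take over. The Km and Vmax values below are hypothetical placeholders for illustration only, not measured parameters of CsLHT1 or CsLHT6.

```python
# Michaelis-Menten uptake rate: v = Vmax * S / (Km + S)
# High-affinity transporter = low Km; low-affinity transporter = high Km.
def uptake_rate(s_uM, vmax, km_uM):
    return vmax * s_uM / (km_uM + s_uM)

# Hypothetical kinetic parameters (illustrative only).
transporters = {
    "high-affinity (CsLHT1-like)": {"vmax": 1.0, "km_uM": 10.0},
    "low-affinity (CsLHT6-like)":  {"vmax": 3.0, "km_uM": 500.0},
}

for s in (1.0, 10.0, 1000.0):  # soil amino-acid concentration, uM
    print(f"\nsubstrate = {s:7.1f} uM")
    for name, params in transporters.items():
        v = uptake_rate(s, params["vmax"], params["km_uM"])
        print(f"  {name:28s} v = {v:.3f} (relative units)")
```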

Given that protocols for the efficient production of transgenic tea cultivars are lacking, CsLHT1 and CsLHT6 expression cannot be modulated by either overexpression or CRISPR/Cas9 gene editing. However, in China, there is an abundance of tea plant germplasm resources. CsLHT1 and CsLHT6 are potential gene markers for selecting germplasms that can efficiently take up amino acids. Moreover, germplasms with high CsLHT1 or CsLHT6 expression can be used as rootstocks for grafting with elite cultivars to improve the ability of these cultivars to take up amino acids from the soil. Alternatively, these germplasms can be utilized through gene introgression. Grafted lines or novel cultivars that can efficiently take up amino acids should be better suited for use in organic tea plantations than in conventional tea plantations.

One of the core goals of sustainability science is understanding how practitioners make decisions about managing social-ecological systems. In the context of sustainable agriculture, an important research objective is quantifying the economic, environmental, and social outcomes of different farm management practices. However, it is equally important to understand how farmers conceptualize the idea of sustainability and translate it into farm management decisions. The innumerable and often vague definitions of sustainable agriculture make this a challenging task, and fuel the debate about linking sustainability knowledge to action. This debate will remain largely academic without empirical analysis of how farmers think about sustainability in real-world management contexts. These questions are relevant not only to agriculture but to all social-ecological systems and the knowledge networks that are in place to support decision making. This paper addresses these issues by analyzing farmer “mental models” of sustainable agriculture. Mental models are empirical representations of an individual’s or group’s internally held understanding of their external world. Mental models reflect the cognitive process by which farmer views about sustainable agriculture are translated into farm management decisions and practice adoption. Our mental models were constructed from content coding of farmers’ written definitions of sustainable agriculture and were analyzed using network methods to understand the relational nature of the different concepts making up a mental model.

We test three hypotheses about mental models of sustainable agriculture. First, mental models are hierarchically structured networks in which abstract sustainability goals occupy central positions and are linked to peripheral, concrete strategies from which practitioners select to attain those goals. Second, goals are more likely to be universal across geographies, whereas strategies tend to be adapted to the specific context of different social-ecological systems. Third, practitioners who subscribe to central concepts in the mental model will more frequently exhibit sustainability-related behaviors, including participation in extension activities and adoption of sustainable practices. Our mental model data were drawn from farmers in three major American Viticultural Areas in California: Central Coast, Lodi, and Napa Valley. California viticulture is well suited for studying sustainability. Local extension programs have used the concept of sustainability since the 1990s, and farmer participation in sustainability programs is strong. Furthermore, viticulture is geographically entrenched, with viticultural areas established on the basis of their distinct biophysical and social characteristics. Hence, we expect wine grape growers to have well-developed mental models of sustainability, with geographic variation reflecting social-ecological context.

Mental models are empirical representations of an individual’s or group’s internally held understanding of the external world. Group mental models, which are the focus of this paper, represent the collective knowledge and understanding of a particular domain held by a specific population of individuals. Mental models are an empirical snapshot of the cognitive process that underpins human decision making and behavior. They complement more traditional approaches to understanding environmental behavior by highlighting the interdependent relationships among attitudes, norms, values, and beliefs. For example, the Values-Beliefs-Norms model of environmental behavior hypothesizes a causal chain running from broad ecological values, to beliefs about environmental issues, to more specific behavioral norms. The network approach used here shows how these types of more general and specific concepts are linked together in a hierarchical and associative structure.

Mental models have evolved into an important area of research in environmental policy, risk perception, and decision making. A growing number of researchers are using mental models to better understand decision making in the context of social-ecological systems. Two approaches that are especially relevant to this paper are Actors, Resources, Dynamics, and Interactions (ARDI) and Consensus Analysis (CA). The ARDI approach uses participatory research methods to construct a group mental model of the interactions among stakeholders, resources, and ecological processes. The final product is a graphic conceptualization of how the group perceives the social-ecological system, its components, and their place in it, which can be used to inform management strategies. The CA approach relies on similar data-collection techniques to elicit a group mental model that captures stakeholders’ beliefs and values pertaining to how the social-ecological system should be managed and for what purpose. The mental models are then analyzed using quantitative methods to assess agreement among individuals and identify points of consensus.
Along with addressing research questions about practitioner knowledge and decision making, both approaches have been used to facilitate multi-stakeholder management of social-ecological systems. This paper conceptualizes group mental models as “concept networks” composed of nodes representing unique concepts and ties representing associations among concepts. The concept network approach differs from ARDI and CA in that network analysis methods are used to analyze the structure of mental models and to measure the importance of individual concepts based on their position in the concept network. This approach follows from Carley’s work, which is grounded in the theoretical argument that human cognition operates in an associative manner.

When a given concept is presented to the individual, memory is searched for that concept, ties between the concept and associated concepts are activated, and associated concepts are retrieved. The more associations a given concept has, the more likely it is to be recalled. Highly connected concepts serve as cognitive entry points for accessing a constellation of associated ideas. We elicited our mental models from the written text of farmers’ definitions of sustainable agriculture, and follow Carley in arguing that written language can be taken as a symbolic expression of human knowledge. It is important to note that our mental models deviate from Carley’s in that the associations among concepts are nondirectional and do not represent causality between concepts. Ties in our concept network represent concept co-occurrence, where two concepts occurred together in a single definition of sustainable agriculture. See Methods for more details.

Hypothesis 1 is that mental models are hierarchically structured, with abstract concepts constraining the cognitive associations among more concrete concepts. For example, practitioners who define sustainability primarily as environmental responsibility versus economic viability may evaluate the benefits and costs of management practices with different criteria. This perspective is related to models of political belief systems, in which specific attitudes on public policy issues are predicted by general beliefs about policies and core values. Construal-level theory also suggests that hierarchical belief systems contain abstract, superordinate goals related to subordinate beliefs about the actions needed to achieve them. The hierarchical structure reflects a basic principle of cognitive efficiency in taxonomic categorization, where more abstract concepts provide cognitive shortcuts to retrieve specific linked attributes. The concepts making up mental models of sustainability can be divided into two basic types, each with a different level of abstraction: goals and strategies. Abstract goals are desirable properties, attributes, and characteristics of a sustainable system to be realized; examples taken from this study include environmental responsibility, economic viability of the farm enterprise, continuation into the future, and soil health and fertility. Strategies are more concrete and include practices or approaches that are thought to contribute to the realization of abstract goals.
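As a concrete illustration of how a group mental model can be treated as a concept network, the sketch below builds a co-occurrence network from a few hypothetical content-coded definitions and ranks concepts by degree centrality, using the networkx library (a tooling choice of this sketch, not necessarily what the study used). It is a minimal sketch only: the goal concepts are taken from the examples mentioned in the text, while the strategy codes ("cover crops", "reduced inputs"), the data, and the centrality measure are stand-ins for illustration.

```python
import networkx as nx
from itertools import combinations
from collections import Counter

# Hypothetical content-coded definitions: each farmer's written definition of
# sustainable agriculture reduced to a set of concept codes.
coded_definitions = [
    {"environmental responsibility", "economic viability", "cover crops"},
    {"economic viability", "continuation into the future", "reduced inputs"},
    {"environmental responsibility", "soil health", "cover crops"},
    {"soil health", "economic viability", "reduced inputs"},
]

# Ties are concept co-occurrences within a single definition, weighted by
# how many definitions the pair appears in together.
edge_counts = Counter()
for concepts in coded_definitions:
    for pair in combinations(sorted(concepts), 2):
        edge_counts[pair] += 1

G = nx.Graph()
for (a, b), weight in edge_counts.items():
    G.add_edge(a, b, weight=weight)

# Degree centrality as one simple measure of how central a concept is in the
# group mental model: highly connected concepts are candidate "cognitive entry points".
for concept, score in sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1]):
    print(f"{concept}: {score:.2f}")
```

In a structure consistent with Hypothesis 1, abstract goal concepts such as economic viability would accumulate the most co-occurrence ties and therefore the highest centrality, while concrete strategies would sit at the periphery.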

It was found that the rate of cortical death was faster in hexaploid wheat and positively associated with root age.

The present study was conducted to address the dosage effect of the 1RS translocation in bread wheat. We used wheat genotypes that differ in their number of 1RS translocations in a spring bread wheat ‘Pavon 76’ genetic background. For generating F1 seeds, Pavon 1RS.1AL was the preferred choice because of its better root biomass performance relative to the other 1RS lines. Here, we report the dosage effect of the 1RS chromosome arm on the morphology and anatomy of wheat roots. The results from this study validate previous reports of genes for rooting ability on the 1RS chromosome arm, and also provide evidence for the presence of genes affecting root anatomy on 1RS. From previous chapters of this dissertation and earlier studies, it was clear that a gene present on the 1RS chromosome arm affects root traits in bread wheat, but there was no report on the chromosomal localization of any root anatomical trait in bread wheat. The purpose of this study was to look for variation in root morphology and anatomy among different wheat genotypes and then to determine how these differences relate to different dosages of 1RS in bread wheat. This study led to two main conclusions: 1) F1 hybrids showed a heterotic effect for root biomass, and there was an additive effect of 1RS arm number on the root morphology of bread wheat; 2) there was a specific developmental pattern in the root vasculature from top to tip in wheat roots, and 1RS dosage tended to affect root anatomy differently in different regions of the seminal root. Further, the differences in root morphology, and especially anatomy, among the genotypes have a specific bearing on their ability to tolerate water and heat stress. The effect of the number of 1RS translocation arms in bread wheat was clearly evident from the averaged mean values for root biomass: RA1 and RAD4 ranked highest, while R0 ranked at the bottom.

These results support previous studies on the performance of wheat genotypes carrying the 1RS translocation, in which 1RS wheats performed better for grain yield but similarly for shoot biomass. Genotype RD2 performed only slightly better than R0 for root biomass because of its poor performance in one season; it showed better rooting ability in the other three seasons. Here, all of the genotypes with 1RS translocations showed higher root biomass than R0, which carries a normal 1BS chromosome arm. The data in this study suggest two types of effects of 1RS on wheat roots. The first is an additive effect: root biomass increased as 1RS dosage increased from zero to two and then to four. The second is a heterotic effect of 1RS on root and shoot biomass: mid-parent heterosis (MPH, the percentage deviation of the F1 from the mean of the two parents) and high-parent heterosis (HPH, the deviation from the better parent) were higher for root biomass than for shoot biomass, further indicating a more pronounced effect of 1RS on root biomass than on shoot biomass. Significant positive heterosis has been observed for root traits among wheat F1 hybrids, with twenty-seven percent of genes differentially expressed between hybrids and their parents, and differential gene expression has been suggested to play a role in root heterosis of wheat and other cereal crops. In a recent molecular study of heterosis, it was speculated that upregulation of TaARF, an open reading frame encoding a putative wheat ARF protein, might contribute to the heterosis observed in wheat root and leaf growth.

There is a large void in root research involving the study of root anatomy in wheat as well as in other cereal crops. Most of the anatomical literature is limited either to root anatomy near the base of the root or to the root tip in young seedlings. There is still a general lack of knowledge about the overall structure and pattern of the whole root vasculature during later stages of growth in cereals, especially wheat. In the present study, root anatomical traits were studied at the mid-tillering stage in the primary seminal root of wheat genotypes containing different dosages of 1RS translocation arms.

Root sections were made from three regions along the length of the root, viz. the top of the root, the middle of the root, and the root tip, to obtain an overview of the complete structure and pattern of root histology relative to differences in 1RS dosage. Comparison of the different regions of a genotype’s root showed a transition from a higher metaxylem vessel number and CMX area in the top region of the root to a single central metaxylem vessel in the root tip. The diameter of the stele also became narrower toward the root tip as the roots grew into deeper layers of soil. In the root tip, only the central metaxylem vessel diameter and area were traceable, as the other cell types were still differentiating. This developmental pattern was consistent across the different wheat genotypes used in this study. Interestingly, there was variation among genotypes in the timing of these transitions in root histology, and this variation was explained by the dosage of the 1RS arm in bread wheat: RD2 and RAD4 transitioned earlier from multiple metaxylem vessels and a larger stele to a single, central metaxylem vessel and a smaller stele than did R0 and RA1. In the top region, all of the root traits differed significantly among genotypes except average CMX vessel diameter and CMX vessel number. Here, the average CMX diameter was calculated as the mean of the diameters of all CMX vessels of a given genotype; because the number of CMX vessels differed among genotypes, so did the total CMX vessel area. Interestingly, all of the root traits in the top region showed a negative slope in regression analysis against 1RS dosage, and most of these slopes were significant, especially for stele diameter, total CMX vessel area, and peripheral xylem pole number (a minimal sketch of this type of dosage regression follows this paragraph). Variation in these traits was explained by 1RS dosage, with root traits becoming smaller at higher 1RS dosage. Significant positive correlations among almost all of the root traits from the top region and mid-region of the roots suggested their interdependence in growth and development. Root diameter could not be measured for all replicates of each genotype because of degeneration and mechanical damage to the cortex and epidermis.
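The snippet below is a minimal sketch of the kind of regression described above, relating a root trait to 1RS dosage. The dosage levels and trait values are invented placeholders (not the measurements from this study), and scipy is simply the tooling chosen for this sketch; it only shows how a slope and R² of the reported sign and form would be obtained.

```python
# Minimal sketch of regressing a root anatomical trait on 1RS dosage.
# Dosages and stele diameters below are hypothetical placeholders, not study data.
from scipy.stats import linregress

dosage = [0, 0, 2, 2, 4, 4]                         # number of 1RS translocation arms
stele_diameter_um = [410, 402, 381, 375, 352, 348]  # hypothetical stele diameters (micrometres)

fit = linregress(dosage, stele_diameter_um)
print(f"slope = {fit.slope:.1f} um per 1RS arm")  # negative slope mirrors the reported trend
print(f"R^2   = {fit.rvalue ** 2:.3f}")
print(f"p     = {fit.pvalue:.4f}")
```

A significant negative slope of this kind, with trait values decreasing as 1RS dosage increases, is the pattern described for stele diameter, total CMX vessel area, and peripheral xylem pole number.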

Earlier, the rate of cortical death in seminal roots had been investigated in different cereals. In the root tip, only two traits, CMX vessel area and CMX vessel diameter, were traceable because of the developmental status of the root tip. A negative slope and a significant R² value in the regression analysis indicated an effect of 1RS dosage on CMX vessel area and CMX vessel diameter, suggesting narrower metaxylem vessels with increasing 1RS dosage. In roots, the central metaxylem vessel is the first vascular element to be determined and to differentiate; here, serial cross sections of the root tips confirmed it as the first differentiated vascular element in wheat. The other vascular components differentiate thereafter in relation to the first-formed metaxylem vessel. Feldman first reported that not all metaxylem vessels are initiated at the same level.

Root morphology and root architecture are responsible for water and nutrient uptake, while within root anatomy the xylem vessels are essential for transporting water and nutrients to the shoots to allow continued photosynthesis. Variations in xylem anatomy and hydraulic properties occur at the interspecific, intraspecific, and intraplant levels. Variations in xylem vessel diameter can drastically affect axial flow because of the fourth-power relationship between radius and flow rate through a capillary tube described by the Hagen–Poiseuille law, Q = π r^4 ΔP / (8 μ L). Thus, even a small change in mean vessel diameter has a disproportionately large effect on specific hydraulic conductivity for the same pressure difference across a segment; for example, a 20% reduction in vessel radius reduces flow through that vessel to about 41% of its original value (0.8^4 ≈ 0.41). Xylem diameters tend to be narrower in drought-tolerant genotypes and at higher temperatures. Smaller xylem diameters impose higher flow resistance and slower water flow, which helps the wheat plant survive water-stressed conditions. Richards and Passioura increased the grain yield of two Australian wheat cultivars by selecting for narrow xylem vessels in seminal roots. The results of this study showed that the presence of 1RS in bread wheat increased root biomass and reduced the dimensions of some root parameters, especially the central metaxylem vessel area and diameter, in the root tip as well as at the top of the root. Manske and Vlek also reported that wheat genotypes with the 1RS translocated chromosome arm had thinner roots and higher root-length density compared with normal wheat carrying the 1BS chromosome arm under field conditions. These results might suggest a higher root number or more extensive root branching in 1RS translocation wheats. Among 1RS translocation wheats, a significant association was observed between root biomass and grain yield under well-watered and droughted environments. Narrow metaxylem vessels and higher root biomass provide 1RS translocation wheats with better adaptability to water stress and make them better performers for grain yield.

Plant development is particularly sensitive to light, which is both the energy source for photosynthesis and a regulatory signal. Upon germination in the dark, a seedling undergoes a developmental program named skotomorphogenesis, which is characterized by an elongated hypocotyl, closed cotyledons, an apical hook, and a short root. Exposure to light promotes photomorphogenesis, which is characterized by a short hypocotyl, open cotyledons, chloroplast development, and pigment accumulation. In addition to light, photomorphogenesis is also regulated by several hormones, including brassinosteroid (BR), auxin, gibberellin (GA), and strigolactone (SL).

The molecular mechanisms that integrate light and hormonal signals are not fully understood. Light is perceived by photoreceptors, which regulate gene expression through several classes of transcription factors. Downstream of the photoreceptors, the E3 ubiquitin ligase COP1 acts as a central repressor of photomorphogenesis, targeting several transcription factors for proteasome-mediated degradation in the dark. Light-activated photoreceptors directly inhibit COP1’s activity, leading to the accumulation of COP1-interacting transcription factors, such as HY5, BZS1, and GATA2, which positively regulate photomorphogenesis. Recent studies have uncovered mechanisms of signal crosstalk that integrate light signaling pathways with the BR, GA, and auxin pathways. The transcription factors of these signaling pathways directly interact with each other in cooperative or antagonistic manners to regulate overlapping sets of target genes. BR has been shown to repress, through the transcription factor BZR1, the expression of positive regulators of photomorphogenesis, including the light-stabilized transcription factors GATA2 and BZS1. BZS1 is a member of the B-box zinc finger protein family; it has two B-box domains at its N terminus but no known DNA-binding domain, and it is unclear how BZS1 regulates gene expression. Recent studies have shown that SL inhibits hypocotyl elongation and promotes HY5 accumulation in Arabidopsis plants grown under light, but the molecular mechanisms through which SL signaling integrates with light and other hormone pathways remain largely unknown.

Immunoprecipitation of protein complexes followed by mass spectrometry analysis (IP-MS) is a powerful method for identifying interacting partners and post-translational modifications of a protein of interest. In particular, research in animal systems has shown that combining stable isotope labeling with IP-MS can quantitatively distinguish specific interacting proteins from non-specific background proteins (a minimal sketch of this ratio-based filtering is given below). Stable isotope labeling in Arabidopsis (SILIA) has been established as an effective method of quantitative mass spectrometry; however, the combination of SILIA with IP-MS had yet to be established. To further characterize the molecular function of BZS1, we performed SILIA-IP-MS analysis of the BZS1 protein complex and identified several BZS1-associated proteins, among them COP1, HY5, and the BZS1 homologs STH2/BBX21 and STO/BBX24. We further showed that BZS1 directly interacts with HY5 and positively regulates HY5 RNA and protein levels. Genetic analysis indicated that HY5 is required for BZS1 to inhibit hypocotyl elongation and promote anthocyanin accumulation. In addition, BZS1 is positively regulated by SL at both the transcriptional and translational levels. Plants overexpressing a dominant-negative form of BZS1 show an elongated-hypocotyl phenotype and reduced sensitivity to SL, similar to the hy5 mutant. Our results demonstrate that BZS1 acts through HY5 to promote photomorphogenesis and is a crosstalk junction of light, BR, and SL signals. This study further advances our understanding of the complex network that integrates multiple hormonal and environmental signals.
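To make the quantitative logic of SILIA-IP-MS concrete, the following is a minimal sketch, under simplified assumptions, of how heavy/light intensity ratios can separate bait-associated proteins from non-specific background. The protein names are drawn from the interactors mentioned above, but the labeling scheme, intensities, and cutoff are hypothetical illustrations, not data or parameters from this study.

```python
# Hypothetical SILIA-style quantification: the bait (e.g., BZS1) immunoprecipitate is
# assumed heavy-labeled and a control immunoprecipitate light-labeled, so specific
# interactors should show high heavy/light (H/L) ratios while background proteins
# appear at similar intensity in both channels. All numbers are illustrative.
hypothetical_intensities = {
    # protein: (heavy_intensity, light_intensity)
    "COP1": (8.2e6, 0.9e6),
    "HY5": (5.1e6, 0.6e6),
    "STH2/BBX21": (3.4e6, 0.5e6),
    "abundant background protein": (7.0e6, 6.8e6),
}

RATIO_CUTOFF = 3.0  # hypothetical enrichment threshold

for protein, (heavy, light) in hypothetical_intensities.items():
    ratio = heavy / light
    call = "candidate specific interactor" if ratio >= RATIO_CUTOFF else "likely background"
    print(f"{protein}: H/L = {ratio:.1f} -> {call}")
```

In practice, label-swap replicates and statistical tests on the ratio distribution would replace the single fixed cutoff used here.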