Category Archives: Agriculture

This division of scientific labor transferred directly to the US as well

Lave et al. have proposed the name critical physical geography for research that “combines critical attention to power relations with deep knowledge of biophysical science or technology in the service of social and environmental transformation.” Such work neither oversimplifies physical geography as “naively positivist” nor seeks to criticize physical geography from the outside. Rather, CPG “requires critical human geographers to engage substantively with the physical sciences and the importance of the material environment in shaping social relations, while expanding physical geographers’ exposure to and understanding of the power relations and human practices that shape physical systems and their own research practices.” The need for CPG, they argue, arises both from the ubiquity of human influences on biophysical processes—reflected in the increasing adoption of the term “Anthropocene”—and from the insight that scientific concepts and ideas are socially mediated or “co-produced” with the landscapes they seek to describe and understand. At the core of CPG lies a “reflexive and integrative epistemological spirit” that strives “to produce critical biophysical and social explanations while also reflecting on the conditions under which those explanations are produced.” CPG thus involves scrutinizing not only the findings but also the concepts and categories of physical geography. These concepts and categories must be understood, moreover, both in terms of their theoretical origins within a discipline and in relation to the broader social and institutional contexts of their production. Scientific categories have histories; they should not be taken for granted as given or natural, but understood as the result of actions taken by particular people in particular contexts. This is especially important in cases where repeated use over time has cemented concepts into the literature and occluded the decisions and assumptions that attended them at the outset.

Such decisions necessarily reflected, in some measure, the social conditions in which they were made, and they very likely rested on assumptions that may have been faulty from the start, or that may have become faulty as conditions subsequently changed. Critical understanding of the history of concepts and categories used by physical geographers is not the only task of CPG; the larger goal is to address contemporary problems and issues more effectively. But it is a necessary part of CPG, insofar as the concepts and categories used today may benefit from a critical examination of their origins and histories. Elsewhere I have treated the terms carrying capacity and anthropogenic in this spirit. This paper explores the origins of key concepts and practices in range science, a field of applied ecology that arose in the United States around the turn of the 20th century. The history of the discipline has received remarkably little attention from scholars, even within range science; critical scrutiny of it along the lines of CPG is virtually non-existent. Prompted by concern that uncontrolled livestock grazing was degrading Western public lands, federal government agencies tasked scientists to find the causes of degradation and devise ways to reverse it. What emerged was a set of ideas about how livestock, herders, herding dogs and wild predators interacted to impact vegetation for better or worse, and a corresponding set of practices that were subsequently implemented across the West’s vast public rangelands: fencing, regulated stocking rates, and predator control. In the century since this model was born, the connection between predator control and fencing has become invisible; the history told here allows us to see that rangeland policies might usefully be reconsidered in light of this lost connection.

It also uncovers a key assumption of the logic behind the model—namely, that reduced labor costs would offset the costs of fencing—and it reveals a historical contingency that went on to have profound implications for Western rangelands: the subordination of range science to timber production and therefore fire suppression.

On May 9, 1907, the famous naturalist and Chief of the US Department of Agriculture’s Bureau of Biological Survey, C. Hart Merriam, sent a short memo to Gifford Pinchot, head of another USDA agency, the recently created US Forest Service. “Dear Mr. Pinchot: Your proposition to build a wolf and coyote proof fence on the Imnaha National Forest in Oregon is of great interest to us, and the Biological Survey will gladly cooperate with the Forest Service in any way possible to secure satisfactory results.” Three sentences later, Merriam—who in his career described more than 600 species of mammals—concluded with a blunt recommendation: “After the fence is completed, all wolves, coyotes, mountain lions and wild cats should of course be killed or driven out before the sheep are brought in.” He made no mention of the purpose of the project, and his own agency’s involvement was quite limited. But Merriam took an interest because the project was a scientific experiment, for which the fence was an important apparatus.

The Coyote-Proof Pasture Experiment was a joint effort between the Forest Service and a third USDA agency, the Bureau of Plant Industry. Conceived by Pinchot, it was designed by Frederick Coville, Chief Botanist in the BPI, and although it was not the first scientific experiment in range management, as is sometimes claimed, it was the first to be deemed successful, and its results helped transform the very institutions that had produced it. Inspired by this perceived success, the Forest Service permanently took over range research from the BPI in 1910, and the young scientist whom Coville had hired to conduct the experiment, James T. Jardine, became Inspector of Grazing for the Forest Service. Jardine’s collaborator, Arthur Sampson, went on to become “the father of range science.” The Wallowa experiment thus had enormous implications for how rangelands would be studied and managed for the rest of the 20th century.

From four square miles in the mountains of eastern Oregon, a model of range management based on fencing and predator control spread across the rangelands of the western US in a matter of decades. In the second half of the century, the model was exported to the developing world. Fenced pastures and the near-total absence of large predators have by now been ubiquitous on US rangelands for so long that they are widely taken for granted. “Open range” no longer signifies the absence of fences altogether, but instead their absence along remote roadways, where livestock may imperil motorists without the livestock owner being liable for damages. Coyotes, black bears and mountain lions persist throughout the West, and wolves and grizzly bears are still found in parts of the northern Rockies, but their numbers are too small to pose a significant threat to livestock: less than 0.25 percent of US cattle, for example, are lost to predators, including dogs. Range fences aren’t even designed to repel predators, because doing so would be far too expensive relative to the small risks of predation. Today, fencing and predator control are treated as separate issues in public debate and policy recommendations, and the historical link between them is forgotten. To be sure, both fencing and predator control predated the Coyote-Proof Pasture Experiment, and their combined use on rangelands might be considered coincidental. But Coville and Jardine’s experiment united them in an effort to understand range livestock production scientifically and to use that understanding in the formulation of policies. The inspiration for the experiment lay not in rangelands at all, but in the much smaller pastures of the eastern United States and Europe, where fencing was primarily a means of keeping livestock away from crops, rather than keeping predators away from livestock. To apply the pasture model to the much drier, expansive lands of the Western range, however, the Forest Service had to interpret the results of the Wallowa experiment in ways that elided or overlooked many important details. Fencing had to make sense at the scale of thousands or tens of thousands of acres, for example, rather than 1–40 acres. The results with sheep had to be extended to cattle, even though cattle are far less vulnerable to predation. Most importantly, the attribution of causality had to shift from the removal of predators—which putatively reduced livestock trampling of vegetation—to the control of stocking rates by fencing, which came to be viewed as improving the composition and production of range vegetation. The Coyote-Proof Pasture Experiment sanctioned and catalyzed the institutionalization of a set of practices of US rangeland administration and management that presupposed the combination of fencing and predator control. The model rested on weak scientific foundations, as we will see, but it spread for other reasons, enabled by large public subsidies over many decades, especially in the form of labor under Depression-era jobs programs. This is ironic because reducing labor—in the form of herders—was the semi-visible, ulterior motive of both fencing and predator control.

The first two decades of the 20th century were a period of ongoing and sweeping reorganization within the USDA, especially with regard to the West’s vast public lands.
At the turn of the century, the public lands resided within the Interior Department’s General Land Office, whose principal mandate was to dispose of them under the nation’s various settlement acts. But the Forest Reserve Act of 1891 had authorized the President to withdraw timbered lands from disposal, and as the Forest Reserves grew in size and number, Congress and the Interior Department scrambled to decide how to administer and manage them.

Along with mineral resources and water, the region’s key natural resources were forests and rangelands, both of which were considered to be in crisis due to unrestrained commercial exploitation. The Forest Reserves were justified legally as a means to protect timber and watersheds, but they also encompassed large areas where livestock owners had been grazing their animals for decades or more. The relationship between forests and rangelands, trees and grasses, timber and forage, was at once a scientific, management, and bureaucratic question. Forestry had by this time emerged as a small but recognized scientific field, imported from Europe and first institutionalized at Cornell University in 1898 and at Yale in 1900. The USDA had begun assessments of the nation’s forests in 1876, organized since 1881 under the Division of Forestry, and division chief Bernhard Fernow could point to thousands of pages of published forestry research when he stepped down in 1898. Research on rangelands, in comparison, had barely begun. It had not attracted the agency’s attention until 1895, when the Division of Agrostology was created in response to devastating drought and consequent overgrazing, especially in the Southwest. Little more than taxonomic and reconnaissance surveys had been completed in the West by 1900, when Agrostology was combined with five other divisions into the Office of Plant Industry, renamed the following year as the Bureau of Plant Industry. The Division of Forestry also became a bureau in 1901. Without a land base, however, none of the USDA’s various bureaus “could do more than advise and research.” Congress and the Interior Department had been studying and debating the administration of the Forest Reserves since their inception, but the matter was only resolved in 1905, when the reserves were transferred to the USDA and its new US Forest Service, headed by Pinchot and facilitated by his close friendship with President Theodore Roosevelt. As Pyne notes, the Transfer Act catapulted forestry to the forefront of American conservation not on the basis of its scientific credentials—which were quite meager in comparison to, for example, the US Geological Survey—but by virtue of the 63 million acres of land that the Forest Service suddenly controlled. Dedicated to science as the means of striking an optimal balance between utilization and conservation, Pinchot moved quickly to expand and consolidate research on several fronts. But the bureaucratic divisions within USDA persisted, and expertise on rangelands—what little there was of it—remained in the BPI. The European forestry model was ill suited to North American forests because it failed to recognize the importance of recurrent fires for their functioning and persistence, as Pyne, among others, has shown. But in most places, fires were a function not so much of the trees themselves, but of the grasses that grew beneath and between them, providing the fine fuels in which recurrent fires could start and spread. European forestry’s ignorance of—and prejudice against—fire, then, reflected its ignorance of grasses, which in the European context were deemed important only in “improved” pastures that were both spatially and intellectually segregated from forests.

The Japanese knotweed plants are allowed to grow for two years before pre-harvesting steps are taken

Due to its invasive nature, Japanese knotweed has been able to persistently grow across different terrains. According to data retrieved from the University of Georgia’s Center for Invasive Species and Ecosystem Health, Japanese knotweed has been detected in 43 out of 50 U.S. states. In addition to Japanese knotweed’s ability to grow across different topographies, it has been demonstrated to grow under harsh conditions which other plants might not. Specifically, a group of researchers studying Japanese knotweed growth conditions describe how they expect the plant to continue growing and producing plant metabolites while placed in low-fertile soil with no irrigation. Currently, there exist numerous published techno-economic analyses of plant-based production focusing on biofuels, recombinant therapeutic proteins, industrial enzymes, and antimicrobial proteins for food safety. Here, this thesis will describe a process simulation model for the techno-economic analysis performed on the plant-based production of Japanese knotweed and for the extraction and purification of the biopolymer precursor, resveratrol, which has not been demonstrated before. This study establishes a framework to help inform decisions on the development of a domestic production route for such polymer precursors. Due to Japanese knotweed’s classification as an invasive species and predominant cultivation in China, literature focusing on optimizing its large-scale growth conditions and economics is limited. Luckily, the literature surrounding the production and harvesting of potatoes is vast. Potatoes are subterranean root vegetables native to the Americas, making them a suitable crop on which to model Japanese knotweed rhizomes.

Thus, a techno-economic analysis for the upstream portion of the base case model was performed using data retrieved from UC Davis Agriculture and Resource Economics on potato harvesting. This model considered the different equipment needed, labor costs, land rent, the capital expenditures (CAPEX), and the operating expenditures (OPEX). The base case scenario assumes an annual production capacity of 100 MT of resveratrol. To reach the proposed target, upstream production was modeled using an open-field, staggered plantation of Japanese knotweed plants of about 1,847 acres per batch. Each batch was assumed to have a duration of 2 years. Key assumptions for the proposed farmland in this chapter are listed in Table 2.1 (process assumptions used to define certain parameters within the model; FW, fresh weight; N/A, not applicable). In efforts to accurately model the upstream portion of resveratrol production, the cost of land rent for non-irrigated cropland in the U.S. was investigated. According to the USDA, the average rent paid for non-irrigated land in the U.S. is $128.00 per acre. A further search determined that non-irrigated cropland was available for rent at $29.00 per acre in the southwestern region of South Dakota. Notably, the cost of the non-irrigated land available in South Dakota was roughly 77% below the national average. Due to the availability and affordability of farmland, South Dakota was chosen as the state best fit to model knotweed rhizome production in. Using research from the University of Georgia – Center for Invasive Species and Ecosystem Health, it was confirmed that Japanese knotweed can grow in western South Dakota. Next, South Dakota’s farm operations were assessed to determine whether the state would be able to handle the demand needed for resveratrol production. Dividing the mass of knotweed rhizomes needed for 100 MT of resveratrol production by the mass of rhizomes grown per acre, our estimates yield a total of 3,695 acres of non-irrigated land needed for suitable growth of Japanese knotweed to meet our target production level.
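As a quick arithmetic check, the staggered-batch acreage and land-rent figures above can be reproduced in a few lines. This is an illustrative sketch (Python) using only the numbers stated in the text:

```python
# Sanity check of the acreage and rent figures reported above.
ACRES_PER_BATCH = 1_847      # acres planted per batch (from text)
STAGGERED_BATCHES = 2        # 2-year batches planted in alternating years
TOTAL_ACRES = 3_695          # total non-irrigated acres (from text)

SD_RENT_PER_ACRE = 29.00     # $/acre, southwestern South Dakota (USDA)
US_RENT_PER_ACRE = 128.00    # $/acre, U.S. average, non-irrigated (USDA)

print(ACRES_PER_BATCH * STAGGERED_BATCHES)                # 3694, ~TOTAL_ACRES
print(f"${TOTAL_ACRES * SD_RENT_PER_ACRE:,.0f}/yr rent")  # $107,155/yr
print(f"{1 - SD_RENT_PER_ACRE / US_RENT_PER_ACRE:.0%} below U.S. average")  # 77%
```

At $29.00 per acre, renting the full 3,695 acres costs on the order of $107,000 per year, which is consistent with land rent being a modest line item in the overhead totals discussed later.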

In 2021, the USDA reported South Dakota operating 43.2 million acres of land for harvesting of crops such as corn, wheat, soybeans, and sunflower. Once the presence of Japanese knotweed in the state and the available acres for farm operation were confirmed, South Dakota remained a suitable option for domestic production of Japanese knotweed.

As mentioned above, certain assumptions were made during the design of the upstream production process model. The first assumption made was the concentration of free resveratrol present in Japanese knotweed rhizomes cultivated within North America. While the growth of Japanese knotweed in North America has been previously reported in scientific literature, the concentrations of resveratrol and its glucoside, polydatin, are seen to differ between samples, varying by two to three orders of magnitude. Further analysis demonstrated the impacts of seasonal variations, available nitrogen in the soil, and other environmental factors such as the presence of insects and fungus on the concentration of stilbenes present in Japanese knotweed plants. A variety of sources, shown in Table 2.1, describe an average free resveratrol concentration near 1.4 mg/g FW knotweed rhizome. When data on the total resveratrol concentration, both polydatin and resveratrol, were analyzed, the total resveratrol concentration in knotweed was determined to reach an average of 9.8 mg/g FW knotweed rhizome, 7-fold higher than for free resveratrol alone. Using this information, the natural field-grown Japanese knotweed rhizomes were assumed to contain free resveratrol and polydatin at a 1:3 ratio, specifically in concentrations of 0.5 mg/g FW and 1.5 mg/g FW, respectively. These concentrations and ratios fall toward the conservative end but still align with the range reported for Japanese knotweed. Another assumption made in the upstream process model was that there are no costs attributed to the biological containment of the knotweed within the field.
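The mass balance implied by these concentration assumptions can be made explicit. The sketch below (Python) is illustrative only: the per-acre rhizome yield is not stated in the text and is back-calculated here, polydatin is counted as resveratrol equivalents in line with the “total resveratrol” framing above, and extraction losses are ignored.

```python
# Back-of-the-envelope mass balance for the 100 MT/yr resveratrol target.
FREE_MG_PER_G = 0.5        # free resveratrol, mg/g FW rhizome (model assumption)
POLYDATIN_MG_PER_G = 1.5   # polydatin, mg/g FW rhizome (1:3 ratio, model assumption)
TOTAL_MG_PER_G = FREE_MG_PER_G + POLYDATIN_MG_PER_G  # 2.0 mg/g FW, as equivalents

TARGET_MT = 100            # annual resveratrol target, metric tons
TOTAL_ACRES = 3_695        # acres reported above

target_mg = TARGET_MT * 1e9                      # 1 metric ton = 1e9 mg
rhizome_mt = target_mg / TOTAL_MG_PER_G / 1e6    # mg -> g of rhizome, then g -> MT

print(f"Fresh rhizome required: {rhizome_mt:,.0f} MT/yr")           # 50,000 MT
print(f"Implied yield: {rhizome_mt / TOTAL_ACRES:.1f} MT FW/acre")  # ~13.5
```

The implied yield of roughly 13.5 MT of fresh rhizome per acre is an inference from the stated acreage, not a figure given in the text.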

Notably, it is pertinent to mention here that knotweed can regenerate from pieces of pre-existing rhizome as small as half an inch in length. Data provided by New Hampshire’s Department of Agriculture suggest that it has allelopathic properties, releasing chemicals that eliminate native vegetation. Reports on Japanese knotweed growth describe its ability to spread vertically for 10 ft and horizontally for about 40 ft. Michigan’s Department of Natural Resources published an article mentioning knotweed rhizomes’ capability to penetrate depths of 7 ft in certain soils. A bulletin written by researchers at Montana State University reported cases of Japanese knotweed creating monotypic stands while disrupting infrastructure like concrete in the process. No preventative measures were modeled, although these may be necessary to fully contain the Japanese knotweed from spreading outward from the land dedicated to growing it. The model can instead be interpreted to have the fallow land surrounded by a deep and wide trench along its perimeter at minimum additional cost. As stated above, the practices required for upstream production of naturally grown Japanese knotweed followed the potato harvesting practices listed by the UC Davis Agriculture and Resource Economics Center. It is imperative to mention that the potato cultivation data come from the intermountain region of California, along the Klamath Basin, and not southwestern South Dakota as this model emphasizes. Nevertheless, the practices, equipment, labor costs, and investments are assumed to be analogous for such production. The breakdown of the equipment, production practices, and costs is described per practice in Table 2.4. First, the model assumed the acres of land needed for Japanese knotweed production had to undergo some preparation prior to planting. Here, 80% of the fresh acreage is assumed to be chopped using a heavy stubble disc, and any residual crops remaining after the initial cutting are mixed in with the soil using a ring roller. Once the mixing is complete, 50% of the acreage is assumed to undergo deep ripping in an effort to alleviate any hardened soil. The first pre-harvesting procedure is the spreading of a desiccant over some of the Japanese knotweed plants. The desiccant prevents further growth of the plants’ tops.

Notably, this step is a method used for potatoes and may not be suitable for Japanese knotweed production. While this step may not be required, it is applied in an effort to dry out the invasive Japanese knotweed found above ground. Here, the desiccant is applied to only 50% of the acreage using an aircraft. Once the desiccant has been added, the beds and vines of the Japanese knotweed plants are rolled and cut. The next step the model incorporates is the harvesting step. The Japanese knotweed plants are dug up, harvested, and field cleaned in one step using a single tractor attached to a power take-off driven four-row digger. The harvested knotweed rhizomes are then assumed to be placed in a 15-ton bottom-conveyor truck designated for transporting the rhizomes to a storage facility. Here, the truck is stationed and moved beside the harvester in the open field to capture rhizomes as they are harvested. The transportation of the knotweed rhizomes is assumed to be only a 10-mile round trip from the field to the storage facility. The transportation costs are shown below in Table 2.4. Once the trucks hauling the knotweed rhizomes arrive at the storage facility, the rhizomes are moved via a conveyor located on the back of the trucks into a large holding tub, where they are washed to remove any soil present. While downstream processing may only require the rhizomes to remain in storage for a short time, storage fees were still included within our estimates. The total operational cost for the upstream portion was calculated to be slightly under $1.1 million per year. A breakdown of the economics for each category is as follows. The labor rates used in the model were matched with the values used by the UC Davis Agriculture and Resource Economics Center: specifically, a wage of $20.00 per hour for a machine operator and $14.00 per hour for general labor, plus an overhead charge of 37%. These values were understood to be the average industry rate as of January 2015 and were not updated to 2022 values. Within the model, the fuel, lube, and repair cost for each practice was estimated by multiplying the hourly operating cost of each piece of equipment for the selected practice by the hours per acre deemed necessary for potato harvesting. The hours needed per practice, the cost of fuel, and the repair costs used for this model were retrieved from the data provided by the UC Davis Agriculture and Resource Economics Center. The values used within their report are described as coming from calculations from the American Society of Agricultural Engineers and data from the Energy Information Administration, Department of Energy. The only material cost set when modeling the upstream portion was the cost of the desiccant. The cost of desiccant for an acre of knotweed was aligned with the per-acre cost reported for chipping-potato production. Only two practices incorporated any custom costs: the pre- and post-harvest steps. These costs were attributed to operating the aircraft that spreads the desiccant and to any costs incurred when storing knotweed rhizomes. In addition to the costs of each practice, a breakdown of the cash overhead is also shown in Table 2.4. Field sanitation described within the table refers to any sanitation services provided to laborers in the fields, such as portable bathrooms and hand-washing areas. A single field supervisor is assumed to be managing the operations within the model; this wage was set at $57 per acre. Land rent values for non-irrigated land in South Dakota were used as mentioned above.
Liability insurance, the standard policy designed to help cover expenses arising if an individual sustains bodily injury while on the property, was set at $1 per acre. Notably, crop insurance is an additional standard insurance available to open-field growers, which may provide coverage in the case of an unavoidable loss of crops; no crop insurance was estimated or used within the modeling of the upstream production of knotweed rhizomes. The next expense is office expenses, which refer to any office supplies, telephones, road maintenance, bookkeeping, accounting, and legal fees which may be incurred during production. This value was also aligned with the values listed by the UC Davis Agriculture and Resource Economics Center. Property insurance is an additional expense included in the cash overhead cost; it accounts for any property loss and is charged at $1 per acre. The last expense is equipment investment repairs, the cost associated with annual preventative maintenance, set at $4 an acre. Once the cash overhead was calculated, the total annual operating cost, or OPEX, for the upstream production was estimated to be $1.4 million.
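To make the overhead roll-up concrete, the illustrative sketch below (Python) sums the per-acre rates quoted above. Office expenses are omitted because no per-acre rate is quoted for them, so the totals are approximate.

```python
# Illustrative cash-overhead roll-up from the per-acre rates in the text.
ACRES = 3_695

overhead_per_acre = {
    "field supervisor": 57.0,               # $/acre
    "land rent (SD, non-irrigated)": 29.0,  # $/acre
    "liability insurance": 1.0,             # $/acre
    "property insurance": 1.0,              # $/acre
    "equipment investment repairs": 4.0,    # $/acre
}

cash_overhead = ACRES * sum(overhead_per_acre.values())
print(f"Cash overhead: ${cash_overhead:,.0f}")             # $339,940

# Added to the ~$1.1M operating cost of the field practices, this overhead
# brings the total toward the ~$1.4M annual OPEX reported above.
print(f"Approximate OPEX: ${1.1e6 + cash_overhead:,.0f}")  # ~$1.44M
```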

Progression of PD symptoms has been monitored under controlled conditions

As an analysis of infection persistence via detection assays, we tested the effects of variety and the interaction between treatment and sampling date on the count of plants that tested positive, using a generalized linear mixed model with binomial error implemented in the package glmmTMB (function glmmTMB).

A total of 6,236 SNPs were detected as being significantly associated with the host Vitis using the Bonferroni correction for multiple hypothesis testing with p-values < 0.05. However, this still does not exclude the possibility of significance via phylogenetic proximity across the tree. Given the few clades of Vitis-associated bacteria, this dataset did not offer the power to use the most conservative method available in Scoary, the worst pairwise comparison p-value, which would identify only SNPs that have arisen independently across the phylogeny. However, the data presented here show genes that have a significant corrected p-value and also a best pairwise comparison p <= 0.125, giving some indication of independent emergence. Twenty-two SNPs fit those criteria. Among those 22, there were 9 genes in which several SNPs emerged as significant, likely due to linkage disequilibrium. Among the resulting total of 13 genes identified with significant SNPs, 5 are identified only to the level of hypothetical protein. Most genes that had multiple SNPs detected as significant also had identical frequencies of those SNPs across the populations, showing high linkage of those sites. Due to this, only one SNP is shown per gene and included in the enrichment tests. The 8 non-hypothetical significant genes are azu, carA, clpP, nadD, mutY, ubiJ, exbD_1/2, and lpxD3/4. KEGG orthologies show that 5 of the 8 identified and named genes are classified in the metabolism family in terms of their molecular functions. All identified SNPs were present at lower frequencies in strains from Vitis than from other hosts, with a mean frequency of 0.047 in Vitis-derived strains and a mean frequency of 0.49 in non-Vitis-derived strains.
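The two-step filter described above (Bonferroni-corrected significance followed by the best pairwise comparison p-value, then collapsing linked SNPs to one representative per gene) is straightforward to express in code. The sketch below is a hypothetical Python/pandas reconstruction; the column names are illustrative, not Scoary’s actual output headers.

```python
import pandas as pd

def filter_host_associated_snps(scoary: pd.DataFrame) -> pd.DataFrame:
    """Two-step filter mirroring the analysis above: keep SNPs with
    Bonferroni-corrected p < 0.05 AND best pairwise comparison p <= 0.125
    (weak evidence of independent emergence), then keep one representative
    SNP per gene, since linked SNPs had identical frequencies.
    Column names ("gene", "bonferroni_p", "best_pairwise_p") are assumed."""
    hits = scoary[(scoary["bonferroni_p"] < 0.05)
                  & (scoary["best_pairwise_p"] <= 0.125)]
    # One SNP per gene: take the most significant, drop the linked rest.
    return hits.sort_values("bonferroni_p").drop_duplicates(subset="gene")
```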

Gene gain and loss was also tested using Scoary for host associations, with the same p-value corrections, going from 473 genes with Bonferroni p < 0.05 to 37 genes that also had a best pairwise comparison p <= 0.125. Out of the 37 identified genes, 33 are identified only as hypothetical proteins. The four named genes are xerC_1/2, hcaB, mdtA_1/2/3, and a cluster identified using Panaroo that includes the three genes CnrA, swrC_2, and acrF_2. These genes each belong to a different KEGG orthology.

Information about X. fastidiosa ssp. fastidiosa infections in coffee in Central America is still limited – little symptomatology is associated with the disease; however, there are persistent infections in the field. The results of this work suggest that the host jump included adaptation of the X. fastidiosa strains to Vitis and lessened the ability of the U.S. strains to infect C. arabica. However, the ability of the pathogen to move throughout the C. arabica plants’ xylem vessels demonstrates a higher infectivity than expected in a fully resistant plant. Movement of the strains away from the inoculation point was observed, which shows the ability of the bacteria to successfully degrade the pit membrane, which is often a barrier to colonization. However, in contrast to that finding, there was a reduction in positive-testing C. arabica plants over the course of the year after the inoculation. This demonstrates a lower infectivity in C. arabica than the strains have in V. vinifera, where infections in a greenhouse are sustained post-inoculation. This could demonstrate a reduction in the ability of the Vitis-adapted strains to create chronic infections in C. arabica. It is possible that the chronic infections of X. fastidiosa in C. arabica plants observed in situ may be caused by high rates of re-inoculation by insect vectors rather than strain-level variation in infectivity. That is not the case for ssp. fastidiosa in V. vinifera, where just one inoculation is sufficient for high virulence, and only cold winter temperatures have been known to cure infections that are otherwise chronic. The virulence of the U.S. strains to C. arabica is not as high as to V. vinifera, as shown by the lack of severe symptoms in C. arabica.

Given that, this study does not offer evidence that the California ssp. fastidiosa strains are generally more virulent than the ancestral strains in Central America. While not a likely scenario, there is a possibility that instead of experiencing adaptation to a specific host, the introduced strains became generally more virulent, which has been hypothesized about the globally spreading ssp. pauca strain infecting olive in Italy. In this paper, we considered hypervirulence as a possible scenario, but as it is not supported by the data, we were able to rule it out. Symptom development in infected C. arabica plants consists of minor stunted growth, compared to the severe leaf scorch, matchstick petioles, shriveled fruit, and often plant death that occur in V. vinifera. There are records of virulence of X. fastidiosa in C. arabica, such as the evaluation of ssp. pauca in Brazil. In inoculation experiments with those strains, C. arabica may still develop symptoms slowly, and the proportion of positive plants does also decline; however, after 8 months the percentage positive was around 30%, not entirely dissimilar to the results from this project, which extends some uncertainty about this system. All methods used detected genetic signatures of adaptation, and many genetic candidates were identified by multiple methods. We were able to identify genes and SNPs associated with the host Vitis as well as genes under positive selection in strains isolated from Vitis. These genes included many hypothetical proteins; however, genes with known functions pertaining to infections by X. fastidiosa were also identified.

The genes identified by multiple methods included those whose functions in X. fastidiosa have been previously investigated, while many still have unknown functions. ClpX, the gene for the ATP-dependent protease ATP-binding subunit, was previously identified as being upregulated four-fold during induction of the biofilm state, a physiological condition that is vital for virulence and vector colonization. Mutations in the copper-related gene copA have been found to drastically alter copper tolerance in X. fastidiosa, which is vital for survival in agricultural settings given the frequency of copper treatments against fungal infections in the vineyard, a fungicide in use since the 18th century. It is conceivable that after a host jump into a vineyard, it would be necessary for pathogens to survive higher levels of copper exposure. DegP has been found to be upregulated upon heat shock in X. fastidiosa. TolB encodes a translocation protein involved in membrane integrity and has been shown to be important for biofilm development, an important aspect of pathogenicity. These are among a suite of other genes, either hypothetical proteins or simply understudied in X. fastidiosa, that show evidence of being involved in this climate and host shift.

Now that these genes have been identified, they are prime candidates for targeting in future experiments to determine their effects on host range and climate adaptation. This study also includes one hypothesis that lies outside the general narrative of the introduction event, namely the evaluation of infectivity of ssp. multiplex towards C. arabica, which has not been tested before. While in California ssp. multiplex has never been found infecting grapevine in the field, it has been shown to generate nonpersistent infections in the greenhouse, similar to what we observed in C. arabica. Recently, infections of grapevine by ssp. fastidiosa have been detected in the field in Virginia. While the ssp. multiplex infections were not highly virulent, they were just as persistent as the ssp. fastidiosa strains. All three main subspecies of X. fastidiosa are able to infect C. arabica to some degree. In conclusion, we have identified a suite of genes that are related to a host switch to Vitis with a corresponding reduction in the ability to infect an ancestral host. These data support the hypothesis that the shift was not a host range expansion of the subspecies, but a reduction of the ability to infect a former host while optimizing the ability to infect a new host species.

The results of in-situ experiments can differ greatly from those conducted in controlled laboratory conditions, especially in applied biological systems. In pathology specifically, there are often ethical and regulatory hurdles that prevent the introduction of disease-causing pathogens into naïve systems, whether for human health, food security, or natural systems. Researchers are therefore typically constrained to using controlled or model systems that are ethically, financially, and logistically feasible. For example, mice are often used to understand human disease, and quarantine greenhouses are used to study plant disease. These proxy systems are imperfect: just as we intuitively know that there are differences between a mouse and a human, so there are differences between the potted plant on your windowsill and the towering redwoods of California. These differences are intuitive, but also biological, as the effects of interactions between organisms and their complex environments are impossible to recreate, and these environmental and physiological differences directly affect the outcomes of research in plant pathology. The chances to experimentally infect plants in realistic conditions are few and far between, so rare opportunities provide critical research windows. One such system that suffers from a lack of realistic experimental control is Pierce’s disease (PD) of grapevines, a persistent burden to the vineyards of California since the introduction of the etiological agent into North America in the late 1800s. The pathogenic bacterial strains that colonized Vitis vinifera and cause PD have since spread to Taiwan, Europe, and the Eastern United States. Although the relationship between Xylella fastidiosa and PD has been understood since 1978, this study is the first to document the progression of symptoms in mature commercial grapevines under field conditions over the course of several years with a known inoculation time and location on the plant.
Despite the challenges of conducting this work in-situ, predominantly concern from the proprietors about infection spreading, we gained insights that counter much of the classic understanding of this disease system and deepen our understanding of symptom development and bacterial multiplication and movement in PD. Several studies have documented PD, which collectively create expectations for the progression of this disease. The pathogen is known to move through mature grapevines at least fast enough to enter the cordon from the shoot within the first year of infection. However, given that removal of the cordon does not reduce disease severity in subsequent years, the pathogen must be moving further into the plant, presumably into the trunk, during the first year. It is also well established that there are differences in response to X. fastidiosa infection based on Vitis vinifera cultivar, with some cultivars more susceptible than others. Symptoms are presumed to first appear late in the season in which infection occurred, except in especially susceptible cultivars, such as Chardonnay, in which they can arise sooner. Characteristic symptoms have been described as leaf scorch, uneven lignification of the shoots, shriveled berries, and “matchstick” petioles, a situation in which the leaf blade becomes detached from the petiole. In this system, disease symptoms and pathogen infection do not always persist through the winter, a phenomenon called overwinter curing. However, the mechanism for recovery has not yet been determined, and current explanations range from pathogen temperature susceptibility to plant defensive responses. Expectations for overwinter curing are both temperature- and cultivar-dependent, and in Napa Valley, the expected recovery rate for early-season inoculations is around 30%. However, such data may not translate to plants grown in the vineyard for various biotic and abiotic reasons. Greenhouse conditions do not account for the complex nutritional, pest, and other biotic and abiotic pressures of the field, and greenhouse assays of grapevines are typically conducted with an excised shoot, rather than a mature plant. Observational studies are similarly limited because they often miss aspects such as recovery, asymptomatic infections, or unexpected symptoms by not relying on controls, but instead seeking out expected symptoms. These studies have given us a view of the characteristics of PD and its progression that we were able to test experimentally.

Resowing of these strips can provide extended resources and also help reduce the occurrence of weeds

However, the long-term effects of non-native plant species on pollinator populations are not well known. Invasion, which leads to plant species declines and losses in resource heterogeneity, may negatively impact forager biodiversity, as seen in other systems. Overall, these studies suggest that non-native species play varying roles in pollinator networks, depending on their ability to provide foraging resources and their impact on the native plant community. Comparing interaction networks before and after an event can tell us more about the maintenance of pollination services than typical biodiversity studies can. Unfortunately, though empirical research on the spatial and temporal variation of plant–pollinator networks is badly needed, the lack of historic data and the intensive sampling effort required to identify multiple empirically gathered networks have limited research in this area. Only a few empirical network studies have specifically examined how habitat alteration impacts network architecture. Forup and Memmott compared pollinator networks for old intact hay meadows and restored hay meadows, and found no significant difference between the two in terms of plant or insect species richness or abundance, but did find that old meadows had a slightly higher proportion of potential links between plants and pollinators. In a second study, Forup et al. examined ancient and restored heathlands and found that, while the plant and pollinator communities were similar, the interaction networks were significantly less complex, in terms of connectance values, in the restored heathlands. These results suggest that even in ‘restored’ human-altered landscapes supporting similar levels of species diversity, the complexity of plant–pollinator interactions may not be easily recreated, and this may ultimately limit the long-term persistence of plant and pollinator communities.

In communities with high degrees of network complexity, such as the species-rich plant and pollinator communities of the tropics, network recovery after human alteration may be less likely. Most remaining studies have examined plant–pollinator interactions over time within the same sites, and these have largely focused on intra- and inter-annual variations in network dynamics. Studies comparing networks within a single year have often found substantial species turnover in composition, emphasizing the need to consider plant–pollinator networks for shorter and more biologically relevant time periods. One study that examined plant and pollinator interactions on a daily basis also found pronounced species turnover, and found that the most connected species, and thus perhaps the most resilient species, were those with the longest flowering–foraging periods. Studies that have examined variation in pollinator networks across multiple years have also found a large degree of turnover in species composition, but have surprisingly found that the number of plant and pollinator species, connectance, degree of nestedness, and modularity were conserved over the years. Overall, these studies indicate that plant–pollinator systems are dynamic, but that pollinators are flexible in resource use, potentially making networks more resilient to climate change. Furthermore, they indicate that high levels of connectance and nestedness allow for functional redundancies in the network, and greater potential resilience to climate change-induced biodiversity loss. However, research on pollinator networks over multiple years is sorely needed, specifically research which examines how habitat alteration and environmental change impact complex and spatially explicit pollinator network architectures. These future studies will greatly improve our understanding of environmental change impacts on pollinator community dynamics.

As mentioned earlier in the chapter, plants and pollinators provide a number of critical ecosystem services. Throughout this chapter, we have discussed research indicating that alterations in local and regional climate can disrupt plant and pollinator phenology, potentially leading to population and community alteration. In our discussion of pollinator networks, we have further shown that simulated alteration of plant and pollinator phenology can lead to marked changes in community-level interactions.
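For readers less familiar with the network metrics referenced above, connectance is the simplest: the fraction of all possible plant × pollinator links that are actually observed. A minimal sketch in Python, with toy data:

```python
# Connectance of a bipartite plant-pollinator network:
# observed links / (number of plants * number of pollinators).
def connectance(links: set[tuple[str, str]],
                plants: set[str], pollinators: set[str]) -> float:
    possible = len(plants) * len(pollinators)
    return len(links) / possible if possible else 0.0

# Toy network: 3 plants x 4 pollinators with 5 observed links -> 5/12 ~ 0.42
links = {("clover", "bee"), ("clover", "hoverfly"), ("thistle", "bee"),
         ("thistle", "beetle"), ("vetch", "butterfly")}
print(connectance(links,
                  {"clover", "thistle", "vetch"},
                  {"bee", "hoverfly", "butterfly", "beetle"}))
```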

The consequences of these population-level and community-level alterations on ecosystem services could be varied, including potential changes in the quantity, quality, spatial availability, and temporal availability of ecosystem services. Unfortunately, research that directly examines the impact of the various dimensions of local climate change on pollination service acquisition is rare to nonexistent. In the following paragraphs, we discuss how potential outcomes of warming or warming and drying scenarios, specifically reduction in the abundance and diversity of pollinators, may impact ecosystem services provided by wild plants and native pollinators. The impact of pollination disruption on wild plant communities and the ecosystem services they provide is potentially wide-ranging, but largely understudied. Though more than 75% of wild plant species are dependent on insect pollination for reproduction, the impacts of this dependency on community- or population-level ecosystem services are not clear. Most existing studies have focused on single-species analyses of wild plant reproductive success across varying habitat treatments. A recent meta-analysis of these studies has found that self-incompatible, pollinator-dependent plant species exhibited greater declines in fragmented habitats than self-compatible plant species, and across studies, the effects of fragmentation on pollinators were highly correlated with the effects on plant reproduction. Both of these findings suggest that pollination limitation could be a key driver of wild plant population decline. Of the wild plant species studied, 62–73% show pollination limitation, and though the long-term consequences of pollen limitation on population growth are uncertain, simultaneous declines in native plant and pollinator populations suggest a link between these two patterns. Thus, wild plants may face declines if their pollinators exhibit climate-induced spatial or temporal change, or general population decay. Biodiversity loss in wild plant communities can have devastating effects on ecosystem services because wild plants are critical for ecosystem processes in both natural and human-altered landscapes.

Aside from providing humans with food, medicines, fuel, and construction materials, wild plants also support important processes in agricultural, rural, and urban landscapes, such as pest predation, nitrogen fixation, erosion control, water filtration and storage, and carbon sequestration. Lastly, wild plants provide habitat needed for the migration of important seed dispersers and serve as plant propagule reservoirs for the recolonization of disturbed habitats. Thus, wild plants are critical for the function and regenerative capacity of natural and human-altered landscapes, and their decline would undoubtedly reduce the depth and range of ecosystem services they currently provide.

As discussed in the introduction of the chapter, animal pollination is important for crop production and contributes to the stability of food prices, food security, food diversity, and human nutrition. An estimated 15–30% of the American diet depends on insect pollination, and globally the cultivation of pollinator-dependent crops is growing. Thus the loss of pollinators, without strategic market response, could translate into a production deficit of an estimated 40% for fruits and 16% for vegetables. These studies all suggest that climate-induced pollinator declines or disruptions to crop pollination could result in the alteration or reduction of food quantity, quality, diversity, availability, and nutritional content, potentially compromising global food security.

A number of options exist for improving conditions for pollinators and buffering disruption of pollination interactions and general biodiversity loss. Unfortunately, very little research on pollinator restoration has been conducted specifically in the context of climate. In the following paragraphs, we present mitigation strategies that have been developed with respect to other types of environmental change, as they serve as key starting points for climate-specific restoration strategies. Though many of the practices for pollination restoration are similar, restoration projects can vary in their specific objectives and thus may have different concepts of restoration success. In particular, we focus on local and regional habitat mitigation strategies that are aimed at increasing the abundance and diversity of native pollinators, but also briefly discuss the challenges and opportunities for better developing pollinator restoration practices in the context of climate. Generally, the best insurance for protecting pollination services in the face of any alteration in local and regional climate involves maintaining or restoring high abundances and diversities of wild pollinators, their food plants, and their nesting resources throughout their current and predicted geographical ranges. Research on local habitat restoration strategies is the most well-studied area of pollinator conservation and includes a wide range of on-site practices, such as the sowing of flowering strips and the installation of hedgerows. Pollinators are dependent on both flowering and nesting resources. Thus, it is essential to consider pollinator nesting and floral resource needs while deciding on the location, size, configuration, and longevity of the restoration. When considering the selection of plants to include in the local restoration, it is also critical to consider the nectar and pollen needs of the target pollinator community across their foraging periods.

Some studies suggest the strategic planting of ‘framework’ and ‘bridging’ plants, which, respectively, provide resources necessary for supporting large pollinator numbers and provide resources during resource-poor time periods. Bridging plants may become even more important if there is a mid-summer decline in floral resource availability associated with warmer conditions. Furthermore, it is important to consider the facilitative and competitive interactions between the plants within the restoration in order to select a mix that optimizes resource availability for pollinators, as well as reproductive capacity for the plants themselves. For the restoration location, field margins are the most commonly utilized areas within agricultural landscapes, because they are usually not planted with crop plants and are often long and linear, easing the process of sowing, planting, and weeding. Within crop fields, field margins, and adjacent lands, flowering strips, especially those that include non-legume forbs, are a low-cost method to provide pollinators with floral resources. These flowering strips have been shown to increase the abundance and diversity of native bees and butterflies for at least a single season, often more. If a longer-term restoration is preferred, hedgerows that include woody perennial plants can potentially provide both nesting and floral resources.

Regional habitat restoration strategies for pollinator conservation include the preservation of unmanaged natural habitat and modifications of existing practices on human-managed lands. A number of studies have shown that the preservation of natural habitat within agricultural areas can lead to higher pollinator abundances, richness, and pollination services for adjacent crops. Furthermore, the presence of remnant habitats can be critical for the colonization of recently restored habitats. Human-altered regional habitat can also be used to support pollinator populations, if managed appropriately. Minimizing grazing and cutting of grasslands can increase regional floral resource availability and insect nest site availability. Pasture that is infrequently grazed can provide bee populations with important floral and nesting resources, and the reduction of fertilizer application, in conjunction with reduced grazing, has been shown to provide improved habitat for a number of butterfly species. Whether natural habitat is preserved or human-managed landscapes are modified for pollinator conservation, it is essential to consider the role of habitat restoration in supporting essential regional pollinator dispersal and migration processes, which may vary depending on the pollinator community. A number of spatial simulation models of pollinator restoration have shown that the best habitat restoration design for pollinator persistence and pollination service was strongly influenced by the foraging behavior of the target pollinator species. Thus, restoration should keep in mind the dispersal capacities of target pollinator species. For example, for highly mobile species, the restoration process can consider creating ‘stepping stone’ habitat, whereas dispersal-limited species may need more contiguous linear corridors of high-quality habitat to facilitate movement through inhospitable matrices. In fact, within agricultural settings, plant populations connected by corridors or highly biodiverse matrices have been shown to participate in extensive pollen transfer.
Thus, habitat restoration that facilitates pollinator movement has the potential to support improved pollination services across natural and human-altered landscapes, particularly in light of current and plausible future changes in local and regional weather patterns and climate.

The habitat-restoration strategies discussed in this chapter provide only indirect options for buffering global climate change; however, the act of increasing pollinator abundance and species richness in a community, at the least, increases the probability that a community or population can persist in altered conditions. Increased population densities and gene flow levels usually lead to populations with greater adaptive genetic diversity. These genetically diverse populations are more likely to be composed of individuals genetically more suited to altered habitat conditions.

Blueberry consumption is increasing, which is encouraging increased production

Numerous studies on Arabidopsis and cereal crops have advanced our understanding of starch biosynthesis in leaf and endosperm, and this knowledge has been applied to starch quality improvement in agronomic crops. By contrast, the functions of starch in diverse horticultural crops are poorly understood, though it may play an essential role in their postharvest quality. SBEs largely determine starch composition and function, and there are three major classes of SBEs across cereal and horticultural crops. Compared to the well-studied SBE1 and SBE2, the function of the emerging SBE3 isoform in horticultural crops remains unknown. Although SBE3 has fewer invariant catalytic residues than SBE1 and SBE2, the gene structure of SBE3 is highly conserved, as is the protein secondary structure, including the critical CBM48 module. A unique coiled-coil region may provide SBE3 with a distinctive role in starch metabolism as an ‘accessory protein’ through forming protein complexes with core starch biosynthetic enzymes. SBEs in leafy greens, tubers and roots, and fruits show divergent transcriptional patterns during organ development. The activity of SBEs may influence the postharvest quality of these crops, affecting starch digestibility to sugars and hence starch’s ability to serve as an energy source during storage, thereby affecting shelf-life. The proportion of sugars affects tissue osmotic properties, and if sugar levels are optimal at the crucial stage of postharvest life, this may reduce wilting, thereby boosting the visual appeal of leafy greens. Upon consumption, the proportion of sugars available in fruit vs. that used for respiration, or that remaining as starch, could influence taste, i.e., sweetness and nutritional attributes.

Therefore, modulation of SBEs in the major edible organs of these crops could test these hypotheses, broaden our understanding of tissue- and species-specific starch metabolism, and potentially improve the postharvest attributes of several horticultural crops.

Highbush blueberries, native to the northeastern United States, are an important commercial fruit and are the most planted blueberry species in the world. In the United States, blueberries traditionally have been grown in cooler northern regions; however, the development of new southern cultivars with low chilling-hour requirements has made possible the expansion of blueberry production to the southern United States and California. Blueberry production in California was estimated in 2007 at around 4,500 acres and is rapidly increasing. Common southern cultivars grown include ‘Misty’ and ‘O’Neal’, but other improved southern highbush cultivars are now being grown from Fresno southward, such as ‘Emerald’, ‘Jewel’ and ‘Star’. Southern highbush “low-chill” cultivars are notable for their productivity, fruit quality and adaptation, and require only 150 to 600 chill-hours, making them promising cultivars for the San Joaquin Valley’s mild winters. Since 1998, we have conducted long-term productivity and performance evaluations of these cultivars at the University of California’s Kearney Agricultural Center in Parlier. North American production of highbush blueberry has been increasing since 1975, due to expansion of harvested area and yields through improvements in cultivars and production systems. In 2005, North America represented 69% of the world’s acreage of highbush blueberries, with 74,589 acres producing 306.4 million pounds. Acreage and production increased 11% and 32%, respectively, from 2003 to 2005. The U.S. West, South and Midwest experienced the highest increases in acreage. In 2005, 63% of the world’s production of highbush blueberries went to the fresh market. North America accounts for a large part of global highbush blueberry production, representing 67% of the fresh and 94% of the processed markets.

As a result, fresh blueberries are becoming a profitable specialty crop, especially in early production areas such as the San Joaquin Valley. In general, a consumer’s first purchase is dictated by fruit appearance and firmness. However, subsequent purchases depend on the consumer’s satisfaction with flavor and quality, which are related to fruit soluble solids, titratable acidity, the ratio of soluble solids to titratable acidity, flesh firmness and antioxidant activity. Vaccinium species differ in chemical composition, such as sugars and organic acids. The sugars of the larger highbush blueberry cultivars grown in California are fructose, glucose and traces of sucrose. Lowbush blueberries — which are wild, smaller and grow mostly in Maine — lack sucrose. The composition of organic acids is a distinguishing characteristic among species. In highbush cultivars, the predominant organic acid is usually citric, while the percentages of succinic, malic and quinic acids are 11%, 2% and 5%, respectively. However, in “rabbiteye” blueberries the predominant organic acids are succinic and malic, with percentages of 50% and 34%, respectively, while citric acid accounts for only about 10%. These different proportions of organic acids affect sensory quality; the combination of citric and malic acids gives a sour taste, while succinic acid gives a bitter taste. In addition to flavor, consumers also value the nutritional quality of fresh fruits and their content of energy, vitamins, minerals, dietary fiber and the many bioactive compounds that are beneficial for human health. Fruits, nuts and vegetables are of great importance for human nutrition, supplying vitamins, minerals and dietary fiber. For example, they provide 91% of the vitamin C, 48% of the vitamin A, 27% of the vitamin B6, 17% of the thiamine and 15% of the niacin consumed in the United States. Daily consumption of fruits, nuts and vegetables has also been related to reductions in heart disease, some forms of cancer, stroke and other chronic diseases. Blueberries, like other berries, provide an abundant supply of bioactive compounds with antioxidant activity, such as flavonoids and phenolic acids. For example, a study performed in rats showed that when they were fed diets supplemented with 2% blueberry extracts, age-related losses of behavior and signal transduction were delayed or even reversed, and radiation-induced losses of spatial learning and memory were reduced.

Some studies have shown that the effects of consuming whole foods are more beneficial than consuming compounds isolated from the food, such as dietary supplements and nutraceuticals. Because fruit consumption is mainly related to visual appearance, flavor and antioxidant properties, we decided to evaluate fruit quality attributes, antioxidant capacity and consumer acceptance of the early-season blueberry cultivars currently being grown in California. We characterized the quality parameters of six southern highbush blueberry cultivars grown in the San Joaquin Valley for three seasons, and evaluated their acceptance by consumers who eat fresh blueberries. Field plots. For the quality evaluations at UC Kearney Agricultural Center, we used three patented southern highbush blueberry cultivars — ‘Emerald’, ‘Jewel’ and ‘Star’ — and three non-patented cultivars — ‘Reveille’, ‘O’Neal’ and ‘Misty’. The plants were started from tissue culture and then grown for two seasons by Fall Creek Farm and Nursery in Lowell, Ore. Before planting these cultivars in 2001, the trial plot was fumigated to kill nut grass. Because blueberries require acidic conditions, the plot’s soil was acidified with sulfuric acid, which was incorporated to a depth of 10 to 12 inches with flood irrigation, resulting in a pH ranging from 5.0 to 5.5. A complete granular fertilizer was broadcast-applied at a rate of 400 pounds per acre. The plants were mulched with 4 to 6 inches of pine mulch and irrigated with two drip lines on the surface of the mulch, one on each side of the plant row. Irrigation frequency was two to three times per week in the spring and daily during June and July. The emitter spacing was 18 inches, with each emitter delivering 0.53 gallon per hour of water acidified with urea sulfuric acid fertilizer to a pH of 5.0. The plot received an application of nitrogen in the first season, as well as in subsequent growing seasons: 80 pounds nitrogen per acre at planting, 60 pounds the second year, 90 pounds the third year and 120 pounds the fourth year. Annual pest control was limited to one application of Pristine fungicide in February for botrytis management and two or three herbicide treatments of paraquat. In year three, the plants received one insecticide treatment of spinosad for thrips management. Twenty-eight plants per cultivar were planted in a randomized block design using seven plants per block as an experimental unit, replicated in four rows. Rows were spaced 11 feet apart, with the plants in the rows spaced 3 feet apart and a space of 4 feet between plots. Fruit was harvested at times when it would have been commercially viable had it been in a commercial field. Fruit from each of the seven-plant blocks was harvested, and a composite sample of 80 random berries per replication was used for quality evaluations. Quality measurements. Berries were randomly selected from each replication for quality evaluation at the first harvest time of each season. During the 2007 season, in addition to the initial quality evaluations, harvested berries were stored at 32°F in plastic clamshells and measured for firmness 15 days after harvest and for antioxidant capacity 5, 10 and 15 days after harvest. Three replications per cultivar were measured for each quality parameter. The initial firmness of 10 individual berries per replication was measured with a Fruit Texture Analyzer.
Each berry was compressed on the cheek with a 1-inch flat tip at a speed of 0.2 inch per second to a depth of 0.16 inch, and the maximum value of force was expressed in pounds force. Sixty berries per replication were then wrapped together in two layers of cheesecloth and squeezed with a hand press to obtain a composite juice sample.

The juice was used to determine soluble solids concentration with a temperature-compensated handheld refractometer and expressed as a percentage. Twenty-one hundredths of an ounce of the same juice sample was used to determine titratable acidity with an automatic titrator, reported as a percentage of citric acid. Some samples with high viscosity were centrifuged in a superspeed centrifuge at 15,000 rpm for 5 minutes in order to obtain liquid juice for the soluble solids concentration and titratable acidity measurements. The ratio of soluble solids concentration to titratable acidity was calculated. Antioxidant analysis. Antioxidant capacity was measured in the 2005 and 2007 seasons. Eighteen hundredths of an ounce of berries per replication was used to determine the level of antioxidants by the DPPH free-radical method. Samples were extracted in methanol to assure a good phenolic representation, homogenized using a polytron and centrifuged for 25 minutes. The supernatant was analyzed against the standard, Trolox, a water-soluble vitamin E analogue, and results were reported in micromoles of Trolox equivalents per gram of fresh tissue. Consumer tests. An in-store consumer test was conducted on the ‘Jewel’, ‘O’Neal’ and ‘Star’ blueberry cultivars in 2006, and on all six blueberry cultivars in 2007, using methods described previously. The fruit samples were held for 2 days after harvest at 32°F prior to tasting. One hundred consumers who eat fresh blueberries, representing a diverse combination of ages, ethnic groups and genders, were surveyed in a major supermarket in Fresno County. Each consumer was presented with a sample of each blueberry cultivar in random order at room temperature, 68°F. A sample consisted of three fresh whole blueberries presented in a 1-ounce soufflé cup labeled with a three-digit code. At the supermarket, the samples were prepared in the produce room, out of sight of the testing area. For each sample, the consumer was asked to taste it and then to indicate which statement best described how they felt about it on a 9-point hedonic scale. Consumers were instructed to sip bottled water between samples to cleanse their palates. Consumer acceptance was measured as both degree of liking and percentage acceptance, which was calculated as the number of consumers liking the sample divided by the total number of consumers tasting that sample. In a similar manner, the percentages of consumers disliking and neither liking nor disliking the sample were calculated. Statistical analysis. Quality values and data on degree of liking were analyzed with analysis of variance and LSD mean separation using the SAS program. Blueberry cultivar performance. Production. Among the studied cultivars, ‘Emerald’ and ‘Jewel’ had the highest productivity from 2005 to 2007. However, ‘Star’ had an unexpectedly high productivity in 2007. Yield increases for all cultivars were due to the maturity of the plants. At planting, the tissue-culture plants were 2 years old; as they matured, they all produced larger yields.
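
The two derived quantities used in the quality and consumer evaluations above are simple ratios; the sketch below shows both, with illustrative input values that are not from the trial.

```python
# Minimal sketch of the two derived quality metrics described above.
# Function names and example values are illustrative, not from the study.

def ssc_ta_ratio(soluble_solids_pct: float, titratable_acidity_pct: float) -> float:
    """Ratio of soluble solids concentration (%) to titratable acidity (% citric acid)."""
    return soluble_solids_pct / titratable_acidity_pct

def percentage_acceptance(n_liking: int, n_total: int) -> float:
    """Consumers liking the sample divided by total consumers tasting it, times 100."""
    return 100.0 * n_liking / n_total

# Example: a hypothetical sample with 12.5% SSC and 0.8% TA, liked by 74 of 100 tasters.
print(ssc_ta_ratio(12.5, 0.8))         # 15.625 -> higher values taste sweeter
print(percentage_acceptance(74, 100))  # 74.0
```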

A significant shift in the ratio of F to P marks a recombination breakpoint

The xylem waters sampled in this study provided a series of snapshots of plant water over the course of the growing season at five northern experimental catchments. This resulted in an unusually rich comparative data set allowing a meta-analysis of inter- and intra-site similarities. Some clear findings emerged from this inter-comparison, though many questions remain unanswered. The close link to soil water at each site was apparent from the similar positions of xylem water when plotted in dual isotope space. However, for most sites, much of the xylem water tracked towards lower δ2H and δ18O, plotting below the meteoric water line and below the soil water samples. The sw-excess was shown to be a helpful metric to describe the dynamics of the deuterium offset of xylem waters compared to soil water. For some sites, there was much less or no overlap for gymnosperms or some angiosperms. The results also showed seasonal variations in xylem composition at most sites, although these differed between sites. The plotting positions of xylem water from angiosperms and gymnosperms were quite distinct at some sites, despite some overlap. Apart from Dry Creek, gymnosperms at most sites were more offset from both the LMWL and soil waters than the angiosperms. The operationally-defined boundary polygon analysis provided an objective way of comparing the distribution of the soil and xylem data from the five sites. It is notable that the sites with the greatest general overlap between all sampled angiosperm xylem waters and soil waters are characterised by smaller shrubs and trees. That said, larger trees at Dorset also showed quite a high degree of overlap, especially for more depleted, potentially snowmelt-recharged water sources earlier in the growing season. In contrast, Vaccinium at Krycklan showed little overlap.
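
A minimal sketch of how the sw-excess metric mentioned above could be computed, under the assumption that it is defined analogously to lc-excess: the deviation of xylem δ2H from a line fitted to the site's soil water samples. The isotope arrays here are placeholders, not data from the five catchments.

```python
import numpy as np

# Placeholder soil and xylem isotope values (per mil); illustrative only.
soil_d18O  = np.array([-14.0, -12.5, -11.0, -9.5, -8.0])
soil_d2H   = np.array([-102.0, -92.0, -83.0, -74.0, -66.0])
xylem_d18O = np.array([-12.0, -10.5, -9.0])
xylem_d2H  = np.array([-95.0, -88.0, -80.0])

# Fit the soil water line (slope a, intercept b) by least squares.
a, b = np.polyfit(soil_d18O, soil_d2H, 1)

# Assumed definition of sw-excess: deviation of xylem d2H from the soil water
# line, analogous to lc-excess relative to the local meteoric water line.
sw_excess = xylem_d2H - (a * xylem_d18O + b)
print(sw_excess)  # negative values plot below the soil water line
```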

However, the physiology of smaller plants, with shorter rooting systems, lower internal storage and more rapid water throughput rates, may at least partly explain the greater coherence between xylem water and soil water. Indeed, previous ecohydrological modelling experiments at Bruntland Burn by Kuppel et al., calibrated only on hydrometric data, found quite good agreement between simulated and observed soil water and xylem δ2H values in angiosperms using the spatially distributed EcH2O-iso model. Conversely, the same model failed to simulate the xylem isotopes in gymnosperms. The polygon analysis at most sites also seemed to indicate that overlaps between soil and xylem waters reflected the integrating effects of water sources across the rooting zone, which at most sites was relatively shallow. This is consistent with the conclusions of Amin et al. for northern sites in their global meta-analysis, which found isotopic evidence that cold-region plant water was sourced from shallower depths compared to more temperate and arid regions. Given the groundwater isotope data available at all sites apart from Dorset, there is little evidence that deeper water sources can help explain the xylem samples not potentially related to soil water sources. Furthermore, at Dorset the thin soil cover overlies what seems to be relatively unfractured bedrock. It is possible that some trees have roots that are tapping water held in fractures, but given the geology it is unlikely that there is sufficient storage to sustain a significant fraction of evapotranspiration. It is clear that some of the observed changes in xylem water throughout the growing season are related to phenological changes. This temporal correspondence partly reflects the “switching on” of plants in the spring as photosynthesis and transpiration increase, as well as the availability and isotopic composition of soil water.
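
One way the boundary polygon comparison discussed above could be operationalized is with convex hulls in dual isotope space and their intersection area; a sketch using shapely, with randomly generated placeholder point clouds standing in for the soil and xylem samples.

```python
import numpy as np
from shapely.geometry import MultiPoint

rng = np.random.default_rng(0)
# Placeholder point clouds in dual isotope space (d18O, d2H); illustrative only.
soil_pts  = rng.normal([-11.0, -80.0], [1.5, 8.0], size=(40, 2))
xylem_pts = rng.normal([-10.0, -90.0], [1.5, 8.0], size=(25, 2))

# Operationally-defined boundary polygons as convex hulls of each point set.
soil_hull  = MultiPoint([tuple(p) for p in soil_pts]).convex_hull
xylem_hull = MultiPoint([tuple(p) for p in xylem_pts]).convex_hull

# Fraction of the xylem polygon that falls within the soil water polygon.
overlap = soil_hull.intersection(xylem_hull).area / xylem_hull.area
print(f"{overlap:.0%} of the xylem hull overlaps the soil water hull")
```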

Previous work by Sprenger et al. showed that variations in soil water isotopic composition at the study sites were mainly driven by precipitation and snowmelt over the preceding weeks, although there was also an effect of evaporation on kinetic fractionation of isotope ratios during summer. These dependencies highlight the importance of precipitation frequency and intensity, infiltration, soil wetness and the mixing interactions that govern soil water residence time distributions. The way in which these processes and interactions relate to plant demand highlights the importance of the temporal integration of root uptake and water transport into the main plant stems. The non-stationary travel times from uptake to transpiration may average many months, with tailing in the travel time distribution potentially a result of plant-stored water contributing to transpiration under dry conditions and possible mixing of xylem water with other plant water. The temporal trajectory of the xylem waters varied relative to soil water through the growing season, but this differed between angiosperms and gymnosperms. Inter-site contrasts between the angiosperm and gymnosperm differences were also apparent: for Bruntland Burn, soil and xylem water signals were most similar in spring, deviated more strongly in summer and then returned to greater overlap in autumn for angiosperms. However, this was not the case for gymnosperms, which showed dissimilarity throughout the year. For angiosperms at Dorset, there was a degree of overlap to start with, but depletion increased through summer and the gap closed again in autumn. In contrast, gymnosperm xylem waters became more 2H- and 18O-enriched. At Dry Creek, there was a large difference through the autumn and winter for both angiosperms and gymnosperms until spring, but compositions became increasingly similar in summer. At Krycklan, angiosperms were most similar in the spring and early summer, but became increasingly different as the summer progressed. At Wolf Creek, there was an offset at the beginning of spring but samples then increasingly converged.

This post-winter offset, also evident at Dry Creek, may relate to desiccation and/or diffusion within the plant during the biologically inactive period. Inclusion of longer antecedent periods for soil isotope data generally improved overlaps within the boundary polygons for most sites, especially for angiosperms. The “sampling window” over which soil water may have been a source for plant uptake and contributed to xylem water in the trunk at breast height is unknown, and is likely to be non-stationary given seasonal variations in soil moisture and plant physiology. However, the greater overlaps for the longer antecedent period would support the hypothesis that xylem water at any point in time represents an integrated sample of soil water accumulated over preceding months, rather than soil water on the sampling day, which will be most influenced by the most recent rainfall. In this sense, the results are similar to those of Allen et al., who demonstrated that trees throughout Switzerland predominantly use soil water derived from winter precipitation for summer transpiration. In our study, however, findings across sites and plant species were not consistent. Regardless, results from both studies suggest that caution should be used when constructing conceptual models of how plants access soil water based on synoptic, space-based sampling. Our phenologically-timed sampling strategy, particularly at such high-latitude sites, is novel. However, more frequent sampling would likely be advantageous, providing more nuanced insights into the phenological controls and short-term dynamics of xylem isotopes, particularly in relation to short-term soil moisture dynamics and periods of higher atmospheric moisture demand. Nevertheless, higher-frequency sampling will still likely show that the xylem samples indicate stronger fractionation, which has been widely shown for many vegetation types around the world. This focuses attention on potentially fractionating processes linked to small-scale interactions at the root-soil pore interface, especially close to the soil surface where most fine roots are present and where labile nutrients are also highest in acidic, organic soils. However, methodological issues may at least partly explain some of the difference. These are discussed in the following section. Dry Creek stands out as an anomalous site in many results, most of which can be explained by its warm, dry conditions and high seasonality. Wolf Creek, however, the coldest site, shares similar results. The two sites obscure an otherwise clear relationship between plotting position along the GMWL and mean annual temperature; they show the most overlap between xylem and soil water isotopes in bulk and at various depths, and they have the most negative lc-excess values for both xylem and soil water. They also have the lowest May–August relative humidity, at 38% and 63%, as well as precipitation, at 19 mm and 44 mm, for Dry Creek and Wolf Creek, respectively. The relatively dry conditions shared by both sites expose soil waters to sustained evaporative environments, which may cause hydro-patterning of roots. Roots grow where water is available, which tends to be in less conductive pores where water has longer residence times and likely more isotopic fractionation due to evaporation.
This evaporatively-enriched soil water also has limited potential for mixing with isotopically-different incoming precipitation that would alter its isotopic composition, partly because the growing-season precipitation at these sites is low.

Accordingly, plant roots in dry environments have fewer soil water source options, so xylem water and bulk soil water will trend towards similar isotopic compositions. Recent research shows that various complex biophysical processes in the soil-plant-atmosphere continuum may help explain why xylem water at the VeWa sites cannot be fully explained by soil water sources. As noted above, one possibility is that exchange between the soil liquid and vapour phases is complex and may affect root water uptake. This may occur either through roots being able to access a fractionated vapour phase and/or through condensation onto soil surfaces from the soil atmosphere, increasing the likelihood that plants take up water depleted in heavier isotopes, especially deuterium. Both recent field and modelling studies have highlighted the plausibility of such mechanisms, but mechanistic studies to test this hypothesis are limited and urgently needed. Similarly, the complex interactions in the symbiotic relationship between mycorrhiza and plant roots may cause uptake of more 2H- and 18O-depleted water compared to bulk soil water. In particular, the widespread arbuscular mycorrhizal fungi which penetrate the cortical cells in the roots of vascular plants may be an effective mechanism that can facilitate fractionation of root water uptake. This occurs as part of the complex symbiosis of nutrient exchange that also affects plant-water relationships and is focused in the upper soil horizons. Such mycorrhizal interactions are particularly important in nutrient-poor minerogenic northern soils, and may have strong effects at sites like Bruntland Burn, Dorset and Krycklan. Again, more specific process-based studies are required to test this hypothesis in contrasting soil-plant systems. Finally, diffusion and evaporation through bark may be important biophysical processes, especially during winter when there is no transpiration. This is potentially a factor in northern regions where winter conditions preclude transpiration but can expose vegetation to arid conditions with high wind speeds and low humidity, as at sites like Dry Creek and Wolf Creek. Isotope transport through bark may explain why the gymnosperms at Dry Creek showed much greater overlap with the isotopic composition of soil water sampled over a range of antecedent intervals in spring compared with Bruntland Burn, Dorset and Krycklan, where there was very little overlap. However, this inter-site difference was less pronounced for angiosperms. Extraction of vegetation and soil water: We do not fully know what kind of vegetation water is mobilized by cryogenic extraction, although it is usually assumed to characterise xylem water. However, it is likely that some of the extracted water is part of live cells subject to potentially fractionating biophysical processes that are independent of the hydrological cycle. Zhao et al. saw large differences between xylem sap, extracted with a syringe, and twig water extracted via cryogenic extraction, with the former being more enriched in 2H than the latter. In such cases, differences in the ratio of cell water to xylem water, which would depend on soil wetness, could have an effect on the differences between the isotopic composition of plant water and cryogenically extracted water. Barbeta et al. support this interpretation and call for more specific characterisation of what is assumed to be extracted xylem water. Very recent experimental work by Chen et al.
showed that cryogenic extraction can enhance deuterium exchange with organically bound water and contribute to the deuterium depletion. Moreover, they showed the effect can be greatest under more moisture-limited conditions, which may explain the tendency for more negative sw-excess values as sites become drier. Physiological and biochemical differences between angiosperms and gymnosperms may also contribute to differences in extraction effects. As with vegetation water extraction, differences between contrasting soil extraction techniques may explain some of the mismatch between observed xylem water and soil sources. For example, the similarities between soil and xylem water at Dry Creek involved cryogenic extraction of soils, whereas all other sites used equilibration.

The experiment was performed in triplicate within each of three biological replicates

Does S. carpocapsae prefer or naturally navigate towards milkweed roots or milkweed-feeding insects by using CGs or other chemicals as cues? Drosophila was previously shown to be susceptible to all three of these EPN species. Larvae were fed a non-toxic diet or a diet containing the purified CG ouabain. We chose the hydrophilic CG ouabain since we could deliver millimolar levels of this CG into this non-CG-sequestering insect via its diet, mimicking the high CG levels that can be found in the hemolymph of monarch caterpillars, without needing levels of a solvent such as DMSO that would have adverse effects on insects and nematodes. Fly food containing ouabain was created by preparing Nutri-Fly food packets in a flask. Once the food had cooled, ouabain was added to the flask to a final concentration of 15 mM. Fly food was poured into vials, allowed to cool and then stored at 4 °C. Non-toxic fly food, prepared as described above without the addition of ouabain, served as the control. Six to eight adult males and six to eight adult females were placed into each vial and allowed to mate for 3–4 days. Adults were removed and larvae were left to hatch. Twelve-well plates were prepared with filter paper or NGM agar and one second-instar fly larva in each well. Twenty EPNs in 10 µL of water were then added to the wells with fly larvae that were feeding on either non-toxic or ouabain-containing food. Plates were covered with parafilm. Larval mortality was recorded at 2, 12, 24, and 48 h post infection to assess whether CGs influence the ability of EPNs to kill their insect hosts. The experiment was performed in triplicate. Asclepias curassavica seeds were germinated in seedling trays containing organic planting mix. Seedlings were maintained in a growth chamber at 26 °C with a 16 h light / 8 h dark phase at a light intensity of 200 µmol m−2 s−1 until the first true leaf was observed.
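
As a worked check of the ouabain diet described above, the mass needed for a target molarity follows from the batch volume and molar mass; both values below are assumptions for illustration (anhydrous ouabain is roughly 584.7 g/mol; the octahydrate would be closer to 728.8 g/mol).

```python
# Illustrative dose arithmetic for the 15 mM ouabain food; the batch volume
# is hypothetical and the molar mass is an assumed literature value.
MOLAR_MASS_OUABAIN = 584.7  # g/mol, assumed (anhydrous)

def grams_for_molarity(molarity_mM: float, volume_mL: float) -> float:
    """Grams of solute needed to reach molarity_mM in volume_mL of food."""
    return (molarity_mM / 1000.0) * (volume_mL / 1000.0) * MOLAR_MASS_OUABAIN

# e.g. a hypothetical 100 mL batch at 15 mM:
print(round(grams_for_molarity(15.0, 100.0), 3))  # ~0.877 g
```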

Seedling roots were then thoroughly rinsed with water to eliminate soil particles. Subsequently, the roots were flash-frozen in liquid nitrogen and pulverized into a fine powder using a mortar and pestle. The resulting powdered root tissues were then dissolved in three volumes of 5% methanol solution and centrifuged at 10,000 g for 10 min. The resulting supernatant was carefully removed, and the pellet was suspended in H2O before being stored at 4 °C until further use. Chemotaxis plates were set up according to a previously published protocol. Chemotaxis agar media was poured into small Petri dishes. Then, using a pipette, a small crater-like shape was created in the middle, forming a higher level of agar referred to as the volcano deck. IJs were exposed to wax worm host cuticle for a duration of 20 min to allow host stimulation, which allows for higher participation rates of EPNs. A quantity of 20 µL of root extract was placed onto the volcano deck, followed by the addition of 4 µL of the paralytic agent sodium azide at 0.5 M. The NaN3 solution was made by adding 500 µL of 1-M stock to 500 µL of milliQ water. The paralytic agent was used to visualize the EPNs’ initial directional movement. Subsequently, a suspension containing 100–200 IJs in 20 µL of H2O was carefully dispensed around the perimeter of the deck slope. Plates were then stacked into groups of three in opposite orientations, placed in a box with a lid on a vibration-resistant platform and stored in the dark for 24 h. After 24 h, the numbers of IJs on and below the deck, as well as the number of IJs that displayed a coiling phenotype, were recorded. The experiment was performed in triplicate. Sand was autoclaved and then washed repeatedly with tap water followed by DI water. The assays were performed in olfactometers, the setup of which has been described previously.

The measurements of the pipes and glass were as follows: each tube measured 9 cm in diameter and 5 cm in length, the pipe was 7 cm in length and 6 cm in height, and the entire set-up was 15 cm in length. Sand was dried at 60 °C overnight and moistened to 12% with tap water. Filter paper was spotted with 20 µL of tap water as a control or with 20 µL of test solution, and then placed at the end of the tubes. A consistent weight of 28 g of sand per tube was used for each replicate and trial. Root extracts were protected from light. Prenol is a known repellent for EPNs and served as a negative control test solution. A 2-M solution of prenol was made by adding 203 µL of prenol to 797 µL of DI water. Asclepias curassavica milkweed root extracts were prepared using the method described above. The region near the middle of the olfactometer consisted of sand moistened to 12% with tap water, the same as the conditions on the control side of the olfactometer, to ensure that no biases were introduced. One thousand IJs in 100 µL of H2O were carefully dispensed into the center of the olfactometer. IJs were collected from fresh white traps followed by host stimulation prior to each experiment. For host stimulation, three wax worms were placed on Petri dishes with filter paper and the IJs were allowed to interact with host cuticle for 15–20 min before being collected and used. Each olfactometer was placed horizontally on a foam pad to suppress any vibrations. These were then placed in the dark in random orientations to avoid any potential directionality biases. Each assay ran for 24 h before the caps were removed from each side of the olfactometer separately. Nematodes were collected using the Baermann funnel technique and then counted. Each replicate had three biological replicates for each condition and each EPN species. Choice percentages were calculated by counting the number of nematodes in the control area or the test area, dividing these by the total number of IJs inoculated, and multiplying the result by 100.
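
A minimal sketch of the scoring just described: choice percentages from zone counts, followed by the chi-squared comparison with the control count as the expected value and the test count as the observed value. The replicate counts are invented, and the single degree of freedom is our assumption.

```python
import numpy as np
from scipy.stats import chi2

def choice_percent(n_in_zone: int, n_inoculated: int) -> float:
    """Nematodes counted in a zone divided by total IJs inoculated, times 100."""
    return 100.0 * n_in_zone / n_inoculated

# Hypothetical per-replicate counts (1,000 IJs inoculated per olfactometer).
control = np.array([120.0, 95.0, 140.0])   # recovered from the control side
test    = np.array([310.0, 280.0, 355.0])  # recovered from the test side

print([choice_percent(n, 1000) for n in test])  # [31.0, 28.0, 35.5]

# Chi-squared on the replicate averages: control average as expected,
# test average as observed (df=1 assumed for a single obs/exp pair).
obs, exp = test.mean(), control.mean()
stat = (obs - exp) ** 2 / exp
print(stat, chi2.sf(stat, df=1))
```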

The experiment was performed in triplicate within each of three biological replicates. The figures were graphed using means across biological replicates, and a chi-squared analysis was conducted on the average of each replicate, with the number in the control treatment serving as the expected value and the number in the test treatment serving as the observed value. Chemotaxis plates were prepared as described previously: 17 g agar was dissolved in 1,000 mL dH2O and autoclaved for 30 min; this was followed by the addition of 5 mL filtered potassium phosphate buffer, 1 mL filtered MgSO₄, and 1 mL filtered CaCl₂. Plates were left at room temperature for 12 h before the experiment. EPNs were collected from fresh white traps to ensure healthy IJs were used in assays. IJs were collected and washed with DI water before 500 µL of IJ suspension at a density of one IJ per µL was placed onto a wax worm. IJs were given 20 min of contact with the host cuticle for host stimulation. They were then collected and kept at a density of eight IJs per µL, ready to be used for chemotaxis assays. A 2-M solution of the EPN repellent prenol was made by adding 203 µL of prenol to 797 µL of DI water. A tetrahydrofuran solution was made by adding 14.4 µL of THF to 985.6 µL of DI water. THF is a known attractant for S. carpocapsae and S. feltiae. A 0.5-M solution of the paralytic NaN3 was made by adding 500 µL of 1-M stock to 500 µL of milliQ water. Ouabain solutions of 15 mM or 100 µM were made by dissolving ouabain in DI water with 0.3% DMSO. Templates for chemotaxis assays were printed and placed under each chemotaxis plate. On the test side, 5 µL of chemical solution was placed in the test circle. On the control side, 5 µL of DI water with 0.3% DMSO was placed in the control circle. Chemicals were given 15–20 min to diffuse. Then, 2 µL of 0.5 M NaN3 was placed in each scoring circle. A 15-µL suspension of IJs at a density of 5 IJs per µL was placed in the center of the plate, containing a total of 70–170 nematodes. Plates were then stacked into groups of three in opposite orientations, placed in a box with a lid on a vibration-resistant platform and stored in the dark. The assay was run for two hours, after which data were recorded on where nematodes were found: the test side, the control side or the middle. Choice percentages were calculated by counting the number of nematodes in the control area or the test area, dividing these by the total number of IJs inoculated, and multiplying the result by 100. The figures were graphed using means across biological replicates, and a chi-squared analysis was conducted on the average of each replicate, with the number in the control area serving as the expected value and the number in the test area serving as the observed value. Plant functional traits have proved useful in identifying life history strategies, for predicting plant community assembly and for assessing the impact of vegetation composition and diversity on ecosystem functioning. Consequently, vegetation models, including coupled climate–vegetation models, benefit from a better representation of plant trait variation to adequately analyse terrestrial biosphere dynamics under global change.

Today, in combination with advanced gap-filling techniques, databases of plant traits have sufficient coverage to allow quantitative analyses of plant form and function at the global scale. Analysing six fundamental traits, Díaz and colleagues revealed that essential patterns of form and function across the plant kingdom can be captured by two main axes. The first reflects the size spectrum of whole plants and plant organs. The second axis corresponds to the ‘leaf economics spectrum’ emerging from the necessity for plants to balance leaf persistence against plant growth potential. The concept of a global spectrum of plant form and function has since been investigated from various perspectives. It has been shown, for instance, that orthogonal axes of variation in size and economics traits emerge even in the extreme tundra biome or at the scale of plant communities. However, it remains unclear whether the two axes remain dominant for extended sets of traits or when differentiating among growth forms. A particular knowledge gap is what environmental controls determine these two axes of plant form and function. There is ample evidence that large-scale variation of individual plant traits is related to environmental gradients. Early plant biogeographers suggested that climate and soils together shape plant form and function, but could not propose a more precise theoretical framework describing these fundamental relationships. Over the last decades, examples have thus accumulated without an overall framework in which to place them. For instance, tree height depends on water availability, while leaf economics traits depend on soil properties, especially soil nutrient supply, as well as on climatic conditions reflected in precipitation. Leaf size, leaf dark respiration rate, specific leaf area, leaf N and P concentration, seed size and wood density all show broad-scale correlations with climate or soil. It has also been reported that many of these traits show latitudinal patterns. Generalizing such insights is, however, not trivial, as soil properties partly mirror climate gradients as a consequence of long-term soil formation through weathering, leaching and accumulation of organic matter—processes related to temperature and precipitation; however, climate-independent features reflecting geology and surface morphology also contribute to soil fertility. Soil may furthermore buffer climate stresses, for example by alleviating water deficit in periods of low precipitation. Combining the insights that the global spectrum of plant traits comprises two internally correlated, orthogonal groups and that many plant traits are individually linked to environmental gradients, we expect both trait groups to closely follow gradients of climate and soil properties. Here, we investigate to what extent the major dimensions underpinning the global spectrum of plant form and function can be attributed to global gradients of climate and soil conditions, and to what extent these factors can jointly or independently explain the global spectrum of form and function.
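
Axes of trait variation like the two described above are typically recovered by PCA of log-transformed, standardized trait values; a minimal sketch on a hypothetical species-by-trait table (the six-trait layout follows Díaz and colleagues, but the values here are random placeholders).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Hypothetical species x trait matrix (e.g. plant height, stem density, leaf
# area, leaf mass per area, leaf N per mass, seed mass); right-skewed values.
traits = rng.lognormal(mean=0.0, sigma=1.0, size=(500, 6))

# Traits are log-transformed and standardized before ordination.
X = StandardScaler().fit_transform(np.log(traits))

pca = PCA(n_components=2).fit(X)
print(pca.explained_variance_ratio_)  # variation on the first two axes
print(pca.components_)                # trait loadings on each axis
```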

The colonies grown on these plates were used for primer quality control

Host 1 samples also exhibited some variation in the presence of Prevotella OTU006, whereas Host 2 and Host 3 samples did not. Concern over the validity of PCA on relative abundance data prompted us to perform PCA again on transforms of the relative abundance data. For this part of the analysis, we used both the centered log-ratio (CLR) transformation and the isometric log-ratio (ILR) transformation. Figure 31 shows the PCA results on the CLR transform of the non-rarefied relative abundances. Compared to the clustering before CLR transformation, samples from Hosts 2 and 3 spread out much more, while samples from Host 1 clustered together more tightly. Color-coding samples by preservation conditions revealed no apparent trend, as was the case without CLR transformation. The Veillonella OTU still contributed greatly to the separation of Host 1 samples from Host 2 and Host 3 samples. However, after CLR transformation, Prevotella OTU006 appeared much more influential in accounting for the differences between Host 1 and Hosts 2 and 3. PCA on the ILR transformation of relative abundances yielded similar groupings, albeit with different scaling and different directions for the individual components. In the PCA results of the ILR transform, two components helped account for the differences observed between Host 1 samples and Host 2 and Host 3 samples, while other components accounted for most of the remaining sample variation. In the PCA results of both the ILR and CLR transformations, the first two principal components together accounted for 57% of the total sample variation.
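
A minimal sketch of the CLR-then-PCA step described above: add a pseudocount, take logs, center each sample by its mean log abundance, then ordinate. The count table is a random placeholder, and the ILR variant could be substituted via, for example, scikit-bio's composition module.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
counts = rng.integers(0, 500, size=(24, 30)).astype(float)  # samples x OTUs, placeholder

def clr(mat: np.ndarray, pseudocount: float = 0.5) -> np.ndarray:
    """Centered log-ratio: log of each part minus the sample's mean log value."""
    logm = np.log(mat + pseudocount)
    return logm - logm.mean(axis=1, keepdims=True)

pca = PCA(n_components=2)
scores = pca.fit_transform(clr(counts))
print(pca.explained_variance_ratio_.sum())  # variation captured by two components
print(scores[:3])                           # per-sample coordinates
```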

As an additional quantitation of the extent of the influence exerted on the sample compositions by different preservation conditions and different hosts, we conducted ANOSIM significance tests with Bray-Curtis distance measures. We found that the relative abundance differences were not greatly influenced by preservation condition, as evidenced by the low correlation coefficients of 0.05505 and 0.1287 for non-rarefied and rarefied relative abundances, respectively. The compositional differences, as we had already observed in PCoA and PCA, were apparently influenced to a much greater extent by host differences, with correlation coefficients of 0.3365 and 0.3147 for non-rarefied and rarefied relative abundances, respectively. Hence, we numerically confirmed that host-based variation contributed the most to the observed differences across all samples. In the last two decades, more and more effort has been devoted to researching the effect of various preservation methods on complex microbiome samples. Particularly close attention has been paid to the gut microbiota, as the therapeutic potential of faecal microbiota transplantation has increased. Many ways of preserving both natural and artificial human gut microbiota have been investigated, including cryopreservation, lyophilization and long-term storage in commercial storage media. Unlike the gut microbiome, however, the preservation of the oral microbiome does not seem to have received nearly as much attention, despite our increasing understanding of its crucial role in human health and disease. Until five years ago, little had been explored regarding the effects of storage methods on the stability of native human oral communities, let alone the stability of in vitro models.
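
The ANOSIM comparison described at the start of this passage can be reproduced along these lines with scikit-bio; the data and grouping labels below are placeholders, and the statistic reported by ANOSIM is its R value, presumably the "correlation coefficient" referred to above.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from skbio import DistanceMatrix
from skbio.stats.distance import anosim

rng = np.random.default_rng(3)
rel_abund = rng.dirichlet(np.ones(30), size=18)        # samples x OTUs, placeholder
hosts = ["host1"] * 6 + ["host2"] * 6 + ["host3"] * 6  # grouping variable

# Bray-Curtis distances between samples, then ANOSIM against the host grouping.
dm = DistanceMatrix(squareform(pdist(rel_abund, metric="braycurtis")))
result = anosim(dm, grouping=hosts, permutations=999)
print(result["test statistic"], result["p-value"])
```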

It was one of our goals in this set of experiments to begin probing the effects of refrigeration and glycerol-assisted cryopreservation on communities derived from healthy hosts and generated in an in vitro environment. For this set of experiments, we chose an incubation time of 72 hours based on the results from the temporal experiments, where we observed a transition from dominance by the Streptococcus genus to dominance by the Veillonella genus. This incubation time seemed a good middle ground for capturing as many core members of the native oral bacterial community as possible without excessive internal contamination. Unlike in the temporal experiments, however, the relative abundances of the 72-hour cultures were not inclined toward Veillonella OTUs, except for Host 1. In fact, the Streptococcus OTU remained dominant in Hosts 2 and 3 throughout the preservation and propagation processes, while dominance in Host 1 samples oscillated between Streptococcus and Veillonella OTUs, except for one set of propagated samples that contained much higher proportions of Prevotella than the others. It is not entirely clear whether preservation monotonically decreased or increased the relative abundance of any single OTU. What is clear is that the combination of culturing and preservation procedures seemed to drive the community toward what we termed an “attractor” composition unless a substantial presence of the Veillonella genus already existed. In cultures with visible Veillonella presence, the relative abundances after preservation and propagation varied quite greatly, even within the same host and the same preservation conditions. In terms of the effect of preservation on community composition, we observed that preservation alone did not lead to drastic changes in the relative abundances of the initial culture for any host. Members of the Streptococcus genus seemed to respond particularly well to glycerol-assisted cryopreservation as well as refrigeration, as evidenced by the relatively minor changes in their abundances before and after preservation.

The Veillonella OTUs seemed to respond less well, as their relative abundances decreased upon propagation. Perhaps members of the Veillonella genus are less robust toward environmental changes, and the consequent decrease in the absolute biomass of the Veillonella OTUs in these experiments helped emphasize the increase in the relative abundances of Prevotella and Streptococcus OTUs. In any case, we clearly see in Figure 29 that in all hosts, community compositions pre- and post-preservation were remarkably similar. The preservation conditions we chose largely retained the way OTUs were distributed in each sample. Thus, these conditions would be valuable for assessing community compositions in experiments where immediate processing may not be possible. On the other hand, the propagation of preserved cells seemed to preferentially select for Streptococcus OTUs, perhaps because this genus already occupied somewhat high proportions of the initial cultures. Furthermore, it seemed that, at least in Hosts 2 and 3, the Veillonella OTU did not respond as robustly to preservation as the Streptococcus OTU, hence contributing to the increase in relative abundance of at least one Streptococcus taxon. Another very plausible explanation for the shift to the Streptococcus genus is that the propagation was simply not long enough – we chose to incubate the preserved samples for 48 hours instead of the 72 hours used for the initial cultures, and the difference of 24 hours might have allowed us to observe a rise in the relative abundance of Veillonella in the propagated cultures. However, we cannot conclude that increasing the incubation time would indubitably allow us to see such a shift, especially in light of the observation that the relative abundances of the Veillonella OTUs had already begun to increase noticeably by the 48-hour mark in the temporal experiments for all three hosts. By contrast, only the propagated cultures in Host 1 showed an observable presence of Veillonella OTUs, and only one pellet from the 1.5-week cryopreservation retained the presence of this genus. The differences between the relative abundances of the initial/preserved cultures on one hand and the propagated cultures on the other imply that different taxa respond differently to preservation, i.e., the number of viable cells after preservation differs for different OTUs, even if the cells were still intact and their DNA could be extracted. It is also possible that a few procedural changes contributed to the absence of Veillonella, including the aspiration of approximately 1.5 mL of liquid from the wells during incubation, a step we introduced only in the preservation experiments and not in the temporal experiments. This step, an attempt to minimize internal contamination, could have essentially removed a means by which the sedimented culture was re-inoculated during incubation. These factors and more may be worth investigating in future experiments should we aim to produce propagated communities with compositions similar to those of the initial and preserved communities.

As for the composition of the attractor community and its relationship with different OTUs, two Streptococcus OTUs and one Veillonella OTU seemed to sit at the center of the attractor. Interestingly, the Prevotella and Alloscardovia taxa persisted through both preservation and propagation in Host 1, implicating their roles in a different attractor composition, perhaps one that is more developed than the attractor observed in Host 2 and Host 3. The presence of these two taxa may hold special significance for human health, given that members of both taxa have been linked to diseased states in the oral cavity. Perhaps the composition of the attractor community changes as the environment is primed for later colonization, potentially by pathogenic species. There has certainly been evidence that organisms of the Prevotella genus may be dependent on other organisms such as those in the Fusobacterium genus, which have also been shown to coaggregate with Veillonella and have been implicated in oral diseases. Whether the taxa unique to Host 1 cultures would truly compose part of a developed or separate attractor community would need to be investigated further in future experiments. As for the principal components that contribute to the total variation in the data set, neither of the log-ratio transformations eliminated underlying biological correlations or averaged out real biological differences. What the transformations did was to mitigate some of the positive bias seen in the PCA results of untransformed relative abundance data. The log-ratio transformations yielded PCA results similar in kind to those from PCA of the untransformed relative abundance data. The separation of Host 1 samples from Hosts 2 and 3 persisted across both transformations, and the major components that contributed to host-based differences – the Streptococcus, Veillonella, and Prevotella OTUs – remained mostly the same before and after transformation. However, the degree of separation was much diminished post-transformation, and contributions from smaller but still important components, such as the Alloscardovia OTU and an additional Streptococcus OTU, surfaced upon transformation. Furthermore, the CLR-transformed PCA clearly showed that preservation conditions did not fundamentally influence compositional differences, whereas this lack of influence was not entirely evident in the untransformed PCA. Rather than preservation conditions, it may be the differential organismal responses to these conditions that exerted the greatest influence on sample variation, and the differential responses may well be connected to microbial interactions – perhaps ones similar to those between Veillonella and Streptococcus species or between Streptococcus and Actinomyces species – that affect the robustness of an organism toward the low-temperature, desiccation, or nutrient-depletion stresses that occur during preservation. The interaction-related responses would then fundamentally depend on the community composition before preservation, just as salivary Veillonella species depend on a specific strain of Streptococcus for coaggregation. We will attempt to investigate composition-based differential responses to preservation in the next phase of this project. Returning to a point made in a previous section, we observed in the relative abundances of the mock microbial community that the DNA extraction process seemed to generally favor Gram-negative bacteria, particularly E. coli and S.
enterica, while the sequencing process seemed to favor B. subtilis at the cost of P. aeruginosa. These results underscore the importance of choosing a proper bacterial strain should we ever revisit bacterial cell spike-ins for quantitation purposes. Ideally, the spike-in organism would be non-oral, Gram-positive, and related to neither B. subtilis nor P. aeruginosa. Furthermore, even if we do not use a spike-in, we should strive to understand the biases in the extraction, amplification, and sequencing steps for different oral microbes. We may need to start the process by examining the extraction efficiencies of single-cell cultures, or in the absence of such a possibility, of co-cultures with known or easily characterizable strains. These efficiencies could then be used to mathematically correct relative abundance data, though ensuring the validity of this approach requires extensive proof of repeatability from one extraction-amplification-sequencing trial to the next. The current dearth of research regarding the preservation of oral microbes may have originated from a perceived lack of need. Since facile identification of oral microbes was difficult until high-throughput sequencing became viable, storing complex microbial communities for model-building and future study would not have been a reproducible or efficient approach. One of the few examples of examining the effect of preservation on oral bacteria compared both storage and transportation methods for human supragingival dental plaque. The results showed that freezing dental samples in transport media without cryopreservation reagents led to no substantial differences between 48- and 72-hour storage for either S. sanguinis or S. mutans, though the survival rates of viable bacteria in frozen samples were predictably much lower than those in samples stored at temperatures above freezing.
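
The mathematical correction of relative abundances suggested above could take a simple form: divide each taxon's reads by an independently measured recovery efficiency and renormalize. The efficiency values below are invented for illustration.

```python
import numpy as np

# Placeholder read counts for four taxa and invented per-taxon recovery
# efficiencies (reads recovered per input cell, relative to a reference taxon).
reads      = np.array([5200.0, 3100.0, 800.0, 150.0])
efficiency = np.array([1.00, 0.85, 0.40, 0.25])  # Gram-positives often under-recovered

corrected = reads / efficiency            # undo the assumed extraction bias
corrected_rel = corrected / corrected.sum()
print(corrected_rel.round(3))             # bias-corrected relative abundances
```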

The opinion of the research community on rarefying microbiome data seems rather divided

Cultures cluster close to one another, while liquid samples show large inter-sample variation both before rarefaction and after. While rarefaction changed both the total and coordinate-specific amounts of variation, it did not do so to a remarkable extent – 97.2% to 97.5% and 89.4% to 88.5% for total variation in spiked and unspiked samples, respectively, and below 5% for individual coordinate axes in all cases. On the other hand, removing the spike-in OTU from the read counts did substantially change the clustering as well as the x/y positioning of both the liquids and cultures. Even a cursory comparison of the left column in Figure 15 to the right column in the same figure shows that the E. coli OTU affected the apparent similarities of the samples shown by PCoA. A few finer points should be made here about this removal. First, removing this OTU pulled the liquids and cultures together; whereas the first coordinate spanned from -0.5 to 0.6 before removal, it spans from about -0.3 to 0.6 afterwards, indicating a tighter grouping for all samples. Second, as might be expected, removing this OTU allowed the compositional variations in the liquid samples to surface, whereas the liquids were clustering according to whether they had received the spike-in before removal. The liquid samples were clearly more compositionally diverse than the cultures, as evidenced by the wider horizontal spread of the triangles in Figures 15b and 15d. Third, removing the E. coli reads did not greatly affect the grouping of the cultures. Culture samples still fall within 0.2 units of one another on both coordinate axes, with the one exception of a culture spiked with 100 µL of E. coli. Fourth, after removal of the E. coli OTU, neither liquids nor cultures cluster with visibly discernible patterns that group with spike volumes anymore, whereas the grouping of samples by spike volume was apparent before removal.
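
The before/after comparison above amounts to dropping the spike-in OTU column and re-running the ordination; a sketch with scikit-bio, on placeholder data with the E. coli spike-in assumed to be column 0.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from skbio import DistanceMatrix
from skbio.stats.ordination import pcoa

rng = np.random.default_rng(4)
counts = rng.integers(0, 2000, size=(12, 20)).astype(float)  # samples x OTUs, placeholder
spike_col = 0  # assume the E. coli spike-in OTU is column 0

def bray_curtis_pcoa(mat):
    """Relative abundances -> Bray-Curtis distances -> principal coordinates."""
    rel = mat / mat.sum(axis=1, keepdims=True)
    return pcoa(DistanceMatrix(squareform(pdist(rel, metric="braycurtis"))))

with_spike    = bray_curtis_pcoa(counts)
without_spike = bray_curtis_pcoa(np.delete(counts, spike_col, axis=1))

# Compare how much variation the first two coordinates capture in each case.
print(with_spike.proportion_explained[:2].sum(),
      without_spike.proportion_explained[:2].sum())
```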

Overall, these clusters and their disappearances fall well within expectation, considering that the spike-ins were much higher in biomass than the liquids but much lower than the cultures. The difference in biomass between the liquids and cultures inevitably made the liquids more susceptible to similarity/dissimilarity influences from the spike-in. For both liquids and cultures in the preliminary experiments, it seems that, as expected, no distinct groupings based on inherent compositional dissimilarities can be observed. It is also not clear whether the variations in the liquid samples come, in large part, from the low numbers of read counts after E. coli removal, as even a few reads in a low-read-count sample could lead to seemingly large inter-sample differences. In any case, the total percentage of variation accounted for by PCoA here falls between 89% and 98%, indicating that two axes were sufficient for this set of samples. Interestingly, removing the E. coli OTU decreased the total percentage of variation accounted for by more than 8%, once again underscoring the sway that the spike-in had on liquid samples. From the compositional analyses, we see that cultures in the preliminary experiments yielded sufficient biomass and contained dental plaque bacteria, without exhibiting unexpected similarities or dissimilarities among themselves or with the liquids above the sedimented cultures. As to what the principal coordinates represent, i.e., what underlying biological differences may have led to two coordinates being sufficient, we would need to adopt a different analytical approach, which we do in the next stage of the project. In these preliminary experiments, we established a culturing procedure that minimizes external contamination while producing high numbers of viable cells from the human oral/dental bacterial community. Compositional analysis of the cultures showed that the OTUs with the highest relative abundances belong to the Neisseria, Streptococcus, and Veillonella genera, two of which have been shown to be early and middle colonizers of the oral microbiome and all three of which have been shown to be core genera in the supragingival plaque community.

The prevalence of OTUs from commonly occurring oral bacterial genera confirms that the culturing conditions support the growth and proliferation of anaerobic oral microbes without resorting to traditional, closed-form anaerobic culturing techniques such as anaerobic agar. Not many members of the group of later colonizers were present in the cultures in the preliminary experiments, though an Eikenella OTU was cultivated in the in vitro oral community to the extent of having a visible relative abundance value. Previous research has shown that members of this OTU belong to groups of later colonizers that also include Actinomyces spp., Capnocytophaga ochracea, Propionibacterium acnes, and Haemophilus parainfluenzae. In this case, the absence or low abundance of later colonizers is not surprising, given that the cultures were incubated for less than 24 hours and not replenished with fresh host plaque. The short incubation time and lack of re-inoculation reflect the well-documented tendency of in vitro conditions to select for organisms that can survive without the rich and complex environment of the original host. For bacteria derived from humans, this holds even more strongly, because it is unfeasible to replicate the human oral cavity: the complexity of host-microbe interactions simply defies reproduction in the lab. Another aspect to consider regarding the lack of later colonizers in these cultures is that membership of the oral bacterial community can vary greatly across different hosts. Kolenbrander and coworkers presented a larger picture of all the organisms that generally colonize earlier or later, with results that indicated trends, that is, approximate orders of succession of oral/dental bacteria rather than definite lines of succession; their work is far from the only instance in which human microbiome compositions have shown such great inter-host variations. The oral microbiome is no exception to such variation, but there exists a core community of major genera, and our methods have captured members of these major genera.

However, the factors already mentioned as having possibly detracted from organismal diversity in the in vitro cultures can be mitigated in future experiments by periodic re-inoculation of the cultures, longer incubation times, and/or variable nutrient sources and concentrations. These changes may help meet the nutrient and signaling requirements of more fastidious bacteria, as well as increase the density of cells from certain OTUs beyond their threshold values, such that proliferation becomes possible. Several of the culturing conditions seemed appropriate for a proof of concept, which this set of experiments was intended to be, including the surface hydrophobicity of the scaffold, sampling with consideration of the growth phase of the bacteria, and bacterial attachment. The results seemed to indicate a promising protocol for establishing an in vitro plaque community. An aspect that deserves special, detailed consideration is the formulation of the culturing medium, SHI, based on the work of Tian and coworkers. As we used this medium in the preliminary experiments, it provided adequate nourishment, particularly in terms of pH and ionic strength. A potential disadvantage of SHI is that it is considered an undefined culture medium, because the major carbon source in SHI is porcine stomach mucin. This glycoprotein is supplied in a partially purified form, and because the glycosyl modifications on glycoproteins can vary greatly depending on conditions in the source organism, the composition of this protein cannot be guaranteed to be entirely biochemically identical across batches. Interestingly, the undefined nature of this medium has not yet been reported to be a major obstacle. On the contrary, research has shown some evidence that this medium may outperform more defined media. A study that compared the effects of two media, DMM vs. BMM, on dental plaque microcosms grown in an artificial mouth system showed that plaque growth was slower in the chemically defined DMM, which contained higher concentrations of choline, citrate, uric acid, haemin, pyridoxine, biotin, and cyanocobalamin but lower concentrations of inositol, menadione, niacin, pantothenic acid, thiamine, and riboflavin. Furthermore, enzymatic activity in DMM was lower or in some cases undetectable. The results of our preliminary experiments indirectly affirm those from the comparative media experiment – we saw fast growth with the SHI medium, which contains major components from BMM as well as supplements such as menadione. However, we did not test the enzymatic activity of the cells in the culture to ensure that it is at least detectable, and we may need to do so. Another potential improvement might be to make the medium more defined for the sake of repeatability in our lab and reproducibility in the community. There has been some evidence that an artificial saliva may substitute for human saliva in the growth of streptococcal species, and the composition of this artificial saliva may be a good starting point for a defined medium that would also be nutritionally sufficient. With regard to the attempt at establishing an internal standard with a known E. coli strain, we found that it was not feasible to seek correlations between read counts and OD600 values or CFU/mL under the conditions of the preliminary experiments. Finding such correlations mathematically would require quantifying and optimizing additional steps in the sequencing process.

The key steps to optimize here would include setting a concentration of E. coli DNA to be spiked into the samples to be sequenced, rather than using cells as spike-ins; understanding the efficiency of DNA extraction and mitigating the fairly common bias of extraction processes to preferentially yield more DNA from Gram-negative bacteria; quantifying and optimizing the efficiency of PCR for the 16S rRNA genes of the samples, which may involve minor primer modifications; quantifying the composition of the library to be sequenced, potentially using genus-specific primers; quantifying how the fixed sequencing depths of HTS platforms affect the read counts and apparent compositions of samples, especially when samples do not have the same biomass; and so on. The quantification and optimization of these steps, including a detailed understanding of how systematic errors mathematically affect the results and how the number of discarded low-quality sequences affects the apparent compositions, may then enable us to find empirical relationships between the number of cells in the cultures and read counts from sequencing. Gaining such a degree of control over the whole process was not feasible at the time but would be a worthwhile venture for a future project. If we can establish a facile and rapid approach to quantification and optimization, we may be able to propagate the approach to the development of many such numerical, analytical protocols.

The bioinformatics process used for the preliminary experiment served its intended purposes. With this process, we were able to perform quality control on the reads and cluster reads into OTUs at a reasonable level of sequence identity. More importantly, this procedure did not produce apparent artifacts that affected the processing and interpretation of data. The results obtained from this bioinformatics pipeline met expectations formed from existing research on the human oral microbiome, though an aspect that may merit further consideration is the standardization of sample size by rarefaction. While there is some evidence that rarefaction helps reduce false discovery rates, there is equally reasonable evidence that rarefaction omits valid data and may bias against rare OTUs. For the purposes of these preliminary experiments, we have shown that rarefaction to 20,000 reads does not produce obvious artifacts or detectably reduce features observed in non-rarefied samples. Given that one of the major goals of the bioinformatics analysis in these experiments was to establish a procedure capable of distinguishing between biologically distinct samples without introducing much bias, rarefaction was a defensible part of our approach.

As for PCoA, the observation that rarefaction increases the percentage of variation accounted for is expected given the nature of rarefaction. Rarefaction is a standardization procedure that simultaneously equalizes sample sizes and reduces inter-sample variation, especially for samples with high numbers of rarer OTUs. To understand this point, we need to consider the two foundational concepts of diversity: richness and evenness. In terms of richness, adding a single OTU to a sample increases richness by one, which changes diversity greatly only in samples with low numbers of OTUs. As for evenness, the addition of one OTU to a sample may or may not lead to a dramatic change in diversity, depending on two major factors.
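As a concrete illustration of the rarefaction step discussed above, a minimal sketch of subsampling one sample's OTU counts to a fixed depth of 20,000 reads without replacement (assuming NumPy; function names are ours, and this is not necessarily the implementation our pipeline used):

```python
import numpy as np

def rarefy(counts, depth=20000, seed=0):
    """Subsample an OTU count vector to a fixed read depth without replacement."""
    counts = np.asarray(counts, dtype=int)
    if counts.sum() < depth:
        raise ValueError("sample has fewer reads than the rarefaction depth")
    rng = np.random.default_rng(seed)
    # Expand counts into individual read labels, draw `depth` of them, re-tally.
    reads = np.repeat(np.arange(counts.size), counts)
    drawn = rng.choice(reads, size=depth, replace=False)
    return np.bincount(drawn, minlength=counts.size)
```

Because rare OTUs contribute few read labels to the draw, some of them inevitably drop to zero counts after subsampling, which is the bias against rare OTUs noted above.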

The current system also assumes a direct single-channel EEG electrode connection to the ADC input

The Results section covers the performance comparison of XGBoost with CNN and the deployment system performance evaluation. Finally, we conclude our work and discuss possible future directions in the Conclusions section.

The study of brain activity using electroencephalography (EEG) typically involves extracting information from signals associated with certain activities. In recent years, machine learning techniques have been applied to the classification of mTBI because they enable the extraction of complex and typically nonlinear patterns from EEG data. Much of the surveyed work used classical techniques such as k-Nearest Neighbors. Previous investigations have studied a variety of classification techniques, ranging from classical machine learning such as SVM to deep learning such as Convolutional Neural Networks. These techniques have been shown to perform TBI classification with more than 80% accuracy. However, in most investigations we reviewed that implement machine learning for TBI detection, the primary focus was the study of classification techniques and the performance of classification models rather than portable deployment. A few systems used a small, portable computer for deployment in some form. The Neuroberry platform used a Raspberry Pi 2 device to capture EEG signals, but its focus was on making EEG signals available in the Internet of Things domain. The Acute Ischemic Stroke Identification System utilized an Analog to Digital Converter front end with a Raspberry Pi 3 to capture physical EEG signals. However, this system transferred the captured data to an HPC running MATLAB for signal analysis and processing and did not focus on signal classification.

Zgallai et al. described a Raspberry Pi-based system that used deep learning to perform EEG signal classification. It was designed to identify a subject's intended movement direction from a multichannel EEG signal to control wheelchair movement in a closed-loop robotic system, rather than as a general system for identification, analysis, and monitoring of a physiological condition such as mTBI. Bruno et al. highlighted challenges with existing medical diagnosis techniques and described a classification system from the perspective of real-time TBI diagnosis, but their work was focused on the algorithm to perform TBI diagnosis and not on the implementation of a deployment system. In our previous work, we developed and described a CNN-based model to perform automated sleep stage scoring and mTBI classification. In addition, we did a limited deployment of the CNN model on a Raspberry Pi 4 system. In that work, the focus was on describing the CNN model configuration, evaluating its performance, and showing that deployment to the RPi was feasible, rather than designing a complete, portable classification system. We have reused the previously developed CNN model in the current work to provide a baseline performance comparison with a new XGBoost model developed for this work. Further, the two models enable us to demonstrate the versatility of the current system in operating with multiple types of predictive models. To the best of our knowledge, no standalone, portable system has yet been created using a Raspberry Pi that can capture real-time EEG signals, detect the presence of mTBI, and classify mTBI sleep/wake epoch states.

A previously published dataset, as described in [3], was used to train and evaluate the deployed models. This dataset was collected as part of a study involving 11 adult male mice divided into two groups, mTBI and Sham. A fluid percussion injury (FPI) procedure was used to induce mTBI in 5 subjects, and the remaining 6 mice served as Sham subjects. To capture the EEG signal, three ball-tipped electrodes were placed in the skull of each subject, two frontal and one in the parieto-occipital region.
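To make the epoch handling concrete, here is a minimal sketch of slicing a single-channel recording into fixed-length epochs; the 256 Hz sampling rate and 64 s epoch size match the figures reported below, but the function and variable names are ours, not from the published pipeline:

```python
import numpy as np

def slice_epochs(signal, fs=256, epoch_s=64):
    """Split a 1-D EEG signal into non-overlapping fixed-length epochs.
    Trailing samples that do not fill a complete epoch are dropped."""
    samples_per_epoch = fs * epoch_s          # 16,384 samples per epoch
    n_epochs = len(signal) // samples_per_epoch
    trimmed = signal[: n_epochs * samples_per_epoch]
    return trimmed.reshape(n_epochs, samples_per_epoch)

# Illustrative: ten minutes of synthetic signal yields 9 complete 64 s epochs.
eeg = np.random.randn(256 * 600)
print(slice_epochs(eeg).shape)  # (9, 16384)
```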

In this work, we proposed and demonstrated an RPi-based EEG acquisition, processing, and classification system for early mTBI detection. The system was implemented using single-channel EEG data obtained from mice. It was demonstrated to operate in a portable, real-time, and standalone configuration and to classify real-time EEG epochs into four target classes. As shown in Table 1, the accuracy, precision, and recall results were identical across the RPi and the HPC. This confirmed that the predictive model behavior did not change when the training and deployment systems involved different system architectures, i.e., an x64-based MacOS/Windows HPC for training vs. an ARM-based RPi for deployment and prediction. Hence, it is possible to train a predictive model on a more powerful computer and deploy it to an embedded device such as the RPi, which has limited memory and processing resources. This is especially applicable to multilayered neural networks like CNNs, which typically have long training times even on an HPC and whose training times would be prohibitively long on an embedded device like the RPi. We calculated the epoch processing time on the RPi by varying the number of epochs, as shown in Figure 4 and described in Table 2. While it was expected that the processing time would increase with the number of processed epochs, the key inference was that the processing time was considerably smaller than the time required to collect the EEG epochs. At a 256 Hz sampling rate and a 64 s epoch size, the processing time ranged from 0.01% to 0.03% of the epoch collection time. Hence, we concluded that the system had ample time to process previously captured EEG epochs while new epochs were being captured at practical EEG signal sampling rates. We employed two different approaches for the supervised learning models used in this system: the CNN model developed in our previous work and an XGBoost predictive model created in the current work. We compared classification metrics and performance of the XGBoost and CNN models on the deployment system as well as on an HPC.
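The timing headroom can be verified with simple arithmetic; a short worked example using the reported figures (variable names are illustrative):

```python
fs = 256                    # Hz, sampling rate
epoch_s = 64                # s, epoch length
collection_time = epoch_s   # an epoch takes its own duration to collect: 64 s

# Reported processing time as a fraction of the collection time.
for frac in (0.0001, 0.0003):  # 0.01% and 0.03%
    processing_ms = frac * collection_time * 1000
    print(f"{frac:.2%} of 64 s = {processing_ms:.1f} ms per epoch")
# 0.01% of 64 s = 6.4 ms; 0.03% of 64 s = 19.2 ms,
# leaving the processor idle for nearly the entire 64 s collection window.
```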

We observed that the XGBoost model exhibited better performance in terms of accuracy and inference time compared to the CNN-based predictive model. In the case of XGBoost, the variation in inference time remained roughly within 2 µs between the HPC and the RPi. A low inference time was critical for the real-time operation of the classification system. One possible reason for the better accuracy of XGBoost compared to CNN was that the XGBoost classification model was built on hand-crafted features, which enabled it to learn patterns differentiating the four target classes better than the CNN model, which extracted differentiating features automatically. These results, however, were data-dependent, so they should be validated on different datasets to verify the generality of the model. We found that, overall, XGBoost was better suited for deployment on the RPi because of its faster inference time and better performance than CNN. By using two different predictive models for classification, we demonstrated the flexibility of the system to deploy improved classification models in the future.

In this system, we used a DAC to generate EEG signal waveforms from European Data Format (EDF) files. This provided a reliable way to generate an EEG signal waveform without requiring an actual subject from which to capture the EEG signal. We verified that the EEG waveform generated using the DAC on the RPi was consistent with the EEG data stored in the EDF file. The verification was done by calculating the MSE between the stored and generated signals, which was found to be 0.26, a small value indicating that the generated signal represented the stored signal accurately. Synthesizing EEG signals that replicate the complex and typically nonlinear signal patterns is challenging, and the ability to generate EEG signals from an actual recording data file using a DAC simplifies the setup required to test the hardware and software chain of an EEG classification deployment system. It enables the use of the many available open-access EEG data files to train classification models and test the deployment system. For future use, the signal generation capability of this system can be simplified for ease of use and expanded to work with a variety of EEG data file types. This can help accelerate future mTBI research on portable classification systems, which is often constrained by the lack of readily available live EEG signals for testing a hardware classification system. In addition to early mTBI detection, the capability of the system to perform live classification on input EEG signals can be extended to cover mTBI-related health and sleep monitoring applications in the future. Typically, after the initial diagnosis, TBI patients undergo EEG sleep monitoring in a hospital setting. A portable EEG sleep monitoring system, such as the one described in this work, can enable a subject to self-monitor in home settings, greatly enhancing accuracy, efficiency, and efficacy. The classification system developed in the current work can also replace the labor-intensive manual sleep-stage scoring of EEG signals by human experts with an online, automated system capable of fast sleep staging. Further, our technical approaches can be extended to several other EEG applications, including detection of the onset of epileptic seizures, strokes, and other neurological conditions.
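As an illustration of the signal-integrity check, a minimal sketch computing the MSE between the stored and regenerated waveforms, together with the normalized cross-correlation suggested below as a complement when phase shifts are possible (assuming equal-length sample arrays; this is not our exact verification script):

```python
import numpy as np

def mse(stored, generated):
    """Mean squared error between the stored EDF samples and the DAC output."""
    stored, generated = np.asarray(stored, float), np.asarray(generated, float)
    return float(np.mean((stored - generated) ** 2))

def peak_normalized_xcorr(stored, generated):
    """Peak of the normalized cross-correlation: values near 1 indicate the
    generated signal matches the stored one up to a pure time shift,
    which plain MSE would penalize."""
    a = np.asarray(stored, float)
    b = np.asarray(generated, float)
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return float(np.max(np.correlate(a, b, mode="full")))
```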

In this work, we used a relatively simple hardware system to capture and digitize EEG signals, which could be improved. Because we generated EEG signals from a data file containing clean EEG data, this hardware did not include amplification and filtering stages. A practical system designed for field use would require additional hardware and software capabilities to capture and process EEG signals in real time. We also used a relatively simple metric for comparing the generated and stored EEG signals. While we used only MSE as a metric for this system, in cases where components in the signal path could cause phase changes in the signal, MSE should be coupled with another metric, such as cross-correlation, to verify signal integrity. In terms of hardware, such a system would require amplification, preprocessing, and filtering stages. In software, decimation, normalization, Independent Component Analysis, physiological artifact removal, and filtering stages can be implemented. Further, we used an 8-bit ADC for this proof-of-concept system, but ADCs in devices designed for practical use typically range from 16-bit to 24-bit resolution. For example, the OpenBCI Cyton Biosensing system for sampling EEG and other physiological signals uses a 24-bit ADC. We note that higher-resolution ADCs also involve relatively higher cost and have lower sampling rates as the number of resolution bits increases. In addition, the system in this work was designed for single-channel EEG generation and capture, which limits its use for multichannel EEG applications. It does not directly provide connectivity to wireless EEG headsets. However, several “hardware attached on top” (HAT) devices are available for the RPi, for example the brain HAT, that make it possible to connect wireless headsets seamlessly, and we anticipate that the system in this work would function as intended with actual streaming EEG data, apart from the particulars of EEG headset interfacing.

Horticultural crops have high economic value and enrich our lives through their aesthetic and nutritional qualities. Many horticultural species originate from tropical regions and are sensitive to cold at every stage of their lifecycle. Cold stress leads to lower productivity and post-harvest losses in these species, with poor economic and environmental outcomes. A better understanding of the protective mechanisms mediated by hormonal and other signaling pathways may offer solutions to reduce cold-stress-induced losses. The papers included in this collection illustrate this concept, examining natural cold-tolerance mechanisms and practical ways for growers to alleviate chilling stress and reduce crop losses. The studies were remarkably diverse in terms of the species studied, the plant organs examined, and the approaches used. The papers encompassed basic science, aimed at identifying key genes and their roles in cold signal transduction and protective pathways in fruit and photosynthetic tissues; reverse genetics, for proof of concept on the hypothesized role of a cold-tolerance transcription factor cloned from an understudied species; and emerging technologies, using exogenous hormones and signaling compounds to mitigate the harmful effects of chilling. These studies are described below.

C-repeat binding factor (CBF) proteins constitute a transcription factor subfamily known to play a key role in plant responses to different types of abiotic stress, including cold, heat, salinity, and dehydration, and they have therefore been extensively studied.
Overexpression of CBFs has been used to develop genetically modified plants with enhanced stress tolerance and to investigate the molecular mechanisms underlying plant stress responses. Using this approach, Yang et al. found that overexpression of three newly identified longan CBF genes enhanced cold tolerance in Arabidopsis by increasing the content of the osmoprotectant proline, reducing the accumulation of reactive oxygen species, and stimulating the expression of cold-responsive genes.