
An understanding of the depth to the groundwater table is also needed

As is the case with any model, and with soil survey information in particular, ground-truthing at the field scale is necessary to verify results. We acknowledge limitations to our model. It does not consider proximity to a surface water source, which is an issue especially in areas that are irrigated solely from groundwater wells and are not connected to conveyance systems that supply surface water. The SAGBI also does not consider characteristics of the vadose zone or depth to groundwater. In arid regions, deep vadose zones may contain contaminants such as salts or agricultural pollutants that have accumulated over years of irrigation and incomplete leaching. These deep accumulations of contaminants could be flushed into the water table when excess water is applied during groundwater banking events. Furthermore, deep sediment likely contains hydraulically restrictive horizons that have not been documented, creating uncertainty as to where the water travels. Given these issues, SAGBI may be most useful when used in concert with water infrastructure models and hydrogeologic models — which generally do not incorporate soil survey information in a comprehensive way — to develop a fuller assessment of the processes and limitations involved in a potential groundwater banking effort.

Selenium received recognition as an environmental contaminant in the 1980s as a result of the unprecedented events at the Kesterson Reservoir in California, a national wildlife refuge at the time. Large amounts of this trace element had been mobilized through irrigation of selenium-rich soils in the western San Joaquin Valley, transported along with agricultural runoff, and accumulated at the Reservoir.

Toxic selenium concentrations brought about death and deformities for as much as 64% of the wild aquatic birds hatched at the reservoir, including both local and migratory species. Within a few years, the habitat of a variety of fish and waterfowl was classified as a toxic waste site. Today, the Reservoir’s ponds are drained and covered beneath a layer of soil fill, yet the mechanisms of selenium release now known as “the Kesterson effect” are still a threat in California and around the world. The environmental and management conditions creating irrigation-induced selenium contamination have been characterized in Theresa Presser’s seminal work. In brief, problems arise when seleniferous soils, such as those formed from Cretaceous marine sedimentary deposits along the western side of the San Joaquin basin, are subjected to irrigated agriculture. Salts, including selenium, naturally present in such soils are mobilized through irrigation, and high evaporation rates concentrate them in the root zone. To avoid negative effects on plant growth, subsurface drainage systems are used to export excess salts from the soil. This is particularly necessary in places where deep percolation is inhibited by a shallow impermeable layer. Such subsurface runoff routinely contains selenium in concentrations that exceed the US Environmental Protection Agency designation of toxic waste and thus poses an acute threat to the aquatic ecosystems that receive it. The irrigation runoff feeding into the evaporation ponds of the Kesterson Reservoir averaged 300 µg Se/L. The discovery of widespread deformities among waterfowl hatched near these ponds in 1983 led to a shift in the perception of selenium. While research had thus far focused on farm-scale problems related to crop accumulation and toxicity to livestock, it became clear that excessive selenium concentrations in agricultural runoff were a watershed-scale resource protection issue that would greatly complicate irrigation management throughout the western United States. As a result, California has been a hot spot for global research and management of environmental selenium contamination.

As selenium load management in the San Joaquin basin has made significant progress, new major sites of concern, such as the San Francisco Bay-Delta and the Salton Sea, have emerged in California. Current regulatory standards for selenium as an aquatic contaminant are insufficient to protect sensitive ecosystems because they do not account for amplified exposure through bio-accumulation. There are many other pathways of anthropogenic selenium contamination – the San Francisco Bay-Delta, for example, receives half of its input from refineries. However, the diffuse agricultural sources are particularly hard to control, are the principal source of selenium in western US surface waters, and have shaped California’s history like no other selenium source. This paper analyzes what can be learned from the last three decades of seleniferous drainage management and regulatory approaches developed in California. In particular, I seek to answer two key questions: 1) What were the greatest achievements and shortfalls of seleniferous drainage management in California? 2) To what extent may the current development of site-specific selenium water quality criteria for the San Francisco Bay and Delta serve as a model for future regulation?

Selenium is a naturally occurring trace element heterogeneously distributed across terrestrial and marine environments. On land, seleniferous soils and those marked by selenium deficiency sometimes occur as close as 20 km from one another. Selenium contamination of natural ecosystems is linked to an array of human activities including irrigated agriculture, mining and smelting of metal ores, as well as refining and combusting of fossil fuels. The bio-spheric enrichment factor, which is computed as the ratio of anthropogenic to estimated “natural” emissions of a substance, was found to be 17 for selenium, highlighting the dominance of the anthropogenic component in the modern selenium cycle. Anthropogenic fluxes are expected to keep increasing in the foreseeable future as energy and resource demands increase. Selenium bio-accumulates, with tissue concentrations in animals and plants typically 1-3 orders of magnitude above those found in water.

Consequently, the predominant selenium uptake pathway for animals is through the consumption of food rather than water. Bio-accumulation and bio-magnification are particularly intense in aquatic ecosystems, and selenium contamination of such habitats is a global concern. In the western United States alone, nearly 400,000 km² of land are susceptible to irrigation-induced contamination by the same mechanisms that led to the demise of the Kesterson Reservoir. Other nations where irrigation-induced selenium contamination has been observed include Canada, Egypt, Israel, and Mexico. The environmental impacts of selenium depend on the element’s chemical speciation. The element’s primary dissolved forms, selenate and selenite, are mobile and bio-available. They can be sequestered in soils or sediments upon microbial reduction to solid elemental Se or metal selenides, or volatilized to the atmosphere upon reduction to gaseous methylated Se. Both selenate and selenite are toxic at elevated concentrations; selenite, however, was found to be more toxic than selenate in direct exposure studies involving invertebrates and fish, and also to bio-accumulate more readily at the base of aquatic food chains. Additionally, once any dissolved form of selenium is assimilated by an organism, it is converted into highly bio-available organo-selenide species. Exposure studies comparing organo-selenides to selenite in the diets of water birds established lower toxicity thresholds for the former. Organo-selenides are released from decaying organisms and organic matter during decomposition and can then persist in solution or be oxidized to selenite, while the conversion back to selenate does not occur at relevant rates in aquatic environments. Thus, recycling of selenium at the base of aquatic food webs through assimilation and decomposition usually leads to a buildup of the more bio-available and toxic forms over time. This buildup of bio-available selenium species may also explain why tissue concentrations in the upper trophic levels of stagnant or low-flowing ecosystems typically exceed those of fast-flowing ecosystems with comparable selenium inputs but shorter residence times. The complex environmental cycling of selenium has been a major obstacle in creating water quality regulations for this element. Regulatory concentration guidelines vary widely between jurisdictions and there are significant opportunities for new regulatory approaches. The Californian office of the EPA is currently working on site-specific water quality criteria for the protection of wildlife in the San Francisco Bay and Delta. These criteria are to be based on a modeling approach developed by USGS scientists, capable of translating tissue limits to dissolved concentration limits. There is hope among aquatic toxicologists that California’s new site-specific approach may become a model for national standards. For all contaminants regulated since 1985, aquatic life criteria under the Clean Water Act have been defined through separate dissolved concentration limits: a longer-term “continuous” limit and a short-term “maximum” limit.
The selenium criteria established in 1987 defined a continuous concentration limit of 5 µg/L as acid-soluble selenium, with maximum concentrations not exceeding 20 µg/L more than once every three years, for freshwater environments, but allowed up to 71 µg/L, with up to one three-year exceedance of 300 µg/L, for saltwater environments. These selenium limits became legally binding for 14 states, including California, after promulgation with the 1992 Water Quality Standards.
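To make the structure of these criteria concrete, the following is a minimal, hypothetical screening sketch. The numeric thresholds follow the text above, while the function, the use of a simple mean for the continuous limit, and the sample data are illustrative assumptions rather than any regulatory tool.

```python
# Hypothetical screening of a selenium monitoring record against the 1987 criteria
# described above. Thresholds come from the text; everything else is illustrative.

FRESHWATER = {"continuous_ug_L": 5.0, "maximum_ug_L": 20.0}
SALTWATER = {"continuous_ug_L": 71.0, "maximum_ug_L": 300.0}

def screen_record(samples_ug_L, criteria, years_of_record):
    """samples_ug_L: acid-soluble Se concentrations (micrograms per liter)."""
    mean_conc = sum(samples_ug_L) / len(samples_ug_L)
    exceedances = sum(1 for c in samples_ug_L if c > criteria["maximum_ug_L"])
    # The maximum limit may be exceeded no more than once every three years.
    allowed = max(1, years_of_record // 3)
    return {
        "continuous_ok": mean_conc <= criteria["continuous_ug_L"],
        "maximum_ok": exceedances <= allowed,
    }

# Example: three years of hypothetical freshwater samples (µg/L).
print(screen_record([3.2, 4.8, 6.1, 21.5], FRESHWATER, years_of_record=3))
```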

A central problem with the current criteria is that they were predominantly based on data drawn from direct exposure laboratory studies and thus failed to take into account the more ecologically relevant toxic effects due to bio-accumulation and trophic transfer. The freshwater criteria were based on field data from a contamination event, while the saltwater criteria were based purely on laboratory studies that did not account for bio-accumulation. The resulting difference of more than one order of magnitude between fresh- and saltwater criteria is not supported by field data. In fact, the saltwater criteria have widely been regarded as underprotective of wildlife, including waterfowl. In addition, the freshwater criteria appear underprotective of particularly sensitive ecosystems and species. To be protective of waterfowl in the wetlands of the Central Valley Region, a 2 µg Se/L monthly mean water quality criterion was deemed necessary by the Regional Water Quality Control Board, and this objective was officially approved for the region by the EPA in 1990. For the wetlands of the Central Valley Region, this criterion overrides the statewide criteria promulgated in 1992 and remains in effect today. However, given the wide range of bio-availability between different selenium species and the complex transfer processes between environmental compartments and trophic levels, regulation based solely on dissolved or acid-soluble concentrations has been characterized as inadequate. In response to such criticism, the EPA proposed in 2004 a new tissue-based criterion for selenium with a 7.91 µg/g fish tissue limit to supersede the previous national water quality guidelines for selenium. This limit is based on the lowest level of effect in juvenile bluegill sunfish under simulated overwintering conditions. Whereas there is little doubt that tissue concentrations are more representative of exposure than dissolved concentrations for individual species, it is unclear whether a single fish tissue limit will be protective across entire food webs including a diversity of fish and waterfowl. The proposed tissue-based criteria have to date remained at the draft stage due to objection by the US Fish and Wildlife Service.

The historic developments that led to the rise of selenium contamination in the San Joaquin Valley can be traced to the passage of the California Water Resources Development Act of 1960. The Act laid the financial foundation for the State Water Plan, providing for the construction of the nation’s largest water distribution system and including infrastructure measures for “the removal of drainage water”. The State Water Projects funded under this plan began delivering water to 4,000 km² in the southern San Joaquin Valley as of 1968. To prevent salinization and manage agricultural runoff, the Bureau of Reclamation constructed collector drains, a main drainage canal, and a regulating reservoir, Kesterson. Originally, the San Luis Drain was planned to deliver drainage out of the San Joaquin Valley all the way to the San Francisco Bay-Delta; however, the northern part of the drain was never completed. Instead, from the time of the San Luis Drain’s completion in 1975 until its temporary closure in 1986, all runoff water channeled through the drain was delivered to the evaporation ponds of the Kesterson Reservoir, which had become part of a newly created national wildlife refuge in 1970.
There, in the early 1980s, high rates of embryo deformity and mortality, as well as large numbers of adult deaths among waterfowl, were identified as being caused by the elevated selenium concentrations in the evaporation ponds. This led to the closure of the Reservoir to all runoff inputs in 1986.

Poor knowledge of winds at the field scale also represents a significant limitation

As field size increases, the length of time required to move a packet of air from one side of the field to the other will increase, decreasing the probability that wind speed and direction will remain relatively constant. Furthermore, as the moisture content increases downwind, the vapor pressure deficit decreases, potentially reducing rates of ET downwind. Another explanation, suggested by the fact that some crops showed a positive correlation between LST and slope, is that rather than advection of plant-transpired moisture downwind over individual fields, there is instead an accumulation of water vapor over the field. This idea will be explored further in Section 4.1.3. Second, we did not find positive correlations between GV fraction and water vapor slope as postulated in Hypothesis E. If green vegetation is transpiring and adding to the water vapor above a field, we would expect higher fractions of GV to contribute more water vapor, and thus increase the size of the gradient. We found no correlation between water vapor slope and the GV fraction, even when results were segmented by field size and GV fraction. We used 50% GV as the cutoff to demarcate sparsely vegetated fields from highly vegetated fields, consistent with previous studies. However, we found that the average fractional GV coverage of fields that showed good alignment between wind direction and water vapor directionality was around 45%. Therefore, future studies may want to consider a lower GV threshold or a segmentation of fields into multiple GV classes. Finally, we did not find an inverse correlation between water vapor slope and LST in support of Hypothesis G. Either no correlation was found, or the highest water vapor slopes were found with higher-temperature crops.

Water vapor patterns were as expected at the field level, in response to wind.

However, water vapor patterns were not as expected in response to the surface properties of field size, GV fraction, and ET rate as expressed by field-scale LST. We had hypothesized that field-level water vapor slopes could be used to infer crop transpiration, but did not find evidence supporting that hypothesis. Rather, our results suggested that water vapor accumulation from transpiration was more dominant than the advection signal at the field level. The rate of ET has been found to remain constant with downwind distance across a field, even if warm, dry air is being advected toward a vegetated field. If plants are transpiring at a constant rate and winds are not strong enough or stable enough in directionality to evenly advect the moisture, the concentration of water vapor above the field would increase relatively evenly throughout the field, leading to a diminished slope. Crops are also more aerodynamically rough than an empty soil field, and the resultant turbulence caused by vegetation creates eddies and atmospheric mixing that may muddle signals of field-level advection discernable above smoother landscapes. The hypothesis of water vapor accumulation is supported by results that found a positive relationship between LST and slope for some crops, a negative relationship between field size and slope, and a weak positive correlation between water vapor intercept and GV fraction in 2013 and 2015. Therefore, the results of this study lead us to a new conceptual understanding that the magnitude of water vapor, as assessed through the intercept of a fitted plane, may be a better indicator of ET than the slope. However, underlying heterogeneity of the landscape and scaling issues, as discussed below, prohibited isolated analyses of intercepts in this study area.

There is error within all water vapor estimates regardless of which retrieval method is used, and the estimates vary significantly from model to model. However, Ben-Dor et al. found that, of six different water vapor retrievals, ACORN estimated water content with acceptable accuracy and, importantly for our study, it was one of only two models that accurately discriminated water vapor from liquid water in plants.

Therefore, the positive correlations found in years 2013 and 2015 between water vapor and vegetation fraction are assumed to be a product of coupling between the landscape and the atmosphere, rather than an artifact of the retrieval. Wind direction and magnitude can change significantly within a small period of time, making estimation of wind within the study scene at the time of the flight particularly difficult. Furthermore, a sparse network of meteorological stations may not accurately capture more local variation in wind between the stations. Thus the IDW wind field we used in this study may not adequately characterize fine spatial or temporal variability in winds at the field scale.

Unlike Ogunjemiyo et al., who studied water vapor over a relatively homogeneous area of transpiring poplar trees, this study evaluated water vapor as it varies across a very diverse agricultural landscape with many different crop species, green vegetation cover, and irrigation regimes. As such, Ogunjemiyo’s conceptual model illustrated an ideal relationship between water vapor and vegetation at the field scale that may not hold in our complex study area. First, interactions between water vapor occurring over two diverse, adjacent fields may alter the vapor deficit and stomatal response of a single crop field and result in water vapor trends that do not follow Ogunjemiyo’s model. The schematic in Fig 15A illustrates one possible interaction in which a transpiring field is upwind of a non-transpiring field. While the transpiring field will act as hypothesized, with the slope and direction of a fitted plane in line with the wind direction, a plane fitted to the fallow field downwind will likely show a slope that is opposite in direction to the wind. The wind carries moist air from the vegetated field onto the fallow field, leading the upwind edge of the fallow field to have higher water vapor concentrations than the edge that is downwind. In the case of the downwind area being another highly transpiring field, the moist, advected air from the upwind field may reduce the transpiration rate of the downwind field at the boundary by decreasing the vapor pressure deficit.

This may lead to an exaggerated water vapor slope over the downwind field. The accumulation of water vapor from one field can therefore lead to shifts in vegetation response that are difficult to account for. Fig 15C illustrates the scenario where a dry, fallow field is upwind of a transpiring field. If the area upwind of a vegetated field is fallow, we would expect the saturation deficit of the dry advecting air to increase the evaporation rate at the boundary unless the vapor pressure deficit is high enough to initiate stomatal closure. A higher ET rate at the upwind side of the field will lessen the expected, observable trend of advection across the field. The transpiration response will be species-dependent. Second, not all fields will interact with the atmosphere in the same ways, due to differences in aerodynamic roughness, which is affected by row spacing, plant height, plant size, orientation, and composition. The aerodynamic roughness of a field will influence how effectively and at what height the transpired water vapor will mix with the atmosphere. Agricultural fields may differ strongly in aerodynamic roughness, and these differences will lead to deviations from the hypothesized water vapor slope and intercept patterns as they vary with crop type. Therefore, we would not expect all fields to show the same relationships between water vapor, wind, and estimated transpiration rates. We would expect aerodynamically rougher surfaces, such as orchards, to generate greater turbulence, generate mixing higher up in the atmosphere, and show greater coupling with the wind than row crops. Depending on the wind speed, orchards may show higher or lower slopes than row crops if their vapor patterns are more tied to wind patterns. In contrast, shorter and smoother row crops such as alfalfa will be less coupled to the atmosphere. Because crops such as orchards are more closely coupled to the atmosphere, they may be more appropriate to study with water vapor imagery. Therefore, isolating the effects of neighboring fields would be beneficial for field-level water vapor analyses, but this was not logistically possible in our study. The study area is a high-producing agricultural area where most fields are bordered by multiple neighbors of varying GV cover, crop type, size, physical characteristics that influence roughness, and ET rate. Further, without LiDAR data from which physical characteristics such as orientation, height and structure could be obtained, it was not possible to model field-scale differences in aerodynamic roughness in this study. This work has aimed to enhance understanding of the impact of GV fraction, field size, crop type and water use on patterns of water vapor.

Positive findings include the presence of significant vapor gradients over most fields, and regional patterns in water vapor that are consistent with advection. High water use crops also showed a disproportionally higher level of agreement between interpolated wind direction and the direction of water vapor gradients. Field size impacted water vapor slope, although slopes were higher in smaller fields than larger fields, in contrast to expectations. We suspect improved knowledge of winds at the field scale would improve our ability to interpret water vapor gradients. For example, given that a majority of the fields showed statistically significant water vapor slopes, an alternative hypothesis may be that those gradients better represent winds at the field scale than interpolated winds from a sparse network of stations. Finally, we found the intercept of the best-fit surface for water vapor over a field to be more significant than the slope, suggesting that water vapor is accumulating over fields, rather than advecting.

Water vapor imagery shows patterns of vapor that are highly variable through space and time and that hold valuable information about land-atmosphere interactions. We suggest there is considerable potential for this imagery and explored some of this potential here. To further scientific understanding of water vapor imagery analysis, further studies are necessary to refine observation and quantification of land-surface interactions, as the signal is highly complex and is affected by many factors. While water vapor imagery could potentially be used to parameterize models of land-surface interactions, additional studies in a diversity of landscapes are necessary to define the conditions and scales at which this imagery can be used. Almost 4,000 AVIRIS images have been collected since 2006 and are available for public download. With such a large repository of data collected at different time points, under varied atmospheric conditions, and over diverse surfaces, future research could tease out the conditions under which interactions can best be observed in a more comprehensive way than this study of three snapshots in time could. Further, with future remote sensing missions such as SBG, which will collect hyperspectral imagery at moderate spatial resolutions and enable column water vapor estimates globally, these data streams can be exploited for comparisons of water vapor over large agricultural areas worldwide. These large archives of water vapor observations can also act as a complement to models that estimate water vapor and plant water use by providing validation data. In addition to increasing analysis of similarly complex scenes, future studies would benefit from additional data sources that could isolate the signal of water vapor and validate its link to the surface. Such controls include on-site continuous wind measurements, flux tower measurements of ET, and/or more spatially comprehensive wind data. On-site wind data and ET measurements at a high temporal resolution would both validate trends seen in the water vapor imagery and assist in pinpointing the appropriate temporal scale and time of day for which this analysis is best suited. A mesoscale weather model, such as the Weather Research and Forecasting Model, might also provide a more accurate fine-scale representation of wind fields than the simple IDW of weather stations used here.
A finer network of weather stations, and/or controlled experiments with meteorological equipment deployed in advance of a flight at specific fields, would also be of benefit. Although more work is needed to refine understanding of the water vapor signal in a complex agricultural environment, the results suggest that this technique could be of use for crop water analyses in agricultural areas that experience less variation in crop type, wind, and field size than the Central Valley of California.
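To make the per-field analysis described in this section concrete, the sketch below fits a plane to water vapor values over a single field (yielding the slope, gradient direction, and intercept discussed above) and compares the gradient direction with an inverse-distance-weighted wind estimate from nearby stations. It is a minimal illustration only: the coordinate conventions, variable names, and synthetic data are assumptions, not the study's actual processing chain.

```python
import numpy as np

def idw_wind(station_xy, u_obs, v_obs, target_xy, power=2.0):
    """Inverse-distance-weighted u/v wind components at target_xy.

    Interpolating u and v separately avoids averaging wind directions
    across the 0/360 degree discontinuity.
    """
    d = np.hypot(*(np.asarray(station_xy, float) - np.asarray(target_xy, float)).T)
    if np.any(d == 0):                       # target coincides with a station
        i = int(np.argmin(d))
        return u_obs[i], v_obs[i]
    w = 1.0 / d ** power
    return np.sum(w * u_obs) / np.sum(w), np.sum(w * v_obs) / np.sum(w)

def fit_vapor_plane(x, y, wv):
    """Least-squares fit wv ~ a*x + b*y + c over one field.

    Returns the gradient magnitude ("slope"), the compass bearing of
    increasing vapor, and the intercept (a proxy for overall vapor magnitude).
    """
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, wv, rcond=None)
    return np.hypot(a, b), np.degrees(np.arctan2(a, b)) % 360.0, c

# Synthetic example: vapor increasing toward the east over one field.
rng = np.random.default_rng(0)
x = rng.uniform(-200, 200, 400)              # meters east of the field centroid
y = rng.uniform(-200, 200, 400)              # meters north of the field centroid
wv = 20.0 + 0.002 * x + rng.normal(0, 0.05, x.size)
slope, vapor_bearing, intercept = fit_vapor_plane(x, y, wv)

# Hypothetical stations (coordinates in km) with winds blowing roughly eastward.
u, v = idw_wind([(0, 0), (10, 2), (4, 9)], np.array([3.0, 2.5, 3.5]),
                np.array([0.2, -0.3, 0.4]), target_xy=(5, 5))
wind_bearing = np.degrees(np.arctan2(u, v)) % 360.0   # bearing the wind blows toward
print(slope, intercept, vapor_bearing, wind_bearing)
```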

Hypatia presents users with an easy-to-use interface that it makes available via any web browser

In the third column, we show results for training during sunny days followed by prediction during rainy periods. January 2nd, 3rd, and 4th were days without precipitation, followed by three days with 1.29, 1.06, and 1.0 inches of precipitation, respectively. The results show that the model trained only on three rainy days had errors slightly higher than when tested on sunny days, while the model trained on sunny days behaved similarly to the models we discussed before, even when tested on rainy days. Part of our future work is to expand test cases to more variable weather conditions. However, these results indicate that the prediction errors are robust to what are essentially “shocks” to the temperature time series in the explanatory weather data and the predicted variables. Because the CPUs were in sealed containers, the effects of precipitation on the CPU series are less pronounced. Still, the errors are largely unaffected by precipitation.

Figure 4.7 illustrates the errors when predicting DHT-1 temperature with different subsets of explanatory variables. We observe that if we rely only on the nearby weather station, the error is much higher than for a subset that includes at least one of the CPU temperatures. Farmers today often use only a weather station temperature reading when implementing manual frost prevention practices. Often, though, the weather station they choose to use for the outdoor temperature is even farther away from the target growing block than the station we use in this study. Notice, also, that when the CPU that is directly connected to the DHT is not included, the errors are higher than when it is included. Thus, as one might expect, proximity plays a role in determining the error. However, using only the attached CPU generates a higher MAE than all CPUs and the weather station. Indeed, the best performing model is the one that uses all four CPU temperatures and WU-T measurements as explanatory variables, yielding an MAE < 0.5 °F across all time frames.
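As a point of reference, the following is a minimal sketch of the kind of multivariate linear regression just described: predicting a DHT reading from four CPU temperatures plus a weather-station temperature and reporting MAE. The column names and synthetic data are illustrative assumptions, not the deployed system's code.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

# Hypothetical column names; the actual deployment's sensor IDs may differ.
FEATURES = ["cpu1_temp", "cpu2_temp", "cpu3_temp", "cpu4_temp", "wu_temp"]
TARGET = "dht1_temp"

def fit_virtual_sensor(train: pd.DataFrame, test: pd.DataFrame):
    """Fit a linear model predicting the DHT temperature from CPU and
    weather-station temperatures, and report test-set MAE (degrees F)."""
    model = LinearRegression().fit(train[FEATURES], train[TARGET])
    pred = model.predict(test[FEATURES])
    return model, mean_absolute_error(test[TARGET], pred)

# Illustrative use with synthetic data standing in for logged time series.
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame(rng.normal(60, 10, (n, len(FEATURES))), columns=FEATURES)
df[TARGET] = df[FEATURES].mean(axis=1) + rng.normal(0, 0.4, n)
model, mae = fit_virtual_sensor(df.iloc[:400], df.iloc[400:])
print(f"MAE: {mae:.2f} F")
```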

Thus, using the nearest CPU improves accuracy, but using only the nearest CPU does not yield the most accurate prediction. Finally, while the weather station data does not generate an accurate prediction by itself, including it does improve the accuracy over leaving it out. In summary, our methodology is capable of automatically synthesizing a “virtual” temperature sensor from a set of CPU measurements and externally available weather data. By including all of the available temperature time series, it automatically “tunes” itself to generate the most accurate predictions even when one of the explanatory variables is, by itself, a poor predictor. These predictions are durable, with errors often at the threshold of measurement error, on average, and relatively insensitive to seasonal and meteorological effects, as well as typical CPU loads in the frost-prevention setting where we have deployed it as part of an IoT system. There are no studies of which we are aware that use the devices themselves as thermometers. To enable this, we estimate the outdoor temperature from CPU temperature using linear regression (Hastie et al.) of temperature time series. Others have shown that doing so is useful for other applications and analyses (Guestrin et al., Xie et al., Lane et al., Yao et al.). Our work is complementary to these and is unique in that it combines SSA with regression to improve prediction accuracy. As in other work, we leverage edge computing to facilitate low latency response and actuation for IoT systems (Alturki et al., Feng et al.).

With the prior chapters, we have contributed new methods for clustering correlated, multidimensional data and for synthesizing virtual sensors using the data produced from combinations of other sensors. We next unify these advances into a scalable, open-source, end-to-end system called Hypatia. We design Hypatia to permit multiple analytics algorithms to be “plugged in” and to simplify the implementation and deployment of a wide range of data science applications. Specifically, Hypatia is a distributed system that automatically deploys data analytics jobs across different cloud-like systems. Our goal with Hypatia is to provide low latency, reliable, and actionable analytics, machine learning model selection, error analysis, data visualization, and scheduling in a unified, scalable system. To enable this, Hypatia places this functionality “near” the sensing devices that generate data, at the edge of the network. It then automates the process of distributing the application execution across different computational tiers: “edge clouds” and public/private cloud systems.

Hypatia does so to reduce the response latency of applications so that data-driven decisions can be made by people and devices at the edge more quickly. Such edge decision making is important for a wide range of application domains, including agriculture, smart cities, and home automation, where decisions, actuation, and control are all local and make use of information from the surrounding environment. Hypatia automatically deploys and scales tasks on-demand both locally and remotely, if/when there are insufficient resources at the edge. Users can choose the algorithms they need for data analysis and prediction and select the dataset they are interested in. Hypatia iterates through the list of available parameters, training and scoring multiple models for each parameter set. It then selects those with the best score. Such model selection can be used to provide data-driven decision support for users as well as to actuate and control digital and physical systems. In this chapter, we focus on Hypatia support for clustering and regression. The Hypatia scheduler automates distributed deployment across edge and cloud systems to minimize time to completion. It uses the computational and communication requirements of model training, testing, and inference to make placement decisions for independent jobs that comprise a workload. For data-intensive workloads, Hypatia prioritizes the use of the edge cloud. For compute-intensive jobs, Hypatia prioritizes public/private cloud use.

Hypatia is an online platform for distributed cloud services that implement common data analytics utilities. It takes advantage of cloud-based, large-scale distributed computation, provides automatic scaling, and implements data management and user interfaces in support of visualization and browser-based user interaction. Hypatia currently supports two key building blocks for popular statistical analysis and machine learning applications: clustering and linear regression. For clustering, Hypatia implements different variants of k-means clustering. The variants include different distance computations, input data scaling, and the six combinations of covariance matrices. Hypatia runs the configuration for successive values of K ranging from 1 to a user-assigned large number, max_k. For each clustering, Hypatia computes a pair of scores based on both the Bayesian Information Criterion (Schwarz) and the Akaike Information Criterion (Akaike). Hypatia allows the user to change the number of independent, randomly seeded runs to account for statistical variation. Finally, it provides ways for the user to graph and visualize both two-dimensional “slices” of all clusterings as well as the relative BIC and AIC scores.

It uses these scores to provide decision support for the user – e.g., presenting the user with the “best” clustering across all variants. For linear regression, Hypatia implements different approaches for analyzing correlated, multidimensional data (Golubovic et al.). Since we focus on synthesizing new sensors, we are looking for the most important inputs from other sensors that can be used to accurately estimate a synthesized measurement. Hypatia allows users to decide on the number of input variables and which ones to use. They can also specify the start time of the test, the duration of the training and testing periods, and the scoring metric to use. Users also choose whether or not to smooth the input data using different techniques. Finally, to predict outdoor temperature, users can select nearby single-board computers and/or weather stations. Once the user makes these choices or accepts/modifies the defaults, Hypatia creates an experiment with as many tasks as there are parameter choices. Each task produces a linear regression model with coefficients for each input variable and a score that can be used for model selection. As is done for clustering, Hypatia scores the various parameterizations using the scoring metric to provide decision support to users. The user can then use the visualization tools to verify the similarity between input variables and estimated sensor measurements. Hypatia is unique in that it is extensible – different data analytics algorithms can be “plugged in” easily, and automatically deployed with and compared to others. Users can also extend the platform with both scoring and visualization tools. Visualization is particularly important when some of the sensors are faulty and unreliable, or some of the smoothing or filtering techniques do not produce the desired outcome. Figure 5.1 shows such an example, where visualization is used to show growers how soil moisture responds to precipitation and temperature on the east and west sides of a tree in an almond grove at depths of 1 foot and 2 feet. Being able to understand how significant each parameter is to soil moisture provides decision support that can be used to guide irrigation and harvest. To implement Hypatia, we have developed a user-facing web service and a distributed, cloud-enabled backend. Users upload their datasets to the web service front end as files in a common, simple format: comma-separated values (CSV). The user interface also enables users to modify the various algorithms and their parameters, or accept the defaults.
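To illustrate how a set of user choices like these might expand into individual tasks (one per parameter combination, which the next paragraph refers to as a "job"), here is a minimal sketch; the parameter names are illustrative assumptions, not Hypatia's actual schema.

```python
from itertools import product

# Illustrative parameter choices a user might submit; names are assumptions.
params = {
    "inputs": [["cpu1", "cpu2"], ["cpu1", "cpu2", "cpu3", "cpu4", "wu_temp"]],
    "smoothing": [None, "moving_average", "ssa"],
    "train_days": [3, 7],
    "score_metric": ["mae"],
}

def expand_to_tasks(params):
    """One task per combination of parameter values (a 'job' in the text)."""
    keys = list(params)
    return [dict(zip(keys, combo)) for combo in product(*params.values())]

tasks = expand_to_tasks(params)
print(len(tasks), tasks[0])   # 2 * 3 * 2 * 1 = 12 tasks
```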

Hypatia considers each parameterization that the user chooses as a “job”. Each job consists of multiple tasks that Hypatia deploys. Users can also use the service to check the status of a job or to view the report and results for a job. The status page provides an overview of all the tasks for a job, showing a progress bar for the percentage of tasks completed and a table showing task parameters and outcomes. Hypatia uses a report page to provide its recommendation for both analysis building blocks, clustering and regression. For clustering, the recommendation consists of the number of clusters and the k-means variant that produces the best BIC score. This page also shows the cluster assignments, spatial plots using longitude and latitude, and BIC and AIC score plots. Hypatia also provides cluster labels in CSV files that the user can download. For regression, the report page consists of a map of error analysis for each model grouped by their parameters. Users can quickly navigate to the model with the smallest error. The software architecture of Hypatia is shown in Figure 5.2. We implement Hypatia using Python v3.6 and integrate a number of open-source software packages and cloud services. At the edge, Hypatia uses a small private cloud that runs Eucalyptus software v4.4 (Nurmi et al., Aristotle). The public cloud is Amazon Web Services Elastic Compute Cloud. Hypatia integrates virtual servers from these two cloud systems with different capabilities, which we describe in our empirical methodology. Hypatia is deployed on an edge cloud and a private/public cloud if available. We assume that the edge cloud has limited resources and is located near where data is produced by sensors. The public cloud provides vast resources and is located across a long-haul network with varying performance and perhaps intermittent connectivity. We use N_EC to denote the number of machines available in the edge cloud and N_PC to denote the number of machines available in the public cloud, where N_EC << N_PC. Users submit multiple jobs to the edge system. Each job describes the datasets to be used for training, testing, and inference or analysis. In some jobs we can assume that the entire dataset is needed, while in others we can assume that data can be split and tasks within the job can operate on different parts of the dataset in parallel. Each job has n tasks. In the numerous jobs that we have evaluated over the course of this dissertation, we have observed that, for the applications we have studied, n can range from tens of tasks to millions of tasks. We consider tasks from the same job as having the same “type”. To estimate the time each task will take to complete the data transfer and computation, we compute an average per job i, t_i, across past tasks of the same type. Each task fetches its dataset upon invocation.
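The scheduling logic just outlined (per-type running averages of task times, edge-first placement for data-intensive jobs, cloud placement otherwise or when the edge is full) can be summarized in a short sketch; the threshold, names, and data structures here are illustrative assumptions rather than Hypatia's actual scheduler.

```python
from statistics import mean

# Completion times (seconds) of past tasks, keyed by job type ("type" in the text).
history = {"regression": [12.0, 11.5, 13.2], "clustering": [95.0, 102.0]}

def estimate_task_time(job_type):
    """t_i: the average over past tasks of the same type."""
    past = history.get(job_type)
    return mean(past) if past else None

def place_job(dataset_mb, edge_slots_free, data_intensive_mb=500):
    """Edge-first for data-intensive jobs; public/private cloud for
    compute-intensive jobs or when edge resources are exhausted."""
    if dataset_mb >= data_intensive_mb and edge_slots_free > 0:
        return "edge_cloud"
    return "public_cloud"

print(estimate_task_time("regression"))
print(place_job(dataset_mb=40, edge_slots_free=2))    # small dataset -> public cloud
print(place_job(dataset_mb=800, edge_slots_free=2))   # data-intensive -> edge cloud
```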

The Centaurus implementation consists of a user-facing web service and distributed cloud-enabled backend

Precision farming integrates cyber-infrastructure and computational data analysis to overcome the challenges associated with extracting useful information and actionable insights from the vast amount of information that surrounds the crop life cycle. Precision ag attempts to help growers answer key questions about irrigation and drainage, plant disease, insect and pest control, fertilization, crop rotation, soil health, weather protection, and crop production. Existing precision ag solutions include sensor-software systems for irrigation, mapping, and image capture/processing, intelligent implements, and, more recently, public cloud software-as-a-service solutions that provide visualization and analysis of farm data over time (OnFarm, Climate Corporation, MyAgCentral, gThrive, WatrHub, PowWow). Current precision ag technologies fall short in three key ways that have severely limited their impact and widespread use: first, they fail to provide growers with control over the privacy of their data, and second, they lock growers into proprietary, closed, inflexible, and potentially costly technologies and methodologies. In terms of data privacy, extant solutions require that farmers relinquish control over and ownership of their most valuable asset: their data. Farm data reveals private and personal information about grower practices, crop inputs, farm implement use, purchasing and sales details, water use, disease problems, etc., that define a grower’s business and competitiveness. Revealing such information to vendors in exchange for the ability to visualize it puts farmers at significant risk (Federation, Russo, Vogt). The second limitation of extant precision ag solutions is “lock-in”. Lock-in is a well-known business strategy in which vendors seek to create barriers to exit for their customers as a way of ensuring revenue from continued use, new or related products, or add-ons in the future.

In the precision ag sector, this manifests as proprietary, closed, and fragmented solutions that preclude advances in sustainable agriculture science and engineering by anyone other than the companies themselves. Lock-in also manifests as a lack of support for cross-vendor technologies, including observation and sensing devices, farm implements, and data management and analysis applications. Since farmers face many challenges switching vendors once they choose one, the vendor they choose can charge fees for training, customizations, add-ons, and use of their online resources without limit because of the lack of competition. The third limitation is that most precision ag solutions today employ the centralized approach described above. As solutions become increasingly online, the lock-in also requires that farmers upload all of their data to the cloud, giving the vendor full control and access, and leaving growers without recourse when vendors go out of business (Rodrigues). In addition to these risks, such network communication of potentially terabytes of image and sensor data is expensive and time consuming for many because of the poor network connectivity and costly data rates that are typical of rural areas. Finally, many of these technologies impose high premiums and yearly subscriptions (ArcGIS). The goal of our work is to address these limitations and to provide a scalable data analytics platform that facilitates open and scalable precision ag advances. To enable this, we leverage recent advances in the Internet of Things (IoT), cloud computing, and data analytics and extend them to contribute new research that defines a software architecture that tailors each to agricultural settings, applications, and sustainability science. These constituent technologies cannot be used off-the-shelf, however, because they require significant expertise and staffing to set up, manage, and maintain – which are show stoppers for today’s growers. We attempt to overcome these challenges with a comprehensive, end-to-end system for scalable agriculture analytics that is open source and that can run anywhere, precluding lock-in. To enable this, we contribute new advances in scalable analytics, low-cost sensing, easy-to-use data visualization, data-driven decision support, and automatic edge-cloud scheduling, all within a single, unified distributed platform. In the next chapter, we begin by focusing on an important analytics building block and tailoring its use for farm management zone identification using soil electrical conductivity data.

Statistical clustering, the separation of measurements into related groups, is a key requirement for solving many analytics problems. Lloyd’s algorithm (Lloyd), commonly called k-means, is one of the most widely used approaches (Duda et al.). K-means is an unsupervised learning algorithm, requiring no training or labeling, that partitions data into K clusters based on their “distance” from K centers in a multi-dimensional space. Its basic form is simple to implement and has become an indispensable component of pattern recognition, data mining, image processing, information retrieval, and recommendation applications across fields ranging from marketing and advertising to astronomy and agriculture. While conceptually simple, there is a myriad of k-means algorithm variants based on how distances are calculated in the problem space. Some k-means implementations also require “hyperparameters” that control for the amount of statistical variation in clustering solutions. Identifying which algorithm variant and set of implementation parameters to use in a given analytics setting is often challenging and error-prone for novices and experts alike. In this chapter, we present Centaurus as an approach to simplifying the application of k-means through the use of cloud computing. Centaurus is a web-accessible, cloud-hosted service that automatically deploys and executes multiple k-means variants concurrently, producing multiple models. It then scores the models to select the one that best fits the data – a process known as model selection. It also allows for experimentation with different hyperparameters and provides a set of data and diagnostic visualizations so that users can best interpret its results. From a systems perspective, Centaurus defines a pluggable framework into which clustering algorithms and k-means variants can be chosen. When users upload their data, Centaurus executes and automatically scales the execution of concurrently executing k-means variants using public or private cloud resources. To perform model selection, Centaurus employs a scoring component based on information criteria. Centaurus computes a score for each result and provides a recommendation of the best clustering to the user. Users can also employ Centaurus to visualize their data, its clusterings, and scores, and to experiment with different parameterizations of the system.
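A minimal sketch of this kind of concurrent variant sweep with information-criterion scoring is shown below. It uses scikit-learn's GaussianMixture as a convenient stand-in because it exposes several covariance treatments plus BIC/AIC directly; Centaurus's own k-means variants and scoring, described below, differ in detail.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def select_model(X, max_k=10, n_runs=5,
                 covariance_types=("full", "tied", "diag", "spherical")):
    """Sweep K and covariance type, keeping the configuration with the lowest BIC.

    A stand-in for the sweep described in the text; not Centaurus itself.
    """
    best = None
    for k in range(1, max_k + 1):
        for cov in covariance_types:
            for seed in range(n_runs):                 # independent random seeds
                gm = GaussianMixture(n_components=k, covariance_type=cov,
                                     random_state=seed).fit(X)
                score = gm.bic(X)                      # could also compare gm.aic(X)
                if best is None or score < best["bic"]:
                    best = {"k": k, "covariance": cov, "seed": seed,
                            "bic": score, "aic": gm.aic(X)}
    return best

# Illustrative use on synthetic data with two well-separated clusters.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
print(select_model(X, max_k=5))
```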

We implement Centaurus using production-quality, open-source software and validate it using synthetic datasets with known clusters. We also apply Centaurus in the context of a real-world agricultural analytics application and compare its results to the industry-standard clustering approach. The application analyzes fine-grained soil electrical conductivity (EC) measurements, GPS coordinates, and elevation data from a field to produce a “map” of differing soil zones. These zones can then be used by farmers and farm consultants to customize the management of different zones on the farm (Fridgen et al., Moral et al., Fortes et al., Corwin & Lesch). We compare Centaurus to the state-of-the-art clustering tool for farm management zone identification and show that Centaurus is more robust, obtains more accurate clusters, and requires significantly less input and effort from its users. In the sections that follow, we provide some background on the use of EC for agricultural zone management. We then describe the general form of the k-means algorithm, the variants for computing covariance matrices, and the scoring method that Centaurus employs. Following this, we present our datasets, an empirical evaluation of Centaurus, related research specific to Centaurus, and a summary of our contributions. The soil health of a field can vary significantly and change over time due to human activity and forces of nature. To optimize yields, farmers increasingly rely on site-specific farming, in which a field is divided into contiguous regions, called zones, with similar soil properties. Agronomic strategies are then tailored to specific zones to apply inputs precisely, to lower costs and input use, and to ultimately increase yields. Management zone boundaries can be determined with many different procedures: soil surveys with or without other measurements (Bell et al., Kitchen et al.); spatial distribution estimates of soil properties obtained by interpolating soil sample data (Mausbach et al., Wollenhaupt et al.); fine-grain soil electrical conductivity measurements (Mulla et al., Jaynes et al., Sudduth, Rhoades et al., Sudduth et al., Corwin & Lesch, Veris); and a combination of sensing technologies (Adamchuk et al.). EC-based zone identification is widely used because it addresses many of the limitations of the other approaches: it is inexpensive, it can be repeated over time to capture changes, and it produces useful and accurate estimates of many yield-limiting soil properties including compaction, water holding capacity, and chemical composition.

As a result, EC-based management tools are used extensively for a wide variety of field plants (Peeters et al., Aggelopooulou et al., Gili et al.). To collect EC data, EC sensors are typically attached to a GPS-equipped tractor or all-terrain vehicle and pulled across a field to collect measurements at multiple depths and at a very fine grain spatially. EC maps generated from this data can either be used to directly define management zones or to inform future, more extensive soil sampling locations (Veris, Lund et al.). Alternatively, EC values can be clustered into related regions using fast, automated, unsupervised statistical clustering techniques and their variants (Bezdek, Murphy, Fridgen et al., Molin & Castro, Fraisse et al.). Given the potential and widespread use of EC-based zone identification tools that rely on automated unsupervised algorithms, in this chapter we investigate the impact of using different k-means implementations and deployment strategies for EC-based management zone identification. We consider different algorithm variants, different numbers of randomized runs, and the frequency of degenerate runs – algorithm solutions which are statistically questionable because they include empty clusters, clusters with too few data points, or clusters that share the same cluster center (Brimberg & Mladenovic). To compare k-means solutions, we define a model selection framework that uses the Bayesian Information Criterion (Schwarz) to score and select the best model. Past work has used BIC to score models for the univariate normal distribution (Pelleg et al.). Our work extends this use to multivariate distributions and multiple k-means variants.

The k-means algorithm attempts to find a set of cluster centers that describe the distribution of the points in the dataset by minimizing the sum of the squared distances between each point and its cluster center. For a given number of clusters K, it first assigns the cluster centers by randomly selecting K points from the dataset. It then alternates between assigning points to the cluster represented by the nearest center and recomputing the centers (Lloyd, Bishop), while decreasing the overall sum of squared distances (Linde et al.). The sum of squared distances between data points and their assigned cluster centers provides a way to compare local optima – the lower the sum of the distances, the closer to a global optimum a specific clustering is. Note that it is possible to use distance metrics other than Euclidean distance to compute per-cluster differences in variance, or covariance between data features. Thus, for a given dataset, the algorithm can generate a number of different k-means clusterings – one for each combination of starting centers, distance metric, and method used to compute the covariance matrix. Centaurus integrates both Euclidean and Mahalanobis distance. The computation of Mahalanobis distance requires computation of a covariance matrix for the dataset. In addition, each of these approaches for computing the covariance matrix can be Tied or Untied. Tied means that we compute a covariance matrix per cluster, take the average across all clusters, and then use the averaged covariance matrix to compute distance. Untied means that we compute a separate covariance matrix for each cluster, which we use to compute distance.
Using a tied set of covariance matrices assumes that the covariance among dimensions is the same across all clusters, and that the variation in the observed covariance matrices is due to sampling variation. Using an untied set of covariance matrices assumes that each cluster is different in terms of its covariance between dimensions. Users upload their datasets to the web service frontend as files in a simple format: comma-separated values (CSV).
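To make the tied/untied distinction concrete, here is a minimal sketch of computing per-cluster Mahalanobis distances under both assumptions; the regularization term, helper names, and synthetic data are illustrative assumptions, not Centaurus's implementation.

```python
import numpy as np

def mahalanobis_sq(x, center, cov_inv):
    """Squared Mahalanobis distance of point x to a cluster center."""
    d = x - center
    return d @ cov_inv @ d

def cluster_cov_inverses(X, labels, centers, tied=True, reg=1e-6):
    """Per-cluster inverse covariance matrices for Mahalanobis distance.

    tied=True averages the per-cluster covariances into one shared matrix;
    tied=False keeps a separate covariance per cluster, as described above.
    """
    k, d = centers.shape
    covs = []
    for j in range(k):
        pts = X[labels == j]
        c = np.cov(pts, rowvar=False) if len(pts) > 1 else np.eye(d)
        covs.append(c + reg * np.eye(d))      # regularize for invertibility
    if tied:
        shared = np.linalg.inv(np.mean(covs, axis=0))
        return [shared] * k
    return [np.linalg.inv(c) for c in covs]

# Illustrative use with two loose synthetic clusters.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 2, (50, 2))])
labels = np.array([0] * 50 + [1] * 50)
centers = np.array([X[labels == j].mean(axis=0) for j in (0, 1)])
inv_tied = cluster_cov_inverses(X, labels, centers, tied=True)
print(mahalanobis_sq(X[0], centers[0], inv_tied[0]))
```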

A major limitation of this study is that the reference dataset was not entirely accurate

The plotted detections in Figure 10 and Figure 12 are counted in the overall AP and AR metrics as successful detections because the IoU of the detection and reference label is greater than 50%. Yet many pixels that belong to individual center pivot fields are not included in the detection. Across similar scenes, Mask R-CNN does not appear to have the consistent boundary eccentricity bias that FCIS has. Because Mask R-CNN showed better boundary accuracy along with comparably high performance metrics relative to the FCIS model, further comparisons and visual examples comparing accuracy across different field size ranges and with different training dataset sizes used Mask R-CNN instead of FCIS. Many scenes are more complex than the arid landscape with fully cultivated center pivots shaped like full circles in Figure 10. Figure 11 is a representative example of a scene with more complex fields. These include non-center-pivot fields; center pivots containing two halves, quarters, or other fractional portions in different stages of development; and partial pivots, which are semicircular not because the rest of the circle is in a different stage of development or cultivation, but because another landscape feature restricts the field’s boundary. In this scene, at least 25% of the detections are below the 90% confidence threshold, and many atypical pivots are missed based on this threshold. Figure 12 is a simpler scene that also has a high density of center pivot fields. In this case the detections more closely match the reference labels and detection confidence scores are higher, either because of the FCIS model’s tendency to produce higher confidence scores or because the scene has less variation in center pivot field type. Figure 13 highlights another common issue when testing both models: that reference labels are truncated by the tiling grid used to make 128 by 128 sized samples from Landsat 5 scenes.

These tend to be missed detections based on the 90% confidence threshold, since they are not mapped with a high confidence score. There are cases where no high confidence detections above 90% are produced, such as in Figure 14. In this scene, No Data values in the Landsat 5 imagery, partial pivots near non-center-pivot fields, and mixed pivots with indistinct boundaries all result in a scene that is not mapped with high confidence. However, in cases where there is high contrast between center pivot fields and their surrounding environment, they are mapped nearly perfectly by the Mask R-CNN model with high confidence scores. Figures 16 through 18 were selected to illustrate the impact that size range has on detection accuracy, since center pivots can come in various semicircular shapes and sizes. Figure 16 shows that in a scene with no reference labels, no high confidence detections were produced for any size category. The highest confidence score associated with an erroneous detection in this case was at most ~0.66, which is relatively low for both models. Figure 17 shows a case where a large center pivot is mapped accurately, whereas smaller and medium center pivots in the scene are not. This example shows the scale invariance of the Mask R-CNN model in that it can accurately map the large center pivot because it looks similar to a medium sized center pivot, only larger. On the other hand, smaller center pivots, partial pivots, and mixed pivots are detected with lower confidences or not detected. Figure 18 highlights a case where a large center pivot is not mapped with a confidence score above 90%. Unlike Figure 17, where the large center pivot is uniform in appearance, the large center pivot in Figure 18 has a mixture of three different land cover types. This indicates that the inaccuracies in large center pivot detection may come from large center pivots that were partially cultivated or divided into multiple portions with different crop types or cultivation stages. Many false negatives in this scene and others are the result of partial pivots, mixed pivots, or pivots that had not yet been annotated in the 2005 dataset.

Small fields are more difficult to detect than large ones, so, as expected, removing 50% of the training data available to train the Mask R-CNN model caused large drops in performance. Having more training data available to improve features that are attuned to detecting small fields is particularly important for overall model performance. The metric results for small fields are likely biased toward a worse result because many full pivots overlapped a sample image boundary, leading to small, partial areas of pivot irrigation at scene edges being overrepresented after Landsat scenes were tiled into 128×128 image chips (a sketch of this tiling step follows below). Since these fields have a less distinctive shape, some full pivots at scene edges were missed. However, since small fields make up a minority of the total population of fields and the medium and large categories were more accurate by 20 or more percentage points for both AR and AP, both the FCIS and Mask R-CNN models can map a substantial majority of pivots with greater than 50% intersection over union. Zhang et al. tested their model on the same geographic locations in a different year and used samples produced from two Landsat scenes to train their model over a 21,000 km^2 area, versus the Nebraska dataset, which spans 200,520 km^2. These results extend the work by Zhang et al., as the test set is geographically independent from both the training and validation sets and 32 Landsat 5 scenes across a large geographic area were used to train and test the model. Furthermore, Zhang et al.'s approach produces bounding boxes for each field, while Mask R-CNN produces instance segmentations that can be used to count fields and identify pixels belonging to individual fields. While comparing metrics is useful, they do not indicate how performance varies across different landscapes or how well the detected boundary matches the reference, given that a detection is counted as correct when its IoU exceeds 50%. While the FCIS model slightly outperformed the Mask R-CNN model in terms of average precision for the medium size category, it also exhibited poorer boundary fidelity. Figure 10 demonstrates arbitrarily boxy boundaries that appear to be truncated by the position-sensitive score map voting process that is the final stage of the FCIS model.
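The sketch below cuts a scene array into non-overlapping 128×128 chips to illustrate the tiling step referenced above; the band count, scene dimensions, and edge handling are assumptions for illustration, since the exact tiling procedure is not reproduced here.

```python
import numpy as np

def tile_scene(scene: np.ndarray, chip_size: int = 128):
    """Cut a (bands, height, width) scene array into non-overlapping
    chip_size x chip_size chips, dropping the partial strips at the right and
    bottom edges. Fields straddling a chip boundary end up truncated, which is
    the source of the partial-pivot edge effect discussed above."""
    _, height, width = scene.shape
    chips = []
    for row in range(0, height - chip_size + 1, chip_size):
        for col in range(0, width - chip_size + 1, chip_size):
            chips.append(scene[:, row:row + chip_size, col:col + chip_size])
    return chips

# Hypothetical 6-band scene roughly the size of a Landsat 5 TM image.
scene = np.zeros((6, 7000, 7800), dtype=np.float32)
chips = tile_scene(scene)
print(len(chips))  # 54 rows x 60 columns = 3240 chips
```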

The Mask R-CNN model's high confidence detections, those with a confidence score of 0.9 or higher, matched the reference boundaries much more closely, showing that this model can be usefully applied to delineate center pivot agriculture in a complex, humid landscape. While the FCIS model could also be employed with post-processing to assign a perfect circle to the center of each FCIS detection, this would lead to further errors that would overestimate the size and misrepresent the shape of partial center pivots (see the sketch after this paragraph). The results from Mask R-CNN on medium sized fields are encouraging because they indicate that the model could potentially generalize well to semi-arid and arid regions outside of Nebraska. In addition, the results from Figures 10, 12, and 15 indicate that where many uniform center pivots are densely colocated and few non-center pivot fields are present, a higher proportion of fields is mapped correctly. This is encouraging, since in many parts of the world center pivots are densely colocated or are cultivated in semiarid or arid environments where contrast is high. Therefore, the model can be expected to generalize well outside of Nebraska, though testing the model in other regions remains future work. False negatives are present for many scenes that are heavily cultivated, and in many cases it is ambiguous whether the absence of a high confidence detection is due to the absence of a center pivot or to Landsat's inability to resolve fuzzier boundaries between a field and its surrounding environment. A time series based approach similar to Deines et al. could improve detections so that only pivots that exhibited a pattern of increased greenness would be detected in a given year. However, this requires multiple images within a growing season from a Landsat sensor, which are not always available due to clouds, and it is difficult to incorporate into a CNN based segmentation method because it precludes the use of pretrained networks, which ease the computational burden of training and detection.
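The sketch below is a hypothetical reconstruction of the circle-assignment post-processing mentioned above, fitting the circle that circumscribes a detected mask; it is included only to show why such a step overestimates partial pivots, and none of the names come from the FCIS or Mask R-CNN code bases.

```python
import numpy as np

def circumscribed_circle(mask: np.ndarray) -> np.ndarray:
    """Replace a detected mask with the circle centered on the mask centroid
    that reaches its farthest pixel. For a semicircular partial pivot this
    more than doubles the mapped area, which is why this post-processing step
    was not adopted here."""
    rows, cols = np.nonzero(mask)
    center_r, center_c = rows.mean(), cols.mean()
    radius = np.sqrt((rows - center_r) ** 2 + (cols - center_c) ** 2).max()
    rr, cc = np.indices(mask.shape)
    return (rr - center_r) ** 2 + (cc - center_c) ** 2 <= radius ** 2

# Hypothetical partial pivot: only the western half of a circular field.
rr, cc = np.indices((128, 128))
half_pivot = ((rr - 64) ** 2 + (cc - 64) ** 2 <= 40 ** 2) & (cc < 64)
fitted = circumscribed_circle(half_pivot)
print(half_pivot.sum(), fitted.sum())  # the fitted circle covers a far larger area
```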

Another alternative is to develop higher quality annotated datasets that make meaningful semantic distinctions between agriculture in different stages of cultivation. For example, in Figure 11, brown center pivots that are not detected could instead be labeled as "fallow" or "uncultivated", and this information could be used to refine the training samples used to train a model to segment pivots within specific cultivation stages. With 4 hours of training on 8 GPUs, the original implementation of Mask R-CNN achieved 37.1 AP using a ResNet-101-FPN backbone on the COCO dataset, a large improvement over the best FCIS model tested, which achieved 33.6 AP. This amounts to a difference of 3.5 AP percentage points, with Mask R-CNN performing better on the COCO dataset. On the Nebraska dataset, for the medium size category, the difference in AP was 3.2 percentage points, with the FCIS model outperforming Mask R-CNN. However, Mask R-CNN outperformed FCIS in terms of AR by 5.1 percentage points. These results indicate that COCO detection baselines are not necessarily reflective of overall metric performance, given that FCIS outperformed Mask R-CNN in the more numerous size category. The improvements on the COCO baseline do, however, reflect the improved boundary accuracy of Mask R-CNN relative to the FCIS model. The AR and AP results on the Nebraska center pivot dataset are higher than those of Rieke, which is to be expected since center pivots are a simpler detection target than fields in the Denmark dataset, which come in a wider variety of shapes and sizes. What is especially notable is that even though Rieke trained the FCIS model on approximately 11 times the training data used in this study, the AP and AR results on the Nebraska dataset were about 10 to 20 points higher for each of the size categories. The difference in AP for the small category was 0.42 - 0.28 = 0.14, for the medium category 0.732 - 0.473 = 0.259, and for the large category 0.734 - 0.51 = 0.224. Rieke used 159,042 samples compared to 13,625 samples used in this study; the samples were the same size as in this study, 128×128 pixels. Even though the size categories used in this study and in Rieke are not exactly the same, the fact that each category saw substantially better performance for the FCIS model on the Nebraska dataset, despite an order of magnitude less training data, indicates that the relative simplicity of the center pivot detection target played a substantial role in the jump in performance. This is an important lesson for remote sensing researchers looking to use CNN techniques to map fields or other land cover objects: the feasibility of mapping the detection target can matter even more than using an order of magnitude more training data to improve a model's ability to generalize. These results are comparable to results achieved for other detection targets. Wen et al. applied a slightly modified version of Mask R-CNN that can produce rotated bounding boxes to segment building footprints in Fujian Province, China from Google Earth imagery. The model was trained on manually labeled annotations across a range of scenes containing buildings with different shapes, materials, and arrangements.
Though an independent test set separate from the validation set was not used, the model was tested on half of the imagery collected, while the other half was used to train the model, providing a large number of samples for testing. The full dataset, split between training and testing/validation, amounted to 2,000 500×500 pixel images containing 84,366 buildings.

Citizens of Member States do not have standing to bring WTO-based complaints

As of July 2012, the GENERA database listed 583 scientific studies on the safety of GMO crops and their food ingredients. In addition, the experiential evidence of billions of meals consumed by persons around the world since the commercial release of genetically-engineered crops in 1996 supports the safety of genetically-modified foods. Since 1996, there has not been one verified health complaint concerning humans, animals, or plants from genetically-engineered crops, raw foods, or processed foods. Despite some published attempts to deny this overwhelming scientific evidence in support of genetically engineered foods, the scientific consensus is clear: genetically-engineered crops, foods, and processed ingredients do not present health and safety concerns for humans, animals, or plants. SPS Agreement Article 3 sets forth provisions that could save Proposition 37. Paragraph 3.2 affirms an SPS measure that conforms to international standards relating to health and safety. However, Paragraph 3.2 does not protect Proposition 37 because there are no international standards that categorize genetically-engineered raw or processed foods as unsafe or unhealthy. Comparing Proposition 37 to the legal standards in the SPS Agreement shows that Proposition 37 almost assuredly is not compliant with the SPS Agreement. Indeed, the WTO SPS claim against Proposition 37 is so strong that its proponents are probably not going to defend it as meeting the legal standards of the SPS Agreement. Despite its textual language and the electoral advertising emphasizing food safety and health concerns, proponents will argue that Proposition 37 cannot properly be characterized as a labeling requirement "directly related to food safety." Proponents of Proposition 37 will seek to have it classified as a technical barrier to trade in order to avoid the SPS Agreement and its scientific evidence standards.

The TBT Agreement applies to technical regulations, including "marking or labelling requirements as they apply to a product, process or production method." As Proposition 37 imposes mandatory labels, Proposition 37 is a technical regulation under the TBT definitions. TBT Article 2 sets forth several provisions against which to measure technical regulations for compliance with the TBT Agreement. It states, "Members shall ensure that technical regulations are not prepared, adopted or applied with a view to or with the effect of creating unnecessary obstacles to international trade. For this purpose, technical regulations shall not be more trade-restrictive than necessary to fulfill a legitimate objective, taking account of the risks non-fulfillment would create. Such legitimate objectives are, inter alia, … the prevention of deceptive practices; protection of human health or safety, animal or plant life or health, or the environment. …" Article 2.2 expressly lists three legitimate objectives: national security requirements; protection of human health or safety, animal or plant life or health, or the environment; and prevention of deceptive practices. As for health and safety, Proposition 37 does not provide a label giving consumers information about how to use a product safely, a safe consumption level, or any other health and safety data, unless the warning-style label against genetically-modified food itself is considered a valid warning. But, as discussed with regard to the SPS Agreement, there is no scientific evidence available to indicate that genetically modified foods have negative health or safety implications for humans, animals, or the environment. Proposition 37 does not assert a legitimate health and safety objective under TBT Article 2.2. Proposition 37 can be defended as upholding the third legitimate objective, prevention of deceptive practices. Indeed, the Proposition is titled the "California Right to Know Genetically Engineered Food Act," indicating that labels will assist California consumers in knowing what they are purchasing and avoiding purchases that they desire to avoid. Those who would challenge Proposition 37 for noncompliance with TBT Article 2.2 will argue that Proposition 37 is not a protection against deceptive practices. Opponents can point to the structure of the proposed Act and its exemptions as evidence that Proposition 37 will actually confuse consumers more than inform them accurately.

Proposition 37 exempts foods that lawfully carry the USDA Organic label. Under the USDA National Organic Program, organic foods can contain traces of unintentional genetically-modified crops or ingredients without losing the organic label. Simultaneously, California consumers still will be eating unlabeled food products containing genetically modified crops or ingredients at trace levels, except those products will carry the label "USDA Organic." In other words, opponents of Proposition 37 will argue that Proposition 37 is itself the deceptive labeling practice and, thus, fails to promote a legitimate objective under TBT Article 2.2. Proponents of Proposition 37 will respond by citing the recent report of the WTO Appellate Body relating to the challenge of Canada and Mexico against the United States country-of-origin label for meat. The WTO Panel ruled against COOL on the grounds of a violation of TBT Article 2.2 because the COOL law would confuse consumers. But the WTO Appellate Body reversed this Panel ruling and determined that COOL did provide information as a legitimate objective under Article 2.2. Aside from "legitimate objectives," TBT Article 2.2 also requires that technical regulations not be "unnecessary obstacles to international trade" and "not more trade-restrictive than necessary." Opponents of Proposition 37 will argue that it violates these TBT obligations primarily because consumers already have labels that provide the same level of consumer protection from deception. Opponents will point to the existence of the Non-GMO label and the USDA-Organic label that allow consumers to choose foods with minimal levels of genetically-engineered content. These Non-GMO and USDA-Organic labels are voluntary labels that do not impose legal and commercial burdens upon other food products in international trade. TBT Article 2.1 also provides a standard against which to measure Proposition 37 by stating, "Members shall ensure in respect of technical requirements, products imported from the territory of any Member shall be accorded treatment no less favorable than that accorded like products of national origin and to like products originating in any other country." TBT Article 2.1 requires Members to treat "like products" alike and to refrain from favoring either domestic or other international "like products" as against the products of the Member bringing the Article 2.1 complaint.

Obviously, proponents of Proposition 37 consider genetically-engineered agricultural products to be fundamentally different from organic and conventional agricultural products. Proponents will argue that Proposition 37 deals with genetically-engineered agricultural products that constitute a class of products of their own. Opponents of Proposition 37 will respond with two arguments. First, opponents can argue that regulatory agencies around the world have considered genetically-engineered raw agricultural products to be substantially equivalent in every regard to conventional and organic agricultural products. Opponents will argue that the substantive qualities of genetically-engineered agricultural products make them "like products" and that the process producing the "like products" does not create a separate product classification. Opponents will argue "product" over "process" as the appropriate TBT Article 2.1 interpretation. Second, opponents of Proposition 37 will highlight the fact that Proposition 37 imposes labels, testing, and paper-trail tracing on vegetable oils even though the oil has no DNA remnants of the crop from which it came. Soybean oil is soybean oil regardless of what variety of soybean the food processor crushed to produce the oil. With regard to the TBT Article 2.1 arguments, opponents of Proposition 37 may gain support from the Canada and Mexico WTO complaints against the U.S. COOL law. Both the WTO Panel and the WTO Appellate Body determined that Canadian and Mexican meat was a "like product" to United States meat. Because the meat was a "like product," the WTO reports ruled that the U.S. COOL law violated TBT Article 2.1 by imposing discriminatory costs and burdens on meat imported into the United States. TBT Articles 2.4 and 2.5 provide a safe harbor for technical regulations that adopt international standards. However, the Codex Alimentarius Commission, the international standards body for food labels, has not created an international standard that proponents of Proposition 37 can claim as its origin and safe harbor. SPS Agreement Article 11 and TBT Agreement Article 14 are both titled "Consultation and Dispute Settlement." The SPS Agreement and the TBT Agreement thereby make explicit that Member States to these agreements can complain using the WTO Dispute Settlement Understanding Agreement. For example, Argentina, Brazil, or Canada, all likely to be affected by Proposition 37 because they export soybeans and canola, especially for cooking oils, have the treaty right to file a complaint within the WTO dispute resolution system. Bringing a WTO complaint is fraught with difficulties. Members must think politically and diplomatically about whether it is worthwhile to bring a complaint, even a clearly valid complaint. Members must be willing to expend significant resources in preparing, filing, and arguing WTO complaints. Finally, even if a Member prevails in the Panel or Appellate Body reports, the Member recognizes that its WTO remedies are indirect and possibly not fully satisfactory. Although the United States is a Member of the WTO Agreements, the United States, in contrast to Argentina, Brazil, and Canada, is not an exporting Member to California.

Consequently, the United States cannot file a WTO complaint invoking the DSU Agreement against California. But by being a Member of the WTO Agreements, the United States has ratified these treaties as part of the law of the United States, transforming these treaties into the supreme law of the land under the U.S. Constitution. Moreover, under the WTO Agreements, the United States has the duty to ensure that local governments comply with the WTO Agreements. Therefore, the United States has the legal authority to challenge Proposition 37 in order to protect its supreme law of the land and to avoid violating its WTO obligations. Opponents of Proposition 37 are likely to challenge Proposition 37 immediately if California voters adopt it in November 2012. As indicated in the introduction, these opponents are likely to bring challenges on three different grounds under the U.S. Constitution. These opponents have non-frivolous grounds upon which to pursue these U.S. constitutional challenges. Whether these opponents can add a claim challenging Proposition 37 based on alleged violations of the SPS Agreement or the TBT Agreement is much less clear. TBT Agreement Article 14.4 highlights that the opponents will have difficulty in bringing a WTO-based challenge. TBT Article 14.4 makes clear that Member States have the legal standing to bring WTO-based complaints. Proponents of Proposition 37 will challenge the standing of those opponents who seek to challenge Proposition 37. Proponents will seek to have this WTO-based claim dismissed because the opponents do not have a right to make a legal claim based on the WTO. Proponents will argue that standing to bring a WTO-based claim resides solely in exporting Member States or the United States. By contrast, opponents bringing the immediate challenge containing a WTO-based claim will argue that they are not invoking the WTO Agreements directly. Opponents will argue that they are challenging Proposition 37 to enforce the supreme law of the United States. By invoking the supreme law of the United States, opponents will hope to blunt the standing issue and to avoid dismissal of the WTO-based claim. Assuming that the United States does not file a lawsuit against California and that other opponents are blocked, by the doctrine of standing, from raising WTO-based challenges, Proposition 37, if adopted in November 2012, would become California law. Thus, the first lawsuits related to Proposition 37 would come through either administrative action or a consumer lawsuit against food companies and grocery stores alleging failure to label or misbranding. When facing administrative actions or consumer lawsuits, food companies and grocery stores will want to respond with all possible legal challenges to Proposition 37. Food companies and grocery stores will want to raise the issues of whether Proposition 37 complies with the SPS Agreement and the TBT Agreement as defenses to being found liable for administrative penalties or consumer damages. The agency or consumer bringing the lawsuit against the food company or grocery store will argue that the food company or grocery store does not have standing to raise the WTO-based challenges. The plaintiff likely has to concede that the defendant faces an actual injury. However, the plaintiff will vigorously argue that the defendant is not within the zone of interests that the WTO Agreements are meant to protect. In other words, the plaintiff will argue that the WTO Agreements are meant to protect only sovereign interests and not private commercial interests.

Nitrous oxide emissions alone accounted for approximately 26% of the total

However, if the timing and controls on hot moments are unknown or sporadic, less frequent sampling may significantly underestimate N2O emissions. Our results suggest that roughly 8,000 randomized individual chamber flux measurements would be needed to accurately estimate annual N2O budgets from these agricultural peat lands with a 95% confidence interval and a 10% margin of error, assuming the drivers of hot moments were not well understood. Approximately 500 individual measurements would yield a 50% margin of error. Given the more sporadic nature of CH4 hot moments, our results suggest that it is even more difficult to accurately estimate CH4 fluxes with periodic sampling in these ecosystems. Analyses found that at least 17,000 and 2,500 individual flux measurements would be needed to estimate annual CH4 budgets within a 10% and 50% margin of error, respectively. The agricultural maize peat land soil studied here was a much larger source of soil GHG emissions than other maize agroecosystems. While agricultural peat soils are highly productive, average annual GHG emissions were 3.6-33.3 times greater on an area-scaled basis and 3-15.6 times greater on a yield-scaled basis relative to other agricultural maize emissions estimates. We conducted an upscaling exercise as a first approximation of the potential impacts of maize peat land fluxes on regional GHG budgets. Our estimates suggested that maize agriculture on similar peat soils in the region could emit an average of 1.86 Tg CO2e y-1. This value is significantly higher than previous estimates for the region and highlights the importance of high frequency N2O measurements for capturing hot moments in N2O fluxes, the disproportionate impact N2O emissions have on agricultural peat land GHG budgets, and the fact that these agricultural peat lands are significant N2O sources.
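The sketch below shows the kind of resampling analysis that can produce such sample-size estimates, applied to a synthetic, hot-moment-dominated flux record; the lognormal parameters, trial counts, and function names are illustrative assumptions, not the study's data or code.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic hourly N2O flux record for one year, heavily right-skewed so that
# a few hot moments dominate the annual budget (illustrative parameters only).
n_hours = 365 * 24
fluxes = rng.lognormal(mean=0.0, sigma=1.75, size=n_hours)
true_mean = fluxes.mean()

def fraction_within_margin(sample_size, margin=0.10, n_trials=1000):
    """Fraction of random subsamples (drawn with replacement from the record)
    whose mean falls within +/- margin of the full-record mean."""
    hits = 0
    for _ in range(n_trials):
        subsample = rng.choice(fluxes, size=sample_size, replace=True)
        if abs(subsample.mean() - true_mean) <= margin * true_mean:
            hits += 1
    return hits / n_trials

# Increasing the number of chamber measurements until ~95% of subsample means
# land within the 10% margin gives the required sample size.
for n in (250, 1000, 4000, 8000):
    print(n, round(fraction_within_margin(n), 2))
```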

We also found that irrigation timing and duration, not fertilization, were the predominant drivers of N2O and CH4 emissions and a significant source of the total GHG budget. Determining management strategies that reduce soil N2O and CH4 emissions, particularly changes in flood irrigation timing and duration, could have a disproportionate impact on reducing total agricultural peat land GHG emissions. Although legends of humans using coffee in Ethiopia date back as early as 875 A.D., the earliest verifiable evidence of human coffee consumption occurs in Yemen in the 15th century. At this time, it was illegal to bring unroasted coffee out of Arabia, and strict measures were taken to ensure that viable coffee seeds did not leave the country. The birth of coffee production in India is attributed to the Indian Muslim saint Baba Budan, who, on his return from a pilgrimage to Mecca, allegedly smuggled seven coffee beans out of Arabia by hiding them in his beard. In 1670 he planted these seeds in Karnataka, and cultivation soon spread throughout the state and into neighboring regions. The first large-scale plantations arose with British colonization and spread rapidly throughout South India, fueled by increasing demand for export to northern latitudes. The proliferation of coffeehouses in Western Europe during this era proved to have substantial social consequences. Also known as "Penny Universities" since the price of entry and a cup of coffee was commonly one penny, coffeehouses in 17th century Britain came to play an important role in social and political discourse. In a society with such a rigid socioeconomic class structure, coffeehouses were unique because they were one of the only places frequented by customers of all classes. Thus they became popular establishments for discourse and debate, open to all classes and unfettered by the structure of academic universities. Intellectuals found in "the hot black liquor a curious stimulus quite unlike that produced by fermented juice of grape." English coffeehouses "provided public space at a time when political action and debate had begun to spill beyond the institutions that had traditionally contained them," and because of this they are widely accepted as playing a significant role in birthing the age of Enlightenment in Europe. While coffee was bringing the Enlightenment to Western Europe, the commodity was having the opposite effect in the regions where it was being produced.

In India, the age of British plantations was rife with suffering and oppression, as slavery and forced labor were common practice. Historical research reveals that "during Europe's industrial revolution and rise of bourgeois society, slavery, coffee production, and plantations were inextricably linked." Historical records indicate that in the 1830s, the East India Company held over 247,000 slaves in Wayanad and the Malabar coast alone. Even after slavery was officially abolished in 1861, so-called "agricultural slavery" and indentured labor on plantations continued. According to historical accounts, indentured laborers were treated much as they had been during the height of slavery. To this day, an estimated 18.3 million people in India and 46 million people worldwide live in conditions of modern de facto slavery, such as bonded labor, human trafficking, and forced marriage. The global coffee market has always been volatile. Plagued by unpredictable harvests, susceptibility to weather events, and massive disease outbreaks, regional coffee production has risen and fallen dramatically over the centuries. For example, in the late 19th century in Sri Lanka an outbreak of the fungal pathogen known as "coffee rust" caused 90 percent of the area under coffee cultivation on the island to be abandoned. This past century has been no different for India. As the Great Depression affected coffee exports around the world in the 1930s, the Coffee Board of India was established to protect farmers and promote consumption of coffee. The Coffee Board of India, run by the federal government's Ministry of Commerce and Industry, pooled farmers' coffee for export at a set price. This provided price stability for farmers but also eliminated incentives to improve quality. From 1991 to 1996 a series of economic reforms relegated the coffee market in India entirely to the private sector. Immediately thereafter, the price of coffee fell from its 1997 level of around $2.50 per pound to a staggering 45 cents per pound in 2002, the lowest it had been in over fifty years. India was not alone in this plight. While certainly not the only cause of financial insecurity among farmers, the spread of neoliberalism and free trade in the global commodity market has historically been associated with large increases in price volatility and overall downward trends in price, which has had deleterious effects on small-scale producers who depend on these markets for their livelihoods.

Especially in the 1980s and 1990s, growth and consolidation among multinational commodity traders led to a relative loss of market power among producing nations, while foreign pressure from international donors forced many of those nations to privatize their commodity export authorities against their own best interests. This has led to income instability and poverty for many coffee farmers around the world. The coffee farmers of Kerala are facing many of the same challenges that currently plague coffee farmers all over the world. In recent years the global price of coffee has fallen drastically, from $2.88 per pound in 2011 to 93 cents per pound as of May 2019. While maintaining its downward trend over the past decade, the price continues to fluctuate wildly, making it impossible for farmers to budget their yearly expenses. It is not unheard of for the price to dip below an individual farmer's production costs, leaving powerless farmers forced to sell their harvest at a loss, or let it spoil in the fields and get nothing at all. How is it possible that coffee farmers are selling their harvest for less than what it cost them to produce it? While this seems to contradict the very basis of economics, it is a common situation facing farmers of many different cash crops, where prices are determined by what are called "buyer-driven supply chains." While many factors go into the creation of buyer-driven supply chains, a few of the largest are discussed below. In short, farmers do not have the capacity to determine the price they get for their own products. Prices are driven by market conditions, speculation, futures contracts, and corporate interests that control the majority of world-market shares. With the growth of powerful commodities traders and the liberalization of international markets, prices for coffee and incomes for farmers have reached historic lows. This has led to an increasingly tenuous existence for those who already struggle to get by. Historically, coffee cultivation consisted of only one plant species, Coffea arabica. Today, Coffea arabica still makes up most of the world's coffee production, but cultivation of another species, Coffea canephora, also known as robusta coffee, is growing due to its greater hardiness and productivity. In addition, a very small amount of a third species, Coffea liberica, is grown. Although modern coffee production is currently limited to these three species, a large diversity of sub-varieties and hybrids are grown throughout the world, each with their own unique flavors and characteristics. Coffea arabica is widely lauded as having the best cup quality and consistently fetches a higher price on the global commodity market. It also tends to grow better in slightly shaded conditions, making it conducive to traditional intercropping methods.

In India, Coffea arabica is usually grown under the shade of other cultivated trees, such as jackfruit and areca nut, or under the shade of native forest trees, which are used to support vines of black pepper. In the understory below the coffee plants, ginger, clove, and turmeric are grown. In addition to sustaining families of farmers for generations, a recent study has shown that these multi-species farms support much higher levels of animal biodiversity than conventional monocultures, and that they sequester soil carbon at the same rate as surrounding rain forests. However, the rise of C. canephora as a cash crop has changed things in Kerala. Due to its higher yields and tolerance to pests such as coffee rust, C. canephora plantations have replaced multi-species C. arabica farms over huge swaths of India in recent decades. Today, nearly 80% of coffee grown in Wayanad and surrounding regions is C. canephora. Since this robusta species prefers full-sun conditions, the shift away from C. arabica is associated with the removal of shade trees and a proliferation of full-sun monoculture coffee plantations. This has had substantial consequences for biodiversity, erosion, watershed management, and other ecosystem services, and it has the potential to negatively impact the small amount of C. arabica that remains in Kerala. Studies indicate that deforestation can lead to a hotter and drier local climate. Coffea arabica is a finicky plant, thriving in a narrow temperature range between 18˚ and 21˚ Celsius. It follows that this pattern of tree removal could lead to conditions in Kerala becoming less ideal for Coffea arabica. This suggests the potential for a feedback loop, in which robusta production and the associated deforestation lead even more farmers to convert to robusta in order to cope with changing environmental conditions. If climate change is occurring in Kerala, it would threaten not only cultivated coffee but also a multitude of wild species. At least six species of wild coffee are known to occur in India. According to a recent study there are now 124 known species of wild coffee, each with their own understudied and potentially useful characteristics, such as drought or pest resistance, unique flavor profiles, or naturally decaffeinated beans. Of these, an estimated 60% are threatened with extinction, due mostly to climate change and habitat loss. The following analysis examines the local climate of Wayanad in recent decades to determine whether any changes are occurring. Farmers interviewed during a field visit to Kerala assert that local conditions have become hotter and drier, especially during specific times of the year that are important to the life cycle of the coffee plant. The farmers of Wayanad have described an increasingly unpredictable monsoon season, a failure of the "blossom rains" in early spring, and a decrease in November showers. The following study was conducted to corroborate the personal experience of these farmers and, in the event that trends are found, to determine whether causal factors point to global-scale or local forcings; a sketch of one possible trend test appears below. The district of Wayanad in the State of Kerala, India is a mountainous tropical region with altitudes ranging from 700 to 2100 m above sea level, daily temperature minimums from 14˚ to 20˚ C, and daily temperature maximums from 25˚ to 32˚ C.
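The sketch applies ordinary least squares to a seasonal rainfall series as one possible trend test; the file name, column names, and the February to March "blossom rain" window are hypothetical placeholders, since the data source and methods are not detailed in this excerpt.

```python
import pandas as pd
from scipy import stats

# Hypothetical monthly record with columns 'year', 'month', and 'rain_mm'.
df = pd.read_csv("wayanad_monthly_climate.csv")

# Total rainfall during an assumed early-spring "blossom rain" window
# (February and March here), summed for each year.
blossom = (
    df[df["month"].isin([2, 3])]
    .groupby("year")["rain_mm"]
    .sum()
)

# Ordinary least-squares trend: slope in mm per year, with its p-value.
result = stats.linregress(blossom.index, blossom.values)
print(f"trend: {result.slope:.2f} mm/yr, p = {result.pvalue:.3f}")
```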

The application of an appropriate photochemical model could answer this unknown

Although some microorganisms also fix nitrogen, they do not represent significant sources of atmospheric NH3 on Earth. Likewise, the associated detection of N2O and other nitrogen-containing species would provide confidence that the production of NH3 is associated with industrial disruption of a planetary nitrogen cycle. It is worth emphasizing that NH3 or N2O alone would not necessarily be technosignatures, as either of these species could be a false positive for life or could arise from nontechnological life. Rather, it is the combination of NH3 and N2O that would indicate disruption of a planetary nitrogen cycle by an ExoFarm, which may also show elevated abundances of NOx gases as well as CH4. The short lifetime of NH3 in an oxic atmosphere implies that a detectable abundance of NH3 would suggest a continuous production source. Although NH3 could be produced abiotically by combining N2 and H2, an atmosphere rich in H2 would be unstable to the O2 abundance required to sustain photosynthesis. The technosignature of an ExoFarm would therefore require the simultaneous detection of both NH3 and N2O in the atmosphere of an exoplanet along with O2, H2O, and CO2. Large-scale agriculture based on Haber–Bosch nitrogen fixation could be detectable through the infrared spectral absorption features of NH3 and N2O as well as CH4. A robust assessment of the detectability of such spectral features in an Earth-like atmosphere would ideally use a three-dimensional coupled climate–chemistry model to calculate the steady-state abundances of each of these nitrogen-containing species as a function of biological and technological surface fluxes. But as an initial assessment, we consider a scaling argument to examine the spectral features that could be detectable for present-day and future Earth agriculture.

We define four scenarios for considering agriculture on an Earth-like planet, with the corresponding atmospheric abundances of nitrogen-containing species listed in Table 1. The present-day Earth scenario is based on recent measurements of NH3, N2O, and CH4 abundances. The choice of 10 ppb for NH3 is toward the higher end for Earth today and corresponds to regions of intense agricultural production. The preagricultural Earth scenario serves as a control, with the agricultural and technological contributions of NH3, N2O, and CH4 removed. Note that this approach assumes that eliminating the technological contributions to the atmospheric flux of these nitrogen-containing species will reduce the steady-state atmospheric abundance by a similar percentage; this approach is admittedly simplified, but the results can still be instructive for identifying the possibility of detectable spectral features. The third and fourth scenarios project possible abundances of NH3, N2O, and CH4 for futures with 30 and 100 billion people, respectively. Earth holds about 7.9 billion people today, and population projections differ on whether or not Earth's population will stabilize in the coming century. These two population values were selected because they correspond approximately to the maximum total population supportable using all current arable land and all possible agricultural land. Most published estimates of Earth's carrying capacity range from about 8 to 100 billion, although some estimates are less than 1 billion while others are more than 1 trillion. Theoretically, an extraterrestrial population with the energy requirements of up to 100 billion calorie-consuming humans could sustain Haber–Bosch synthesis over long timescales, as long as sustainable energy sources are used. These scenarios also follow a scaling argument by assuming that the per-person contributions of these three nitrogen-containing species remain constant as population grows. This again is a simplifying assumption that is intended as an initial approach to understanding the detectability of such scenarios.
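A minimal sketch of this scaling argument follows; the 10 ppb NH3 value and the 7.9 billion population baseline come from the text above, while the remaining abundances are illustrative placeholders rather than the values in Table 1.

```python
# Scaling sketch: the agricultural excess of each gas over its preagricultural
# background is assumed to grow linearly with population.
PRESENT_POP = 7.9e9  # people, as stated in the text

# (preagricultural, present-day) abundances in ppb. NH3 = 10 ppb follows the
# text; the N2O and CH4 values here are illustrative placeholders, not Table 1.
ABUNDANCES_PPB = {
    "NH3": (0.0, 10.0),
    "N2O": (270.0, 330.0),
    "CH4": (700.0, 1900.0),
}

def scaled_abundance(gas: str, population: float) -> float:
    """Per-person contributions are held constant, so the agricultural excess
    scales with population while the background stays fixed."""
    background, present = ABUNDANCES_PPB[gas]
    return background + (present - background) * (population / PRESENT_POP)

for pop in (30e9, 100e9):
    scenario = {gas: round(scaled_abundance(gas, pop), 1) for gas in ABUNDANCES_PPB}
    print(f"{pop / 1e9:.0f} billion people: {scenario}")
```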

We consider the detectability of all four of these scenarios using the Planetary Spectrum Generator (PSG). PSG is an online radiative transfer tool for calculating synthetic planetary spectra and assessing the limits of detectability for spectral features ranging from ultraviolet to radio wavelengths. The ultraviolet features of NH3, N2O, and CH4 are strongly overlapping and only show weak absorption, but the mid-infrared features of all these species could be more pronounced. The mid-infrared spectral features of NH3, N2O, and CH4 calculated with PSG for the preagricultural, present-day, and future Earth scenarios are plotted in Figure 1, which shows the relative intensity and transmittance spectra for observations of an Earth-like exoplanet orbiting a Sun-like star. The spectra in Figure 1 show the strongest absorption features due to NH3 from 10 to 12 μm, while N2O shows absorption features from 3 to 5 μm, 7 to 9 μm, and 16 to 18 μm. Absorption features due to CH4 overlap some of the N2O features from 3 to 5 μm and 7 to 9 μm. The change in peak transmittance between 10 and 12 μm for NH3 compared to the preagricultural control case is about 50% for the future Earth scenario with 100 billion people and about 25% for the scenario with 30 billion people. For N2O, the change in peak transmittance between 16 and 18 μm compared to the preagricultural control case is about 70% for 100 billion people and 50% for 30 billion people. The change in relative intensity for the 100 billion people scenario is up to about 10% compared to the preagricultural control case between 7 and 9 μm and between 10 and 12 μm. Present-day Earth agriculture would exert a weakly detectable signal that might be difficult to discern from the preagricultural control case, but future scenarios with enhanced global agriculture could produce absorption features that are easier to detect. The spectral features of NH3, N2O, and CH4 could be detectable in emitted light or as transmission features for transiting planets. Specifically, the N2O line at 17.0 μm shows a strong dependence on the N2O volume mixing ratio, as does, to second order, the NH3 line at 10.7 μm. For the future 100 billion case, both display strong enough absorption to be detectable by the Large Interferometer for Exoplanets, Origins, and Mid-InfraRed Exo-planet CLimate Explorer infrared mission concepts.
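As a small illustration of how such band-level changes can be summarized from two model spectra, the sketch below compares the deepest in-band transmittance of a scenario spectrum against a control; the arrays stand in for PSG output on a shared wavelength grid, so the numbers are placeholders only.

```python
import numpy as np

def band_change_percent(wavelength_um, scenario, control, band):
    """Relative change (percent) in the deepest in-band transmittance of a
    scenario spectrum with respect to the preagricultural control spectrum."""
    lo, hi = band
    in_band = (wavelength_um >= lo) & (wavelength_um <= hi)
    scenario_peak = scenario[in_band].min()  # deepest absorption in the band
    control_peak = control[in_band].min()
    return 100.0 * (scenario_peak - control_peak) / control_peak

# Placeholder spectra: in practice these would be PSG transmittance outputs for
# a future-agriculture scenario and the preagricultural control.
wavelength = np.linspace(8.0, 20.0, 1200)
control = np.ones_like(wavelength)
scenario = np.where((wavelength > 10.0) & (wavelength < 12.0), 0.5, 1.0)

print(band_change_percent(wavelength, scenario, control, band=(10.0, 12.0)))  # -50.0
```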

The James Webb Space Telescope Near Infrared Spectrograph could potentially detect CH4 within the 0.6–5.3 μm range for transiting exoplanets. However, the detection of CH4 alone would provide no basis for distinguishing between technological, biological, or photochemical production. The detectability of these spectral features does not necessarily correspond directly to the peak transmittance, and a full accounting of the detectability of each band would need to consider the observing mode and instrument parameters. It is beyond the scope of this paper to present detectability calculations for specific missions, as any missions capable of searching for mid-infrared technosignatures are in an early design phase, at best. One of the goals of this Letter is to highlight the importance of examining mid-infrared spectral features of exoplanets, as many potential technosignatures could be most detectable at such wavelengths. It also demonstrates the duality of the search for bio-signatures and technosignatures: the search for passive, atmospheric technosignatures does not require the development of a dedicated instrument but can leverage the capability of instruments dedicated to the search for bio-signatures. The calculations presented in this Letter indicate the possibility of detecting a technosignature from planetary-scale agriculture from the combined spectral features of NH3 and N2O, as well as CH4. The signature of such an ExoFarm could only occur on a planet that already supports photosynthesis, so such a planet will necessarily already show spectral features due to H2O, O2, and CO2. The search for technosignatures from extraterrestrial agriculture would therefore be a goal that supports the search for bio-signatures of Earth-like planets, as the best targets to search for signs of nitrogen cycle disruption would be planets already thought to be good candidates for photosynthetic life. A better constraint on the detectability of the spectral features of an ExoFarm would require the use of an atmospheric photochemistry model. This Letter assumed simple scaling arguments for the abundances of nitrogen-containing species, but the steady-state abundance of nitrogen-containing atmospheric species will depend on a complex network of chemical reactions and the photochemical impact of the host star's UV spectrum. In such future work, the increases of NH3, N2O, and CH4 from agriculture would be parameterized via surface fluxes instead of arbitrary fixed and vertically constant mixing ratios; a rough one-box illustration of this flux-to-abundance link is sketched below. A network of photochemical reactions would then determine the vertical distribution of those species in the atmosphere. A photochemical model could also capture the processes of wet and dry deposition of NH3, which is the major sink in Earth's present atmosphere, as well as aerosol formation from NH3 and SO2/N2O that can occur in regions of high agricultural production. Past studies have predicted more favorable build-up of bio-signature gases on oxygen-rich Earth-like planets orbiting later spectral type stars due to orders of magnitude less efficient production of OH, O, and other radicals that attack trace gases like CH4.
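The one-box sketch referenced above uses the steady-state relation burden ≈ flux × lifetime; the flux and lifetime values are placeholders, not results from this Letter, and a real treatment would replace this with a photochemical network.

```python
# One-box steady-state estimate: burden ~ flux * lifetime, converted to a
# global mean mixing ratio. Flux and lifetime values below are placeholders.
ATM_MASS_KG = 5.1e18        # approximate total mass of Earth's atmosphere
MEAN_MOLAR_MASS = 0.02897   # kg/mol for dry air

def steady_state_ppb(flux_kg_per_yr: float, lifetime_yr: float,
                     molar_mass_kg: float) -> float:
    """Mixing ratio (ppb by mole) implied by a constant surface flux balanced
    by a first-order sink with the given lifetime."""
    burden_kg = flux_kg_per_yr * lifetime_yr
    moles_gas = burden_kg / molar_mass_kg
    moles_air = ATM_MASS_KG / MEAN_MOLAR_MASS
    return 1e9 * moles_gas / moles_air

# Placeholder example: an N2O-like gas (0.044 kg/mol) with a ~100 yr lifetime
# and a ~10 Tg/yr source gives a global mixing ratio of order 100 ppb.
print(round(steady_state_ppb(1.0e10, 100.0, 0.044), 1))
```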

The photochemical lifetime of N2O, and therefore its steady-state mixing ratio, will be enhanced by less efficient production of the O radicals that destroy it. However, because deposition is the major sink of NH3, it is not clear whether a different stellar environment would alter the atmospheric lifetime of NH3, and if so, to what extent. Examining the four scenarios in this study with such a photochemical model would require additional development work to extend the capabilities of existing models to oxygen-rich atmospheres. Past photochemical modeling studies that have included NH3 considered anoxic early Earth scenarios where the focus was determining the plausible greenhouse impact of NH3 to resolve the faint young Sun paradox. More recent studies have considered NH3 bio-signatures in H2-dominated super-Earth atmospheres, which would greatly favor the spectral detectability of the gas relative to high molecular weight O2-rich atmospheres. On H2 planets with surfaces saturated with NH3, deposition is inefficient, and sufficient biological fluxes can overwhelm photochemical sinks and allow large NH3 mixing ratios to be maintained. These "Cold Haber Worlds" are far different from the O2–N2 atmosphere we consider here, where surfaces saturated in NH3 are implausible and photochemical lifetimes are shorter. Ideally, future calculations would use a three-dimensional model with coupled climate and photochemical processes suitable for an O2–N2 atmosphere to more completely constrain the steady-state abundances, and time variation, of nitrogen-containing species for planets with intensive agriculture. Future investigation should also consider false-positive scenarios for NH3 and N2O as a technosignature. One possibility is that a species engages in global-scale agriculture using manure only; such a planet could conceivably accumulate detectable quantities of NH3 and N2O without the use of the Haber–Bosch process. The distinction between these two scenarios might be difficult to resolve, but both forms of agriculture nevertheless represent a technological innovation. Whether or not similar quantities of NH3 and N2O could accumulate on a planet through animal-like life without active management is a possible area for future work. External factors such as stellar proton events associated with flares could also produce high abundances of nitrogen-containing species in an atmosphere rich in NH3, so additional false-positive scenarios should be considered for planets in systems with high stellar activity. This Letter is intended to present the idea that the spectral signature of extraterrestrial agriculture would be a compelling technosignature. This does not necessarily imply that extraterrestrial agriculture must exist or be commonplace, but the idea of searching for spectral features of an ExoFarm remains a plausible technosignature based on future projections of Earth today. Such a technosignature could also be long-lived, perhaps on geologic timescales, and would indicate the presence of a technological species that has managed to coexist with its technology while avoiding extinction. Long-lived technosignatures are the most likely to be discovered by astronomical means, so scientists engaged in the search for technosignatures should continue to think critically about technological processes that could be managed across geologic timescales. J.H.M. gratefully acknowledges support from the NASA Exobiology program under grant 80NSSC20K0622. E.W.S.
acknowledges support from the NASA Interdisciplinary Consortia for Astrobiology Research program. T.J.F. and R.K.K. acknowledge support from the GSFC Sellers Exoplanet Environments Collaboration, which is supported by NASA's Planetary Science Division Research Program. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of their employers or NASA.

Detractors warned consumers of substantial food cost increases due to the extremely low threshold

As a result of a practical labeling scheme, the Japanese consumer can purchase non-GM products that are not organic, an option that would all but disappear with Prop 37 in California. Furthermore, in Japan, as in Australia, highly processed products such as canola oil produced from GM crops are exempt from labeling. In contrast, the same canola oil would have to bear a cautionary label under Prop 37, in spite of the difficulty of testing whether the oil has indeed been derived from GM canola. Earlier this year, the American Medical Association formally opposed the mandatory labeling of GM food. The National Academy of Sciences and the World Health Organization previously reached similar conclusions: there is no science-based justification for mandatory labeling of GM food because there is no evidence that such foods pose any risks to human health. Because it will be interpreted as a warning, mandatory labeling would imply a food safety risk that does not exist, and this in itself would be misleading to consumers. The full economic effects of Prop 37, if passed, are uncertain, but there is no doubt that the measure would remove most certified non-GM processed foods from the California market because of the zero tolerance criterion for low levels of unintended material. Food manufacturers and retailers would be unwilling to supply a large number of both GM and non-GM processed food products due to litigation risk. For instance, there would be a change in the selection of corn flakes boxes on the food shelf. The consumers' choice would be either organic corn flakes or corn flakes labeled as possibly containing GM. It is believed that 70–80% of processed foods intentionally contain some corn, canola, or soy ingredients, so these products would have to be labeled, reformulated with non-GM substitutes, or removed.

Other processed food products that do not use soy, corn, or canola could also be affected and require labeling, because they might contain unintended trace amounts of corn, canola, or soy. As a consequence, Prop 37 would result in many products on the food shelf carrying a GM label. It might get to the point where there are so many products with GM labels that most consumers would simply ignore them because they would be everywhere. For foods that contain a relatively small amount of corn or soy ingredients, the food industry could either label their products as GM or look for alternative, and possibly inferior, non-GM substitute ingredients to avoid labeling. For instance, food companies would have an incentive to use alternative ingredients such as imported palm oil to replace soybean or canola oil, despite potential health problems associated with palm oil and environmental concerns due to palm oil expansion in Asia. Mandatory labeling requirements could also inhibit further development of GM technology in California's food industry. The United States has criticized the EU's mandatory GM labeling as being nothing more than international trade protection from foreign competition. In fact, over the last twenty years, the USDA, the FDA, and the State Department, under successive administrations from both sides of the political spectrum, have publicly opposed this type of regulation at the international level because of its market-distorting effects. Prop 37 may also be interpreted as an attempt to stifle competition and distort markets. In this article we outline the economic implications of GM food labeling programs to provide insight into the likely effects of introducing mandatory labeling of GM foods in California under Prop 37. Supporters of Prop 37 argue that labeling provides California consumers additional information and allows them to avoid consuming GM food. But California food consumers have that choice now. They can purchase from three different food categories: 1) conventional foods, 2) organic foods, or 3) voluntarily labeled non-GM food that is not organic. Compare this current situation to the likely outcome under Prop 37. For targeted food products derived from GM grains, Prop 37 will most likely replace the existing three food categories listed above with just two categories: 1) organic, or 2) products labeled as "may be produced with genetic engineering." In other words, there will be numerous GE-labeled products.

For highly processed food products, a non-labeled option will remain but may only make sense using either lower grade or more expensive alternative ingredients. In general, organic suppliers will gain market share because the producers of most certified non-GM foods will have to change their label to read "may contain GM," whereas the organic label will not be forced to change, even if the organic product has the same trace amount of GM as its non-GM counterpart. Since the per-unit cost of producing non-GM crops is less than that of organic crops, overall food prices will rise on average as non-GM food products lose market share. Table 1 summarizes the key features of Prop 37, the California Right to Know Genetically Engineered Food Act. If passed, it will require retail labeling of some raw agricultural GM commodities as being "genetically engineered" and of processed foods containing GM ingredients as "partially produced with genetic engineering." Exemptions from labeling would be granted to alcoholic beverages, restaurant and ready-made food, foods "entirely" derived from animals, and any food certified as USDA Organic. Also exempt would be any raw agricultural commodity certified as produced without the intentional use of GE seed. Furthermore, Prop 37 would prohibit food labels with the message "natural," "naturally grown," or anything similar. The initiative charges the California Department of Public Health with enforcement, which the Legislative Analyst's Office predicts will cost $1 million annually. Prop 37 sets purity standards for non-GM food that are much higher than existing standards for organic food. Organic certification is "process based," which means that as long as the farm is an approved organic farm following the prescribed agronomic practices, there is less industry concern over accidental contamination and therefore no regular testing for GM. Unlike Prop 37, USDA organic standards do not have a strict "zero tolerance" standard for accidental presence of GM material. In fact, the USDA has not established a threshold level for adventitious presence of GM material in organic foods. Organic growers are listed among the coalition of supporters of Prop 37, which is understandable because of the exemption Prop 37 provides them. If Prop 37 passes, a food product could be labeled as organic and escape the testing and litigation issues facing a similar non-organic product even if both products contained identical accidental trace amounts of GM material. Mandatory labeling is also unnecessary because voluntary labeling already gives California consumers a choice to purchase food products that do not contain GMOs. One existing voluntary "GM-free" labeling program is the Non-GMO Project, a verification process organized by food retailers such as Whole Foods Market.

The Non-GMO Project uses the same 0.9% threshold as the EU, and under this scheme retailers receive a price premium for selling non-GM products. Whole Foods carries numerous Non-GMO products under its private label, 365 Everyday Value®, and many of these products are also organically produced. Similarly, all food products sold at Trader Joe's under the Trader Joe's label are sourced from non-GM ingredients, but they are not part of the Non-GMO Project. Like Whole Foods, Trader Joe's is not actively supporting mandatory labeling of GM foods under Prop 37, perhaps because it would disrupt their product lines. Several processed food products in Trader Joe's stores that are not privately branded would likely require the new cautionary label under Prop 37, not to mention all of the products under the Trader Joe's line that will not meet the zero tolerance standard. The issue surrounding Prop 37 is similar to an earlier debate that took place in the 1990s over dairy products from cows treated with rBST. The U.S. FDA ruled that no mandatory labeling of products derived from cows receiving the growth hormone was necessary because the milk was indistinguishable from products derived from untreated herds. Then the state of Vermont passed a law requiring that milk from rBST-treated cows be labeled to better provide consumers information. The Vermont legislation was based on "strong consumer interest" and the "public's right to know." Dairy manufacturers challenged the constitutionality of the Vermont law under the First Amendment and they won. The Second Circuit Court of Appeals struck down the Vermont law, ruling that labeling cannot be mandated just because some consumers are curious. The court ruled "were consumer interest alone sufficient, there is no end to the information that states could require manufacturers to disclose about their production methods" … "Instead, those consumers interested in such information should exercise the power of their purses by buying products from manufacturers who voluntarily reveal it." Instead of mandatory labeling, a non-rBST standard was voluntarily developed by the industry with specifications from the FDA. It has been largely applied to dairy products, giving consumers a choice; but unlike mandatory labeling, producers voluntarily responded to consumer demand for non-rBST milk, following a bottom-up process rather than a mandate imposed on them by top-down regulations. There are a variety of international mandatory GM labeling programs, differing by the products to which they apply, the mandated adventitious threshold, and whether they apply to the "product" as a whole or to the "process".

Table 3 summarizes the mandatory labeling laws of a select group of developed nations. As shown in the table, mandatory labeling of GM food exists and is enforced in places such as Japan, the EU, South Korea, Australia, and New Zealand. Some developing or transition economies also have mandatory labeling, but without strict enforcement. With mandatory labeling, consumers are not necessarily provided with greater choice at the food store. Furthermore, a substantial amount of GM food eaten in the EU and Japan does not have to be labeled; these products include certain animal products, soya sauce, and vegetable oils, among others. Internationally, the Codex Alimentarius Commission, an international standards-setting body for food, examined and debated GM food labeling for over twenty years without reaching any consensus. In 2011 a decision was eventually made, but the final text approved by all countries does not provide any recommendation on the labeling of GM food; it only calls on countries to follow other Codex guidelines on food labeling. This non-endorsement means that countries using mandatory labeling could face legitimate claims of unfair trade restrictions, resulting in a World Trade Organization dispute.

A labeling initiative similar to California’s Prop 37 appeared on the ballot in Oregon in 2002. That initiative also proposed mandatory labeling, but defined an adventitious presence threshold of 0.1% per ingredient. Despite a claim of overwhelming public support for GM labeling, the initiative ultimately failed, with 70% voting “no.” Even if the measure had passed, it is unlikely that producers would have segregated GM foods from non-GM, non-organic foods, as the costs would have been prohibitive, especially for a relatively small state with a population of fewer than four million. The bulk of the private costs incurred as a result of labeling requirements comes from efforts to prevent or limit mixing within the non-GM supply chain, known as identity preservation (IP) programs. The cost of any IP program depends critically on the level of the adventitious presence threshold specified in the labeling program. In the case of Prop 37, these costs would be incurred throughout the processed food industry. For instance, a firm marketing a wheat food product would incur costs to ensure its product did not contain trace amounts of soy, canola, or corn, because these crops all move through the same grain handling and transport system. The goal of providing consumers with additional information and choice is only met when both product types are carried in food stores. In the EU, companies resorted to substituting ingredients to avoid the label, using lower-quality and/or higher-priced inputs, something that could also happen in California for processed products. EU consumers were not offered much new information, since no products carried a GM label after the introduction of mandatory labeling. In fact, EU proponents of labeling are not satisfied with the existing EU regulations because of their exemptions, and they have asked for an extension of labeling to include animal products.
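
To make concrete why IP costs depend so strongly on the threshold, the following minimal sketch (illustrative only; the admixture rate, sample size, and test design are hypothetical and not drawn from the studies discussed here) estimates the probability that a non-GM lot with a small background level of GM admixture fails a quantitative test, under a simple binomial sampling assumption.

```python
# Illustrative sketch: how the chance that a non-GM lot fails testing rises as
# the adventitious presence threshold tightens. All numbers are hypothetical;
# real IP programs use lot-specific sampling and testing plans.
from scipy.stats import binom

background_rate = 0.005   # assumed true GM admixture in the lot (0.5% of kernels)
kernels_sampled = 3000    # assumed size of the composite test sample

for threshold in (0.009, 0.005, 0.001, 0.0):   # e.g., EU-style 0.9%, 0.5%, 0.1%, zero tolerance
    max_allowed = int(threshold * kernels_sampled)
    # The lot "fails" if the count of GM kernels in the sample exceeds the allowance.
    p_fail = binom.sf(max_allowed, kernels_sampled, background_rate)
    print(f"threshold {threshold:.1%}: probability of a failed test ≈ {p_fail:.2f}")
```

Under these assumptions, a lot that comfortably clears a 0.9% threshold would almost certainly fail a 0.1% or zero-tolerance test, which is the cost driver described above: the tighter the threshold, the more segregation, cleaning, and testing effort is needed to keep rejection rates manageable.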

Neither of those sources of water is subsidized to any significant degree

This highlights that the method used to define θfc in our study, while objective and tied strictly to soil moisture retention parameters, produced θfc estimates that are relatively conservative compared with a flow-based definition of θfc, because they are based on how a 1-cm slice of soil would drain. In a soil profile that has been deeply wetted, such as those used in this Ag-MAR modeling study, the defined θfc cannot be achieved by drainage alone within a reasonable time frame, even at 10-cm depth in a 200-cm sandy loam profile. Thus, the corresponding time-to-trafficability estimates should be interpreted as relatively conservative, especially for soils with low plasticity indices such as sands and sandy loams. This is not to say, however, that the definitions used in this study are outside the norms of soil science. θfc is often defined with a standard tension, and all textures but silt loam have estimated θfc values that correspond to this tension range. Finer-textured soils may still carry some risk of compaction at the thresholds defined in this study, given their high plasticity indices and the relatively high Ksat estimates produced by the ROSETTA pedotransfer function for these textures. Similarly, while the presence of a Bt horizon underlying various surface textures did not consistently delay time-to-trafficability, ROSETTA may overestimate the permeability of 2:1 clay-enriched subsoils occurring on, for example, stable river terraces above current floodplains, especially in the eastern uplands of the San Joaquin Valley. Thus, these landscapes should be treated more cautiously if used for Ag-MAR during periods when trafficability is required, especially under low PET conditions.
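
Because θfc here is tied to soil moisture retention parameters, a tension-based definition can be made concrete with the van Genuchten retention function. The minimal sketch below evaluates that function at two commonly used tensions (330 cm of water for field capacity and 15,000 cm for the conventional wilting point) using typical published parameter values for two textures; these values are illustrative, not the ROSETTA estimates used in this study, and the resulting water contents and ratios depend strongly on the fitted parameters.

```python
# Minimal sketch: water content from the van Genuchten retention function at
# standard tensions. Parameters are typical published values for each texture,
# not the study's ROSETTA estimates, and are for illustration only.

def van_genuchten_theta(h_cm, theta_r, theta_s, alpha, n):
    """Volumetric water content at tension h_cm (cm of water, positive)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h_cm) ** n) ** m

textures = {
    "sandy loam": dict(theta_r=0.065, theta_s=0.41, alpha=0.075, n=1.89),
    "clay":       dict(theta_r=0.068, theta_s=0.38, alpha=0.008, n=1.09),
}

for name, params in textures.items():
    theta_fc = van_genuchten_theta(330.0, **params)     # ~33 kPa, a common field capacity tension
    theta_wp = van_genuchten_theta(15000.0, **params)   # ~1500 kPa, conventional wilting point
    print(f"{name:10s}  θfc ≈ {theta_fc:.3f}  θwp ≈ {theta_wp:.3f}  θwp/θfc ≈ {theta_wp / theta_fc:.2f}")
```

Such tension-based values give a fixed reference point on the retention curve, whereas the flux-based definition used in this study depends on how quickly a 1-cm slice drains; the two need not coincide, which is why the time-to-trafficability estimates above are described as conservative.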

For example, in an Australian study of Vertisol trafficability under irrigated cotton production, researchers concluded that risk-free trafficability existed only at water contents near the wilting point, which is equivalent to about 60% of the mean θfc for clays in our study. In contrast, a field study in the Netherlands found that a heavy clay soil under pasture was trafficable at just 90 cm of soil moisture tension, based on observed compaction patterns and tensiometer readings; this is wetter than the wettest commonly used tension-based definition of θfc. An additional uncertainty in trafficability and workability research is the extent to which surficial trafficability and workability moisture thresholds are sufficient to prevent detrimental subsoil compaction, which requires more effort to ameliorate. Field validation studies that model soil moisture to predict suitable days for agricultural operations and that simultaneously examine full soil-profile effects of wheel traffic occurring at or below these moisture thresholds have not yet been reported. A study of controlled traffic farming systems in California cotton production highlights this need: under wheel traffic in a sandy loam soil, bulk density increased to 25-cm depth while penetrometer resistance increased to at least 100-cm depth, but no operational decisions in that study were guided by trafficability soil moisture thresholds. Finally, the time-to-trafficability estimates are meant to guide operational decisions when crops are dormant or fields are fallow, given that root water uptake was intentionally neglected and only drainage and bare soil evaporation were considered in the H1D simulations. For major perennial crops, the end of dormancy typically ranges from mid-February to late April, spanning the time period addressed by this study. There are several reasons for omitting scenarios in which root water uptake is active. First, Ag-MAR is recognized to be a risk to many actively growing crops because of the possibility of developing anoxic soil conditions. Second, for deep wetting events more generally, accurate root water uptake modeling requires knowledge of root depth distribution and crop canopy coverage. Third, for more sensitive crops or during specific growth stages, the need for irrigation water may arise before the soil moisture trafficability threshold is reached.

All of these considerations complicate the ability to provide growers with a generalizable time-to-trafficability tool that also accounts for crop root water uptake.

Much accumulating evidence suggests a relationship between global warming and increased concentrations of greenhouse gases such as carbon dioxide produced by the burning of fossil fuels. As far back as 1992, more than 150 governments attending the Rio Earth Summit signed the Framework Convention on Climate Change. Article 2 states that the “ultimate objective of this Convention … is to achieve … stabilization of greenhouse gas concentrations that would prevent dangerous anthropogenic interference with the climate system.” More than ten years later, the questions remain: how “dangerous” are the consequences of anthropogenic interference, and how much “stabilization” is justified? The economics literature so far has given mixed results with regard to the impact on agriculture. In the remainder of this section we give a brief overview of previous approaches to set the stage for our study. These can be divided into three broad categories, beginning with the agronomic approach, which is based on agronomic models that simulate crop growth over the life cycle of the plant and measure the effect of changed climate conditions on crop yield and input requirements. For example, Adams relies on crop simulation models to derive predicted changes for both irrigated and rainfed wheat, corn, and soybeans. The predicted changes in yields are then combined with economic models of farm-level crop choice, using linear or nonlinear programming. The analysis, however, usually considers variable but not fixed costs of production, and it often turns out to be necessary to add artificial constraints to make the programming model solution replicate actual farmer behavior in the baseline period. Moreover, the analysis focuses on the agricultural sector and ignores linkages with the remainder of the economy that would make input prices and input allocations to agriculture endogenous.
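
To make the programming step of the agronomic approach concrete, here is a minimal farm-level crop choice linear program; the two crops, gross margins, and resource coefficients are invented for illustration and are not taken from Adams or any study cited here.

```python
# Minimal sketch of a farm-level crop choice linear program of the kind used
# in the agronomic approach. All coefficients are hypothetical.
from scipy.optimize import linprog

# Decision variables: acres planted to crop A and crop B.
gross_margin = [500.0, 300.0]      # $ per acre (hypothetical)
c = [-m for m in gross_margin]     # linprog minimizes, so negate to maximize

A_ub = [
    [1.0, 1.0],   # land constraint: total acres planted <= 1000
    [2.5, 1.0],   # water constraint: acre-feet per acre, total <= 1800
]
b_ub = [1000.0, 1800.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
acres_a, acres_b = res.x
print(f"crop A: {acres_a:.0f} acres, crop B: {acres_b:.0f} acres, gross margin: ${-res.fun:,.0f}")
```

A climate scenario enters such a model by shifting the yield-driven margins and per-acre water requirements before re-solving, which captures reallocation within the farm sector but, as noted above, treats input prices as fixed and may need artificial constraints to reproduce observed baseline behavior.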

This is remedied in the computable general equilibrium (CGE) approach, which models agriculture in relation to the other major sectors of the economy and allows resources to move between sectors in response to economic incentives. An example is FARM, the eight-region CGE model of the world agricultural economy developed by the United States Department of Agriculture. However, while a CGE model has the advantages of making prices endogenous and accounting for inter-sectoral linkages, these come at the cost of quite drastic aggregation, in which spatially and economically diverse sectors are characterized by a representative farm or firm. In summary, on the one hand the agronomic models do not fully capture the adaptation and mitigation strategies of farmers in the face of climate change, while on the other the CGE models are only appropriate for highly aggregated sectors of the economy. Mendelsohn, Nordhaus, and Shaw provide an interesting middle ground, proposing what they call a Ricardian approach: essentially a hedonic model of farmland pricing, based on the notion that the value of a tract of land capitalizes the discounted value of all future profits or rents that can be derived from it. The advantage of the hedonic approach is that it relies on cross-sectional variation to identify the implicit choices of landowners regarding the allocation of their land among competing uses, instead of modeling those decisions directly. Further, the hedonic function allows one to calculate the direct impact on each farmer, county, or state, in contrast to the highly aggregated structural CGE models. This is the approach we adopt, though with a number of innovations indicated below and explained in detail in succeeding sections.

In this paper we resolve some of the differences among previous studies by estimating a hedonic equation for farmland value east of the 100th meridian, the boundary of the region in the United States where farming is possible without irrigation. The main contributions of the paper are as follows. First, we incorporate climate differently from previous studies, using transformations of the climatic variables suggested by the agronomic literature; the relationship between climatic variables and plant growth is highly nonlinear, and our approach yields results that are consistent with the agronomic evidence. Second, we develop a new data set that integrates the spatial distribution of soil and climatic variables within a county with the help of a Landsat satellite scan of the contiguous United States. Third, we allow the error terms to be spatially correlated to obtain a more efficient estimator and correct t-values. Fourth, we present several sensitivity checks and show that our results are robust to both different specifications and census years; results remain essentially unchanged when we include state fixed effects to control for the influence of state-specific factors unrelated to climate, such as property taxes and crop subsidies. Finally, we evaluate potential impacts of warming using new climate projections from the most recent runs of two of the major global climate models. The paper is organized as follows. Section 2 outlines a model of farmland value with attention to issues raised by irrigation. Section 3 addresses spatial issues that arise in the definition and measurement of climatic and soil variables and in the correlation of error terms.
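
One nonlinear transformation commonly suggested by the agronomic literature is growing degree days, which accumulate heat only within the temperature range useful to the crop. The sketch below is a minimal illustration, with hypothetical temperature bounds and fully synthetic data rather than this paper's actual specification or dataset: it computes seasonal degree days and uses them, with a quadratic term, as regressors in a simple hedonic farmland value equation.

```python
# Illustrative sketch: a degree-day climate transformation feeding a simple
# hedonic regression of farmland value. Bounds, data, and specification are
# hypothetical, not those of the paper.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

def growing_degree_days(daily_mean_temp_c, base=8.0, ceiling=32.0):
    """Sum daily heat between a base and ceiling temperature (degrees C)."""
    capped = np.clip(daily_mean_temp_c, base, ceiling)
    return np.sum(capped - base)

# Synthetic counties: a 180-day season of daily mean temperatures and a precipitation total.
n_counties = 200
season_temps = rng.normal(loc=rng.uniform(12, 26, n_counties)[:, None], scale=6.0,
                          size=(n_counties, 180))
gdd = np.array([growing_degree_days(t) for t in season_temps])
precip = rng.uniform(300, 1100, n_counties)          # mm per season

# Synthetic farmland values with an inverted-U response to degree days.
value = 2000 + 1.2 * gdd - 2.0e-4 * gdd**2 + 1.5 * precip + rng.normal(0, 300, n_counties)

X = sm.add_constant(np.column_stack([gdd, gdd**2, precip]))
fit = sm.OLS(value, X).fit()
print(fit.params)   # intercept, degree days, degree days squared, precipitation
```

Modeling the spatial correlation of the error terms, as the paper does, would additionally require a spatial weights matrix and a spatial error estimator rather than plain OLS; that step is omitted from this sketch.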

Section 4 presents our empirical results, including tests for spatial correlation and estimates of the hedonic regression coefficients, and discusses a variety of robustness tests. Section 5 uses the results to generate estimates of regionally differentiated impacts of climate change on agriculture. Section 6 summarizes our conclusions.

In this framework, climate variables play two different roles. Temperature is an exogenous shift variable in the production function; increases in temperature increase the demand for water as an input, and they can raise or lower yield depending on the size of the increase. Precipitation plays a different role in irrigated areas than in dryland areas. In dryland areas, the water supply for crops comes from precipitation falling on the field before and during the growing season; the water supply is fixed by nature in any given year, and it comes at a price of zero. In irrigated areas, by contrast, the water supply is man-made, drawing on local groundwater or on surface water imported from elsewhere; it comes at a cost, and the quantity is endogenously determined. In terms of location, since the time of John Wesley Powell it has been common to take the 100th meridian as a rough approximation of the rainfall line in the US. To the east, rainfall generally exceeds 20 inches per year, while to the west it is generally less than 20 inches per year. Since virtually all traditional US crops require at least 20 inches of water to grow, the 100th meridian marks the boundary of the arid West, where farming is generally possible only with irrigation. Thus the 17 western states account for about 88% of the 150 million acre-feet of irrigation water used annually in the U.S. The economic implications of the distinction between dryland and irrigated farming are discussed in detail by Cline, Darwin, and Schlenker et al., and are summarized briefly here. In addition to the fact that precipitation does not measure water supply in the arid West, the other distinctive feature is that, in irrigated areas, future changes in water costs, unlike other input costs, are not likely to be capitalized in future land prices in the same way that past cost changes were capitalized in past land values. Many of the major surface water supply projects in the western United States were developed by the US Bureau of Reclamation or the Army Corps of Engineers and involved a substantial subsidy to farmers. Depending on the age of the project, there is substantial variation in federal irrigation charges across projects, and these are clearly capitalized into farmland values. Failure to account for subsidies could bias other regression coefficients, especially climatic coefficients that are in turn correlated with access to irrigation. Aside from the federal projects, the remainder of the irrigation supply in the western states comes from groundwater or from non-federal surface water storage projects. Nevertheless, in the case of irrigation with non-federal surface water, it would still be misleading to predict the economic cost of a change in precipitation on the basis of a hedonic regression of current farmland values.