Sustainable Development and Other Solutions to Pollution and Global Change
M.L. Brusseau, in Environmental and Pollution Science (Third Edition), 2019
32.4.1 General Concepts
Sustainable development is often discussed in terms of three key components or pillars: economic, environmental, and social. A Venn diagram of sustainability has been developed by practitioners to represent the three pillars and their interconnectedness (Fig. 32.4). In essence, the diagram presents the idea that sustainable development can be achieved when economic development is conducted in a manner that preserves and protects the environment and its resources while supporting individual and community well-being.
This concept has been presented in related form in business through the concept of the "triple bottom line." The term is a reference to the traditional focus of companies on the "bottom line," meaning profit or loss. It is now recognized that business entities need to consider environmental and social performance along with financial performance. As described by the International Institute for Sustainable Development, the broader concept of corporate social responsibility (CSR) has evolved in response to the recognition that corporations can no longer act as isolated economic entities operating in detachment from broader society. In essence, CSR promotes adopting business strategies and activities that meet the needs of the enterprise and its stakeholders today while protecting, sustaining, and enhancing the human and natural resources that will be needed in the future.
A related concept is that of "social license to operate." This refers to the degree of acceptance that local communities and stakeholders have for a particular organization and its operations. The concept has evolved recently from the broader concept of CSR. It promotes the idea that institutions and companies need "social permission," as well as regulatory permission, to conduct their business. Social license to operate has become a critical issue, for example, in the mining industry.
The Precautionary Principle is a guiding concept whose implementation could reduce the adverse impacts of humans on the environment and on their own well-being. The concept originated in the 1980s and was embodied, for example, in Principle 15 of the Rio Declaration, as noted earlier. The principle captures the idea that decision-makers need to anticipate and consider the adverse effects of an action before it occurs. Additionally, the responsibility lies with the proponent of the action to establish that the proposed action will not, or is unlikely to, cause adverse impacts. This is in contrast to the standard approach, wherein an action is implemented without full prior consideration of potential adverse impacts, and the public bears the burden of proof (and the adverse impacts). For example, as noted in Chapter 12, thousands of chemicals are routinely produced and used in consumer products, and the impacts of many of these chemicals on the environment and human health have not been tested. Under the precautionary principle, chemical manufacturers would be required to test all chemicals for potential adverse impacts prior to their widespread use. While the formal concept of the precautionary principle has been around for just a few decades, the idea that caution should be exercised in making decisions has been present for a long time. It is embodied in common sayings such as "a stitch in time saves nine," "look before you leap," and "better safe than sorry."
Green Development is a term closely related to sustainable development and captures the idea that environmental and social impacts need to be considered in development. Hand in hand with green development goes the concept of Green Technology. This refers to the design and production of products in a way that considers sustainability, cradle-to-grave management, reduced resource use, and other environmental impacts. The development and application of green technologies is needed to support green development. Some examples are as follows.
Green or Sustainable Architecture is focused on developing buildings that are constructed and operated efficiently to conserve resources. Primary considerations include (a) the use of building materials that are produced sustainably or are recycled, (b) energy efficiency and the use of renewable energy sources (e.g., solar panels, wind turbines, heat pumps), (c) on-site waste management (e.g., composting toilets, food waste composting gardens, gray water management), and (d) design in harmony with the building's surroundings.
A subset of green architecture is Green Infrastructure, which is defined by the U.S. Environmental Protection Agency as a cost-effective, resilient approach to managing wet weather impacts that provides many community benefits. In short, it is a decentralized, dwelling-based approach to managing stormwater runoff while also reducing water use and demand. Rainwater harvesting is a prime example of green infrastructure. Rainwater harvesting systems collect and store rainfall for later use. When designed appropriately, they reduce stormwater runoff and provide a source of water for the dwelling's inhabitants. This practice could be particularly valuable in arid regions, where it could reduce demands on increasingly limited water supplies. See Fig. 32.5 for an example of a rainwater harvesting system.
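As a rough illustration of the sizing arithmetic behind rainwater harvesting (not taken from the chapter), the annual volume that can be collected is approximately roof area times rainfall depth times a runoff coefficient that accounts for losses. The sketch below uses hypothetical values for all three inputs.

```r
# Hypothetical sizing sketch for a residential rainwater harvesting system.
# All input values are illustrative assumptions, not data from the chapter.
roof_area_m2 <- 150    # horizontal projected roof area (m^2), assumed
rainfall_m   <- 0.30   # annual rainfall depth in an arid region (m), assumed
runoff_coeff <- 0.80   # fraction of rain actually captured (first flush, evaporation losses)

harvest_m3 <- roof_area_m2 * rainfall_m * runoff_coeff   # annual harvest (m^3)
harvest_L  <- harvest_m3 * 1000                          # convert to litres

cat(sprintf("Approximate annual harvest: %.0f m^3 (%.0f litres)\n", harvest_m3, harvest_L))
```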
Green Chemistry, or sustainable chemistry, is the design of chemical products and processes that reduce or eliminate the use or generation of hazardous substances. Green chemistry applies across the life cycle of a chemical product, including its design, manufacture, use, and ultimate disposal. Green chemistry attempts to reduce or prevent pollution by minimizing or eliminating hazardous chemical feedstocks, reagents, solvents, and products. The 12 principles of green chemistry are presented in Information Box 32.4.
Information Box 32.4 (Source: U.S. Environmental Protection Agency, https://www.epa.gov/greenchemistry/basics-green-chemistry#definition).
Green Chemistry's 12 Principles
1. Prevent waste: Design chemical syntheses to prevent waste. Leave no waste to treat or clean up.
2. Maximize atom economy: Design syntheses so that the final product contains the maximum proportion of the starting materials. Waste few or no atoms. (A short worked example follows this list.)
3. Design less hazardous chemical syntheses: Design syntheses to use and generate substances with little or no toxicity to either humans or the environment.
4. Design safer chemicals and products: Design chemical products that are fully effective yet have little or no toxicity.
5. Use safer solvents and reaction conditions: Avoid using solvents, separation agents, or other auxiliary chemicals. If you must use these chemicals, use safer ones.
6. Increase energy efficiency: Run chemical reactions at room temperature and pressure whenever possible.
7. Use renewable feedstocks: Use starting materials (also known as feedstocks) that are renewable rather than depletable. The source of renewable feedstocks is often agricultural products or the wastes of other processes; the source of depletable feedstocks is often fossil fuels (petroleum, natural gas, or coal) or mining operations.
8. Avoid chemical derivatives: Avoid using blocking or protecting groups or any temporary modifications if possible. Derivatives use additional reagents and generate waste.
9. Use catalysts, not stoichiometric reagents: Minimize waste by using catalytic reactions. Catalysts are effective in small amounts and can carry out a single reaction many times. They are preferable to stoichiometric reagents, which are used in excess and carry out a reaction only once.
10. Design chemicals and products to degrade after use: Design chemical products to break down to innocuous substances after use so that they do not accumulate in the environment.
11. Analyze in real time to prevent pollution: Include in-process, real-time monitoring and control during syntheses to minimize or eliminate the formation of byproducts.
12. Minimize the potential for accidents: Design chemicals and their physical forms (solid, liquid, or gas) to minimize the potential for chemical accidents including explosions, fires, and releases to the environment.
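Principle 2 is often quantified with the standard atom economy formula, which is not stated in the box itself: atom economy (%) equals the molecular weight of the desired product divided by the total molecular weight of all reactants, times 100. A minimal sketch, using the textbook addition of HBr to propene as an illustrative example chosen here (not taken from the EPA text):

```r
# Atom economy (Principle 2): the fraction of reactant mass that ends up in the
# desired product.  Formula: MW(product) / sum(MW(reactants)) * 100.
# The reaction below is an illustrative example, not one from the EPA source.
atom_economy <- function(product_mw, reactant_mws) {
  100 * product_mw / sum(reactant_mws)
}

# Propene (C3H6, 42.08 g/mol) + HBr (80.91 g/mol) -> 1-bromopropane (C3H7Br, 122.99 g/mol)
ae_addition <- atom_economy(122.99, c(42.08, 80.91))
cat(sprintf("Atom economy of the addition reaction: %.1f%%\n", ae_addition))
```

Addition reactions such as this one approach 100% atom economy because essentially all reactant atoms appear in the product; substitution or elimination reactions that release byproducts score lower.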
A well-known example of green development and technology is green or Renewable Energy. This comprises energy from sources that are naturally replenished on a human timescale, including solar energy, wind energy, geothermal energy, hydropower (rivers, waves, and tides), and biofuels. The concept of renewable energy in terms of sustainability is discussed in the following subsection.
URL: https://www.sciencedirect.com/science/article/pii/B978012814719100032X
Setting the Stage
Jeffrey O. Grady, in System Verification (Second Edition), 2016
1.3.4.4 Development Environment Integration
Figure 13 is an attempt to identify every conceivable development environment from which one could select the desired environment for a particular development activity. It does so through a three-dimensional Venn diagram showing combinations of different sequences (waterfall, spiral, and V), different phasing possibilities (rapid prototyping versus rigorous phasing), and one of three development attitudes (grand design, incremental, and evolutionary). The possibilities become even more numerous if we accept that three different delivery possibilities exist (high rate production, low volume/high cost, and one of a kind).
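The number of candidate environments implied by the figure can be checked by simple enumeration; the sketch below (in R, purely as an illustration of the arithmetic, not a reproduction of Figure 13) crosses the factors named in the text.

```r
# Enumerate the development environments described in the text:
# 3 sequences x 2 phasing possibilities x 3 attitudes = 18 combinations.
environments <- expand.grid(
  sequence = c("waterfall", "spiral", "V"),
  phasing  = c("rapid prototyping", "rigorous phasing"),
  attitude = c("grand design", "incremental", "evolutionary"),
  stringsAsFactors = FALSE
)
nrow(environments)   # 18

# Crossing in the three delivery possibilities raises the count to 54.
full <- expand.grid(
  sequence = c("waterfall", "spiral", "V"),
  phasing  = c("rapid prototyping", "rigorous phasing"),
  attitude = c("grand design", "incremental", "evolutionary"),
  delivery = c("high rate production", "low volume/high cost", "one of a kind"),
  stringsAsFactors = FALSE
)
nrow(full)           # 54
```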
In the grand design approach, the team develops the product in a straight-through process from beginning to end in accordance with a well-defined, predetermined plan. This attitude is normally well matched to the waterfall model.
In the incremental approach, the final requirements are defined before design work begins, but foreseeable problems, such as immature technology, prevent a straight-line approach to the solution. Therefore, a series of two or more builds is planned, where the first satisfies the most pressing requirements for the product. Each incremental build permits the developer and customer to gain experience with the problem and refine the next design cycle, ever working toward the final design configuration. This approach is clearly coordinated with the spiral model.
In the evolutionary approach, we may be unclear about the requirements for the final configuration and conclude that we can only acquire sure knowledge of our requirements through experience with the product. This is, of course, a chicken-and-egg kind of problem, or what others might call a bootstrap problem: in order to understand the problem space, we have to experience the solution space. The overall evolutionary program is structured in a series of builds, and each build helps us to understand the final requirements more clearly and to arrive at a more fitting design solution. Note the essential difference between the incremental and evolutionary approaches. In the incremental approach, we understand the final requirements and conclude that we cannot get there in one build (grand design). In the evolutionary approach, we do not understand the final requirements and need experience with some design solution to help us understand them.
Earlier we said that an enterprise should have a generic process used to develop products and should repeat that process with incremental improvements as one element of its continuous improvement method. We could choose to close our minds to flexibility and interpret this to mean that we should pick one of the spaces of Figure 13 and apply only that environment in all possible situations. Alternatively, we can allow our programs and development teams to apply the environment most suited to the product and development situation, employing an organizational arrangement similar to that of the U.S. federal government relative to its states, allowing programs some choice in selecting the precise model to be employed.
Figure 13 exposes us to a degree of complexity that we may not be comfortable with. We may be much happier with our ignorance and prefer relying on our historical work patterns. Some companies will reach this conclusion and find themselves in great trouble later. If there is anything we know about the future, it is that the pace of change will increase and the problems we must deal with will become more complex. A healthy firm interested in its future will evolve a capability that blends a generic process encouraging repetition of a standard process with a degree of flexibility with respect to allowable development environments.
These goals are not in conflict. In all of the environments discussed, we rely on requirements analysis as a prerequisite to design. In some cases, we know we may not be successful in one pass at the requirements analysis process, but the same techniques described in Chapter 2 for a functional modeling approach or for any of the other UADF will be effective in waterfall, spiral, or V development. No matter the combination of models applied, the resultant product should be verified to determine to what degree the synthesized product satisfies the requirements that drove the design process.
URL: https://www.sciencedirect.com/science/article/pii/B9780128042212000012
Item and Interface Qualification Verification
Jeffrey O. Grady, in System Verification (Second Edition), 2016
8.3.4.1 The Size of the Verification Process Design Chore
Before discussing the process of transforming sets of verification strings into verification tasks, let us first consider what the whole picture would look like for the verification tasks of one representative performance specification. Figure 54 illustrates a Venn diagram of all of the product requirements in Section 3 of a specification, each mapped to a verification task number (VTN), numbered VTN 1 through VTN 51. The paragraphs requiring no verification have been ignored in this figure.
A total of 13 requirements have been mapped to a test task (colored blue in the figure) assigned verification task number 1. That task, like all of the others, will need a plan and procedure prepared that, when implemented, will produce convincing evidence of the degree of compliance of the product with the 13 requirements assigned to VTN 1. The same is true for the requirements mapped to the other 50 tasks for this item qualification verification work. Keep in mind that Figure 54 is but one example of how requirements might be assigned to verification tasks; the possibilities are many.
Other than for VTN 1, Figure 54 does not disclose how many requirements were assigned to each of the 51 tasks for the item. Table 2 shows an example of how the requirements could have been distributed among the 51 verification tasks, listing each VTN together with the method that will be applied and the number of requirements (REQ) assigned to the task. For any one item and its specification there are, of course, many possible outcomes of this effort, and there is not necessarily only one right answer to the problem. People with experience and good engineering sense, motivated by management to devise a plan that will produce the best evidence of compliance consistent with the available budget, will commonly come up with an acceptable proposed plan that can be reviewed, exposed to critical comment, and, after some adjustment perhaps, approved. Other persons doing this work might come up with a significantly different set of conclusions that might be every bit as effective.
Table 2. Sample Requirements Distribution by Task and Method
VTN | Method | REQ | VTN | Method | REQ | VTN | Method | REQ | VTN | Method | REQ |
---|---|---|---|---|---|---|---|---|---|---|---|
1 | T | 13 | 14 | E | 2 | 27 | D | 3 | 40 | E | 1 |
2 | A | 1 | 15 | T | 2 | 28 | T | 2 | 41 | D | 1 |
3 | E | 1 | 16 | D | 5 | 29 | A | 1 | 42 | D | 2 |
4 | E | 2 | 17 | A | 1 | 30 | A | 12 | 43 | A | 4 |
5 | A | 3 | 18 | T | 10 | 31 | D | 1 | 44 | T | 4 |
6 | A | 4 | 19 | A | 2 | 32 | E | 1 | 45 | T | 2 |
7 | T | 2 | 20 | E | 2 | 33 | T | 1 | 46 | A | 1 |
8 | T | 2 | 21 | E | 2 | 34 | D | 3 | 47 | D | 1 |
9 | T | 3 | 22 | D | 1 | 35 | E | 1 | 48 | E | 1 |
10 | A | 5 | 23 | A | 3 | 36 | E | 3 | 49 | A | 6 |
11 | D | 2 | 24 | T | 3 | 37 | A | 8 | 50 | D | 2 |
12 | E | 2 | 25 | T | 2 | 38 | A | 2 | 51 | T | 2 |
13 | A | 4 | 26 | A | 2 | 39 | T | 1 |
As the reader can see, by adding up the number of requirements (REQ) for each task in Table 2, this specification includes a total of 147 requirements needing verification. There might be 23 other paragraphs in Section 3 that will not require verification of any kind. Table 3 extracts the number of tasks and requirements distributed in this example to the five methods identified in Table 2. The author does not suggest that either Figure 52 or Tables 2 and 3 offer a useful way to actually build a verification plan. Rather, these views of a single specification example are discussed only to expose the reader to the number of entities that must be considered while doing this work.
Table 3. Requirements Summary by Methods and Tasks
Method | Tasks | REQ |
---|---|---|
Analysis (A) | 16 | 59 |
Test (T) | 14 | 49 |
Examination (E) | 11 | 18 |
Demonstration (D) | 10 | 21 |
Special (S) | 0 | 0 |
Total | 51 | 147 |
We have been discussing only a single specification, and on a large program we may very well have to deal with over 100 product entities, other than the system specification, with both a performance and detail specification for each item. If each performance specification (which includes the system specification) contained 150 requirements needing verification on average and each detail specification 95, then that would result in 15,150 requirements (101 × 150) in 100 item performance specifications and the system specification, and 9,500 requirements (100 × 95) in the 100 detail specifications, for a total of 24,650 requirements on the program that will have to be verified. In some fashion, these 24,650 requirements will have to be assigned to some number of tasks of the five methodological kinds driving the development of the verification plans for system test and evaluation, item qualification, and item acceptance verification actions.
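The program-level totals quoted here follow from simple multiplication; the short sketch below (illustrative only, using the averages assumed in the text) reproduces that arithmetic.

```r
# Program-level requirement count from the example in the text.
n_items              <- 100              # product entities below the system level
perf_specs           <- n_items + 1      # item performance specs plus the system spec
detail_specs         <- n_items
reqs_per_perf_spec   <- 150              # assumed average requirements needing verification
reqs_per_detail_spec <- 95

perf_reqs   <- perf_specs * reqs_per_perf_spec      # 15,150
detail_reqs <- detail_specs * reqs_per_detail_spec  #  9,500
total_reqs  <- perf_reqs + detail_reqs              # 24,650

cat(sprintf("Performance: %d, Detail: %d, Total: %d requirements to verify\n",
            perf_reqs, detail_reqs, total_reqs))
```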
The first verification planning step on a real program is to determine how the requirements in the program specifications should be associated with verification tasks. The good news is that this need not all be accomplished as a single task, nor prior to the release of a specification. This work is distributed over the span of time required to develop the several layers of entity specifications. We will develop the system specification first, followed by the item performance specifications; parts, materials, and process specifications; and finally the item detail specifications. Commonly, the item qualification verification design will be the first one accomplished, in layers, followed by development of the system test and evaluation design, and finally the verification design for the detail specifications, which is delayed until the design work is under way.
For the one item expressed in Figure 54, there will have to be a total of 51 qualification verification task plans, 51 qualification verification task procedures, and 51 qualification verification task reports of the kinds noted in Table 3. Each of these documents should be formally reviewed and approved, placed under configuration control, and the masters protected from unauthorized change. Each specification will require this kind of verification work response with work related to the system specification flowing into system test and evaluation verification planning, work related to item performance specifications flowing into item qualification verification planning, and the verification work related to the item detail specifications flowing into item acceptance verification planning. On top of all that, there could be thousands of parts, materials, and process specifications on a program.
In general, the work can be focused fairly tightly on the item performance specification being prepared by those immediately responsible for that item, but there must be some system engineers on the program who take a broader view of the verification planning work to view and think across the system, providing an integrating and optimization influence – more on this later.
By now, it is hoped that the reader fully recognizes the large scope of the work that must be accomplished in verifying that a product complies with the requirements that drove the design work. Yes, it will cost time and money to do this work well, but if the product requirements analysis work was done poorly, then it is likely that these errors will be propagated throughout the program, leading to badly stated verification requirements, poorly done verification plans and procedures, design errors, and significant problems in verifying compliance, recognized at a time when the available budget and schedule slack are at a minimum. It is a reality that systems engineering work done well early in a program will reduce development cost, and most often system life cycle cost as well. So, if you want your enterprise to be a developer of affordable systems that customers appreciate for a long time, do the requirements and synthesis work well; it will lead to affordable verification if the whole is managed well.
It is not necessary that everyone on a program understand how deep and broad the verification process scope can be, but it is essential that whoever has the overall responsibility for the verification work sees it very clearly and understands how it is partitioned into items, tasks, and responsibilities. This person must be able to interact with those responsible for parts of the overall work set and discuss the problems being experienced with a clear focus on how they fit into the whole. Depending on the number of items in a system that require qualification, it may be necessary for two or more layers of qualification principal engineers to be appointed. In such cases, each principal engineer should be familiar with the verification scope for the items for which they are responsible, from that item downward.
URL: https://www.sciencedirect.com/science/article/pii/B9780128042212000085
Review of Probability
Daniel S. Wilks, in Statistical Methods in the Atmospheric Sciences (Fourth Edition), 2019
2.2.2 The Sample Space
The sample space or event space is the set of all possible elementary events. Thus the sample space represents the universe of all possible outcomes or events. Equivalently, it is the largest possible compound event.
The relationships among events in a sample space can be represented geometrically, using what is called a Venn Diagram. Often the sample space is drawn as a rectangle and the events within it are drawn as circles, as in Figure 2.2a . Here the sample space is the rectangle labeled S, which might contain the set of possible precipitation outcomes for tomorrow. Four elementary events are depicted within the boundaries of the three circles. The "No precipitation" circle is drawn not overlapping the others because neither liquid nor frozen precipitation can occur if no precipitation occurs (i.e., in the absence of precipitation). The hatched area common to both "Liquid precipitation" and "Frozen precipitation" represents the event "both liquid and frozen precipitation." That part of S in Figure 2.2a not surrounded by circles is interpreted as representing the "null" event, which cannot occur.
It is not necessary to draw or think of circles in Venn diagrams to represent events. Figure 2.2b is an equivalent Venn diagram drawn using rectangles filling the entire sample space S. Drawn in this way, it is clear that S is composed of exactly four elementary events representing the full range of outcomes that may occur. Such a collection of all possible elementary events (according to whatever working definition is current) is called mutually exclusive and collectively exhaustive (MECE). Mutually exclusive means that no more than one of the events can occur. Collectively exhaustive means that at least one of the events will occur. A set of MECE events completely fills a sample space.
Note that Figure 2.2b could be modified to distinguish among precipitation amounts by adding a vertical line somewhere in the right-hand side of the rectangle. If the new rectangles on one side of this line were to represent precipitation of 0.01 in. or more, the rectangles on the other side would represent precipitation less than 0.01 in. The modified Venn diagram would then depict seven MECE events.
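The MECE structure of Figure 2.2b can be made concrete with a short sketch (not from the book): the four elementary events are written as labels, and the compound events "liquid" and "frozen" are subsets whose overlap corresponds to the hatched region of Figure 2.2a.

```r
# The four MECE elementary events of the precipitation example, as labels.
# This sketch is illustrative and not taken from the book.
sample_space <- c("no precipitation",
                  "liquid only",
                  "frozen only",
                  "liquid and frozen")

# Compound events are subsets of the sample space.
liquid <- c("liquid only", "liquid and frozen")
frozen <- c("frozen only", "liquid and frozen")

intersect(liquid, frozen)   # the hatched overlap: both liquid and frozen
union(liquid, frozen)       # the compound event "any precipitation"

# Together with "no precipitation", the compound events cover all of S.
setequal(c(union(liquid, frozen), "no precipitation"), sample_space)  # TRUE
```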
URL: https://www.sciencedirect.com/science/article/pii/B978012815823400002X
Cladistics
Ian J. Kitching, ... David M. Williams, in Encyclopedia of Biodiversity, 2001
I.D. Cladograms and Phylogenetic trees
A cladogram is a diagram that summarizes a pattern of character distribution. Usually, a cladogram is drawn as a branching diagram (e.g., Fig. 1). The nodes denote a hierarchy of synapomorphies but there is no necessary implication of ancestry and descent. Cladograms may also be written in parenthetical notation or illustrated as a Venn diagram (Fig. 4a), which conveys the same grouping information as a branching diagram. In contrast, phylogenetic trees include a time axis and embody concepts of ancestry and descent with modification. In phylogenetic trees, the nodes denote ancestors (known or hypothetical) and the branches imply character change. Several phylogenetic trees may be compatible with the pattern of character distribution implied by a cladogram (Fig. 4b). Some of these trees allow the possibility that one or more taxa are ancestral to others. Only the phylogenetic tree that assumes all nodes represent hypothetical ancestors has the same topology as the cladogram. Thus, cladograms are more general than phylogenetic trees, which are precise statements about ancestry and descent.
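The parenthetical notation mentioned above is essentially the Newick format used by most phylogenetic software. A minimal sketch using the ape package, with a made-up four-taxon tree (this is not the tree of Fig. 1 or Fig. 4):

```r
# Parenthetical (Newick) notation for a cladogram and its branching-diagram
# equivalent, using the ape package.  The taxa are hypothetical.
library(ape)

cladogram <- read.tree(text = "(outgroup,(A,(B,C)));")
plot(cladogram)        # draw the branching diagram
write.tree(cladogram)  # convert back to parenthetical notation
```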
URL: https://www.sciencedirect.com/science/article/pii/B012226865200047X
Numerical Ecology
Pierre Legendre, Louis Legendre, in Developments in Environmental Modelling, 2012
10.7 Software
Functions in the R language are available to carry out all analyses described in this chapter.
1. Linear regression. — In package STATS, function lm() computes simple or multiple linear regression. Function step() used in conjunction with lm() offers model selection by AIC using a backward, forward, or stepwise strategy.
Functions lmodel2() of LMODEL2 and sma() of SMATR compute model II simple linear regressions. Function lmorigin() in APE computes regression through the origin with permutation test. Variance inflation factors are computed by function vif() of packages CAR and DAAG, applied to models computed by lm().
QR decomposition, carried out by function qr() of BASE, is an efficient method to compute coefficients in univariate or multivariate linear regression. Multivariate linear regression can be computed using either lm(), which takes either a single variable y or a whole matrix Y as the response data, or qr() after augmenting the explanatory matrix X with a column of 1's to estimate the intercept, producing matrix X+1. For example, the matrix of fitted values in multivariate regression can be computed as follows: fitted(lm(as.matrix(Y) ~ ., data = X)), or qr.fitted(qr(X+1), as.matrix(Y)).
Ridge regression is available in functions lm.ridge() of MASS, ridge() of SURVIVAL, and penalized() of PENALIZED. Generalized linear models are computed by function glm() of STATS. Among the generalized linear models, only logistic regression is discussed in detail in the present chapter; it is computed by glm(y ~ x, family = binomial(logit)). In STATS, function nls() computes nonlinear weighted least-squares estimates of the parameters of a nonlinear statistical model; optim() is a general-purpose nonlinear optimization function offering a variety of optimization algorithms.
2. Partial regression and variation partitioning. — Partial linear regression can be computed by function rda() of VEGAN. varpart() of VEGAN is used for variation partitioning; plot.varpart() plots a Venn diagram with fixed circle and intersection sizes. A Venn diagram with proportional circle and intersection sizes can be obtained with function venneuler() of package VENNEULER*. (A short sketch follows this list.)
3. Path analysis. — Structural equation modelling, which is a generalized form of analysis encompassing path analysis, is available in package SEM.
4. Matrix comparisons. — Simple Mantel tests are found in functions mantel.test() of APE and mantel.rtest() of ADE4. For simple and partial Mantel tests, use mantel() of VEGAN, mantel() of ECODIST, mantel.test() and partial.mantel.test() of NCF. protest() in VEGAN computes the Procrustes permutation test. anosim() in VEGAN computes the Anosim test. The MRM() function in ECODIST carries out multiple regression on distance matrices.
5. Fourth-corner problem. — Functions fourthcorner() and fourthcorner2() of ADE4 compute fourth-corner analysis; function rlq() of ADE4 carries out RLQ analysis.
6. Miscellaneous methods. — Function poly() of STATS computes ordinary or orthogonal polynomials, the latter of the degree specified by the user, from a data vector. The resulting monomial vectors are normalized (i.e. scaled to length 1, eq. 2.7) and made to be orthogonal to one another. Several packages contain functions for spline and Lowess smoothing, e.g. stats, splines and DierckxSpline.
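As a brief illustration of item 2, the following sketch partitions the variation of a simulated community matrix between two simulated tables of explanatory variables and plots the resulting Venn diagram; all data are artificial and serve only to make the example runnable.

```r
# Variation partitioning with a Venn-diagram plot (item 2 above); data simulated.
library(vegan)

set.seed(1)
Y  <- matrix(rpois(30 * 10, lambda = 3), nrow = 30)   # 30 sites x 10 "species"
X1 <- data.frame(env1 = rnorm(30), env2 = rnorm(30))  # environmental table
X2 <- data.frame(spa1 = rnorm(30), spa2 = rnorm(30))  # spatial table

vp <- varpart(Y, X1, X2)   # adjusted R^2 for fractions [a], [b], [c], [d]
vp
plot(vp)                   # Venn diagram with fixed circle and intersection sizes
# venneuler() of package VENNEULER could be used for proportional circle sizes.
```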
URL: https://www.sciencedirect.com/science/article/pii/B9780444538680500101
Estuarine and Coastal Ecosystem Modelling
J.N. King, in Treatise on Estuarine and Coastal Science, 2011
9.16.2 Generalized Component-Based Conceptual Model
Benthic flux
q_bf = f(q_bf.1, q_bf.2, …, q_bf.n) + ε     [1]
is a function of its components, which are forced by unique mechanisms (identified in eqn [1] with numerical subscripts) or by a combination of mechanisms. Error ε is also represented in the conceptual model. Figure 1 represents this generalized component-based conceptual model as a Venn diagram. Benthic flux is represented as the area within the blue ellipse. Each component, represented as a shape in the Venn diagram, explains a portion of the total. Processes that interact are represented in Figure 1 by overlapping shapes (q_bf.1 and q_bf.2, or q_bf.2 and q_bf.3). Processes that do not interact are represented by nonoverlapping shapes (q_bf.1 and q_bf.3). Processes that encompass other processes are represented by shapes that fall entirely within other shapes (q_bf.4 inside q_bf.3). Error ε is represented by the area within the ellipse that does not fall within a component shape.
Li et al. (1999) proposed the model
q_bd = q_bd.thg + q_bd.w.su + q_bd.t     [2]
in which q_bd is a linear sum of three components: a q_bd component forced by a terrestrial hydraulic gradient q_bd.thg, a q_bd component forced by wave setup q_bd.w.su, and a q_bd component forced by tide q_bd.t. Equation [2] is a specific implementation of eqn [1]. Li et al. did not mention limitations due to component interactions, or suggest other components that may also exist, such as many of the components detailed in Section 9.16.1.1.
Benthic flux components may interact linearly or nonlinearly. Equation [2] is a linear model. Where q_bf components interact nonlinearly, q_bf is not equivalent to the sum of the nonlinear components. For example, consider a q_bf system composed of two components q_bf.1 and q_bf.2. Clearly, q_bf = q_bf.1 in the absence of q_bf.2 and q_bf = q_bf.2 in the absence of q_bf.1. If it is determined that q_bf = q_bf.1 + q_bf.2 is valid, the system is linear. Alternately, if a model containing a squared term is found to be valid, the system is nonlinear. The latter system is nonlinear because it is not additive [f(x + y) ≠ f(x) + f(y)]. Conceptually, the presence of q_bf.2 within the nonlinear system increases the magnitude of q_bf.1.
Where q_bf components oscillate, such as q_bf.t and q_bf forced by surface gravity waves q_bf.w, component interactions may be constructive or destructive. For example, consider a linear q_bf system composed of two components q_bf.1 and q_bf.2. If it is determined that q_bf.1 = cos t and q_bf.2 = cos(t + π) (i.e., the components are out of phase by π), then q_bf = cos t + cos(t + π) = 0. The two signals cancel and exhibit destructive interference. If it is determined that q_bf.1 = cos t and q_bf.2 = cos t (i.e., the components are in phase), then q_bf = 2 cos t. The two signals reinforce and exhibit constructive interference.
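The cancellation and reinforcement described here can be verified numerically; the short sketch below is illustrative only.

```r
# Destructive and constructive interference of two oscillating components.
t <- seq(0, 4 * pi, length.out = 200)

q1              <- cos(t)
q2_out_of_phase <- cos(t + pi)   # out of phase by pi
q2_in_phase     <- cos(t)        # in phase

max(abs(q1 + q2_out_of_phase))   # ~0: the signals cancel (destructive interference)
range(q1 + q2_in_phase)          # -2 to 2: the signals reinforce (constructive interference)
```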
Benthic flux components may be steady state or transient. Where a flux component q_bf.1 is steady state, ∂q_bf.1/∂t = 0. Where a component q_bf.1 is transient, ∂q_bf.1/∂t ≠ 0.
Benthic flux components may be stochastic stationary or nonstationary. A process is stationary if the joint probability distribution remains constant when the distribution is shifted in space or time. Nonstationary processes have joint probability distributions that change when the distribution is shifted in time or space. A simple test for stationarity is that the mean and variance are not a function of time or position.
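The simple check mentioned above (that the mean and variance do not change with time or position) can be sketched as follows; the series is simulated, and the block comparison is only a rough diagnostic, not a formal stationarity test.

```r
# Rough check of stationarity: compare mean and variance across blocks of a
# time series.  The series below is simulated for illustration.
set.seed(42)
q_bf <- cumsum(rnorm(400))                   # a nonstationary (random-walk) series

blocks <- split(q_bf, rep(1:4, each = 100))  # four consecutive 100-step blocks
sapply(blocks, mean)                         # block means drift -> nonstationary
sapply(blocks, var)                          # block variances also change
```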
URL: https://www.sciencedirect.com/science/article/pii/B9780123747112009177
The World as Human–Environment Systems
Per Becker, in Sustainability Science, 2014
Tools for Structural Models
Explicit structural models in human–environment systems can be constructed using a variety of tools that can be categorized in many different ways. For the purpose of this book, I suggest the following three main categories: (1) Venn diagramming, (2) network diagramming, and (3) mapping.
The most basic form of explicit structural models in human–environment systems is referred to as Venn diagrams (Figure 7.6), named after John Venn, who was the first to conceive of graphs representing all possible logical relationships between limited collections of sets of elements (Venn, 1881). Venn diagramming is, in other words, about conceptualizing groups of elements that share common properties and their relative relationships. It is commonly used in a range of disciplines spanning from pure mathematics to ecology and is useful in several ways for addressing issues of risk, resilience and sustainability. In addition to the utility of Venn diagrams for grasping conditional probability, which I do not address in this book, the method is also helpful for plotting elements (most often agents in a community) and their most basic relationships (e.g. Wisner, 2006: 322; IFRC, 2007: 126–132). Venn diagramming in this context means drawing circles that represent different elements of the human–environment system and arranging them to symbolize their relationships, where the size of the circles signifies relative importance and their position signifies how closely they are related to each other. It is important to note that for this tool to have any meaning, it must most often allow for the participation of people involved in the part of the world that is represented by the resulting human–environment system.
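A bare-bones version of this kind of participatory Venn diagram can be drawn with base R graphics; the agents, their positions, and their relative importance below are entirely made up for illustration.

```r
# Participatory-style Venn diagram: circles sized by relative importance and
# positioned by how closely agents are related.  All values are hypothetical.
agents     <- c("community", "local NGO", "municipality", "water utility")
importance <- c(3.0, 1.5, 2.0, 1.0)   # relative importance -> circle radius
x          <- c(5, 3.5, 7, 7.5)       # closeness expressed by position
y          <- c(5, 6.5, 6, 3.5)

plot(NA, xlim = c(0, 10), ylim = c(0, 10), asp = 1,
     xlab = "", ylab = "", axes = FALSE,
     main = "Venn diagram of agents (illustrative)")
symbols(x, y, circles = importance / 2, inches = FALSE, add = TRUE)
text(x, y, agents, cex = 0.8)
```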
The next step in terms of complexity of structural models in human–environment systems is network diagramming, which spans a wide range of tools used in various disciplines. What holds these tools together is their focus on representing the structure of some part of the world as networks of elements (also referred to as vertices or nodes) and dependencies between elements (also referred to as edges, arcs or lines). Examples of such tools include most forms of social network analysis (e.g. Wasserman & Faust, 1994) and graph theory (e.g. Balakrishnan & Ranganathan, 2012; Holmgren, 2006). What distinguishes network diagramming from Venn diagramming is that it allows for more detail in the description of each element and focuses explicitly on the dependencies between elements. Network diagramming can include both qualitative and quantitative analysis, and can involve different element variables and different types of dependencies in the same model. Although network diagramming can include geographical dependencies as long as they are defined as relationships between elements, the resulting model is not spatial. For that, another category of tools is needed.
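A minimal network-diagramming sketch using the igraph package; the elements and dependencies are invented for illustration.

```r
# Network diagramming: elements (vertices) and dependencies (edges).
# The element names and links below are hypothetical.
library(igraph)

edges <- data.frame(
  from = c("households", "households", "water utility", "municipality"),
  to   = c("water utility", "municipality", "river", "river")
)
g <- graph_from_data_frame(edges, directed = FALSE)

degree(g)   # number of dependencies per element
plot(g)     # draw the network diagram
```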
Many central aspects of risk, resilience and sustainability are spatial in the sense of being associated with geographical locations. There are again many different tools developed to explicitly include such features in the construction of human–environment systems, here categorized as different forms of mapping. Mapping is about generating visual representations of observations using spatial relationships. It includes the practice of producing scaled representations of geographical features, such as the maps of traditional cartography, but is not limited to that. It spans from participatory mapping (e.g. Anderson & Holcombe, 2013: 165–207; IFAD, 2009) to high-tech geospatial data collection, such as LiDAR (a remote sensing technology illuminating a target with laser light and analyzing the reflected light) (e.g. Heywood et al., 2006: 60–61). Mapping is thus in this context a broad term that encompasses all spatial organization of information concerning human–environment systems, using a wide range of symbolic representations from drawings to standardized map legends. Many tools have been developed over the years to guide various forms of mapping. Ground- and sketch mapping are two examples of participatory mapping tools that plot maps from the memory of involved community members (IFAD, 2009: 13–14), and Geographical Information Systems have opened up unprecedented opportunities for analyzing and visualizing geospatial information (e.g. Condorelli & Mussumeci, 2010; Dilekli & Rashed, 2007; Fedeski & Gwilliam, 2007; Gravelle & Mimura, 2008; Heywood et al., 2006; Tran et al., 2009).
URL: https://www.sciencedirect.com/science/article/pii/B9780444627094000075
Numerical Ecology
Pierre Legendre, Louis Legendre, in Developments in Environmental Modelling, 2012
14.5 Other eigenfunction-based methods of spatial analysis
This section describes additional statistical methods based on spatial eigenfunctions that were not covered in the previous sections.
1 Space-time interaction
A commonly used approach to test hypotheses about natural or man-made environmental changes, including climate change, is to sample portions of ecosystems repeatedly over time. This type of sampling is usually done without replication of sites; in this way, the sampling effort can be spent on maximizing the expanse of space covered by the study. If the sampling sites and times are represented by dummy variables or Helmert contrasts, as in paragraphs 3 and 4 of Subsection 11.1.10, one can use canonical analysis to study the effect of the sites on species composition while controlling for the effect of time, and vice versa. An important limit of this approach is that the interaction between space and time cannot be estimated for lack of replicates. Assessing that interaction is, however, of great interest in such studies because a significant interaction would indicate that the spatial structure of the univariate or multivariate response data has changed through time, and conversely that the temporal variations differed significantly among the sites, thus indicating, for example, the signature of climate change on ecosystems.
STI
Legendre et al. (2010) described a statistical method to analyse the interaction between the space (S) and time (T) factors in space-time studies without replication; the acronym of the method is STI (for space-time interaction). The method can be applied to multivariate response data, e.g. ecological community composition, through partial RDA. The method consists in representing the space and/or time factors by spatial and/or temporal eigenfunctions (MEM, Sections 14.1 and 14.2, or AEM, Section 14.3). It is not necessary to represent both space and time by eigenfunctions: for example, if there are many sites and only a few sampling times, e.g. 2 or 3, spatial relationships may be coded using spatial eigenfunctions and temporal relationships using dummy variables or Helmert contrasts. Coding the space and/or time factors by spatial and/or temporal eigenfunctions requires fewer coding variables than dummy variables or Helmert contrasts. The interaction can be represented by variables obtained by computing the Hadamard product of each eigenfunction that codes for space with each eigenfunction that codes for time. Enough degrees of freedom are saved to correctly estimate the residual fraction of variation and test the significance of the interaction term.
The above paper gives details about the computation method. The R package STI is available to carry out the calculations (Section 14.7). The paper also contains two applications to real species assemblage data: an analysis of Trichoptera (insects, 56 species) emerging from a stream and captured in 22 emergence traps during 100 days, grouped into 10 consecutive 10-day periods, and a study of four surveys conducted between 1982 and 1995 in the Barro Colorado Island permanent forest plot (315 species of trees). Another application is found in Laliberté et al. (2009) where tree seedling abundances at 40 sites along a transect in a temperate forest understory, monitored during a 9-year period, were analysed for space-time interaction. The analysis of spatio-temporal data is also discussed in Cressie & Wikle (2011).
2 Multiscale codependence analysis
A causal relationship between an explanatory (x) and a response variable (y) across space implies that the two variables are correlated. When the correlation between x and y is not significant, the causal hypothesis must be abandoned. Conversely, a significant correlation can be interpreted as support of the causal hypothesis that x may have an effect on y. Given the multiscale nature of ecological processes, one may wonder at which scales x is an important predictor of y. The same question can be asked about pairs of variables forming a bivariate time series; for simplicity, the presentation here will focus on space.
MCA
Guénard et al. (2010) developed multiscale codependence analysis (MCA) to address the above question and test the significance of the correlations between two variables at different spatial scales. The method is based on spatial eigenfunctions, MEM or AEM, which correspond to different and identifiable spatial scales: indeed, a Moran's I statistic (eq. 13.1) can be computed for each eigenfunction. If the sampling is regular along a transect, eq. 14.1 can be used to determine the wavelengths of the k eigenfunctions, which are assembled in a matrix called W, of size n × k. Correlation coefficients are computed between y and each of the k eigenfunctions, and written in a vector r_yW of length k. Similarly, correlation coefficients are computed between x and each of the k eigenfunctions, and written in a vector r_xW. The Hadamard product of the two vectors, r_yW and r_xW, is the vector of codependence coefficients, which reflect the strength of the x-y correlations at the different scales represented by the eigenfunctions in matrix W. Each codependence coefficient can be tested for significance using a τ (tau) statistic obtained by computing the product of the t-statistics associated with the two correlation coefficients. The testing procedure is described in the paper. An R package is available for the calculations (Section 14.7).
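The core computation described above (correlating y and x with each eigenfunction, then taking the Hadamard product of the two correlation vectors) can be sketched generically. The data and the sine-wave stand-ins for MEM eigenfunctions below are assumptions, and the τ test provided by the package mentioned in Section 14.7 is omitted.

```r
# Sketch of the multiscale codependence idea: correlations of y and x with each
# spatial eigenfunction, then their Hadamard product.  Data are simulated and
# sine waves stand in for MEM eigenfunctions.
set.seed(2)
n   <- 50
pos <- 1:n                                                  # regular transect
W   <- sapply(1:4, function(k) sin(pi * k * pos / (n + 1))) # surrogate eigenfunctions

x <- W[, 2] + rnorm(n, sd = 0.3)   # predictor structured at the second scale
y <- 0.8 * x + rnorm(n, sd = 0.3)  # response driven by x

r_yW <- cor(y, W)                  # correlations of y with each eigenfunction
r_xW <- cor(x, W)                  # correlations of x with each eigenfunction
codependence <- as.vector(r_yW * r_xW)   # Hadamard product: one value per scale
round(codependence, 2)             # largest at the scale shared by x and y
```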
In the above paper, the method was applied to model the river habitat of juvenile Atlantic salmon (parr). MCA showed that variables describing substrate composition of the river bed were the most influential predictors of parr abundance at the 0.4 – 4.1 km scales whereas mean channel depth was more influential at the 200 – 300 m scales. This example shows that when properly assessed, the multiscale structuring observed in nature may be used to refine our understanding of natural processes.
3 Estimating and controlling for spatial structure in modelling
The examples and applications reported in Sections 14.1 to 14.3 show that spatial eigenfunctions can efficiently model all kinds of spatial structures in data. Can they be used to find a solution to the problem described in Subsection 1.1.2, that spatial correlation inflates the level of type I error in tests of species-environment relationships in regression and canonical analysis?
A species-environment relationship after controlling for spatial structure can be represented by fraction [a] in a Venn diagram (e.g. Fig. 10.10) showing the partitioning of the variation of the response data, univariate y or multivariate Y, with respect to environmental (left circle) and spatial variables (right circle). A real example is shown in Fig. 14.7. Using numerical simulations, Peres-Neto & Legendre (2010) showed that spatial eigenfunctions provided an effective answer to the problem. Firstly, one must determine if the spatial component of y or Y is significant. This can be done by regression of y, or canonical analysis of Y, against all MEM spatial predictors, or by univariate (for y) or multivariate (for Y) variogram analysis. Secondly, if the spatial component is significant, one can select a subset of spatial predictors, and use the environmental variables (X) and the selected spatial predictors (covariables W) in a partial regression (for y, Subsection 10.3.5) or partial canonical analysis (for Y, Subsection 11.1.6).
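A minimal sketch of this two-step procedure with vegan; the response matrix, the environmental variable, and the matrix W standing in for selected MEM eigenfunctions are all simulated assumptions.

```r
# Two-step control for spatial structure (fraction [a]): test the spatial
# component first, then condition on the spatial predictors.  Simulated data;
# in practice W would hold MEM eigenfunctions computed from site coordinates.
library(vegan)

set.seed(3)
n <- 40
W <- sapply(1:5, function(k) sin(pi * k * (1:n) / (n + 1)))  # stand-in spatial predictors
X <- data.frame(env = W[, 1] + rnorm(n))                     # spatially structured environment
Y <- matrix(rpois(n * 6, lambda = exp(0.5 * X$env + 1)), ncol = 6)

# Step 1: is the spatial component of Y significant?
anova(rda(Y ~ W), permutations = 999)

# Step 2: species-environment relationship with spatial covariables (fraction [a]).
frac_a <- rda(Y ~ env + Condition(W), data = X)
anova(frac_a, permutations = 999)
```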
For the analysis of community composition data, the authors found that a species-by-species forward selection procedure, described in their paper, was to be preferred to a global, community-based selection. In this method, eigenfunctions are selected for each species independently, and the union of the selected sets is used as the matrix of MEM covariables in canonical analysis. This provides an effective method of control for type I error in the assessment of species-environment relationships. The paper also showed that polynomial regressors (Subsection 13.2.1) did not produce tests of significance with correct levels of type I error.
The Peres-Neto & Legendre (2010) paper provides theoretical support to the effect observed in Ecological application 14.4, that MEM used as covariables in canonical analysis effectively controlled for the spatial correlation observed in the species-environment relationship in the first part of the analysis of the mite data.
URL: https://www.sciencedirect.com/science/article/pii/B9780444538680500149
Participation
S. Hickey, U. Kothari, in International Encyclopedia of Human Geography, 2009
Participatory Methods: A Focus on Research
Although participatory development incorporates a range of processes and practices, tools, and techniques, it is most commonly associated with PRA. PRA can be used to identify and assess community needs and priorities for, and the feasibility of, development activities, as well as for monitoring and evaluating their impact. It can also inform continuous readjustment of the program as a consequence of the information gathered. A significant characteristic of PRA methods is the emphasis on visual methods of collecting and analyzing data, primarily to ensure that less literate people can be fully involved in the process. These methods include diagramming (seasonal calendars, timelines, Venn diagrams, etc.), mental and social mapping and modeling, transects and historical timelines, ranking and scoring preferences, observation, focus groups, and role play.
Participatory training provides concrete skills in these methods as well as highlighting their potentials and limitations. Importantly, research training manuals emphasize the need for practitioners to adopt particular sensibilities to empower community members to express, share, enhance, and analyze their knowledge, including, for example, showing respect for local people and interest in what they know.
In this approach, informants are not objects of study but participants in the research process, and therefore problems need to be understood from their point of view. A priority within participatory approaches is to limit the separation between the collection and analysis of information. Thus research is not a mechanical procedure of information gathering in which data are collected in one place and then analyzed 'back home', but an iterative and flexible process in which information is collected and analyzed in the 'field' and the issues that arise feed back into the process. Although indigenous knowledge is prioritized, triangulation is a key component of the research process, whereby information sources, informants, and methods are cross-checked to incorporate different people's perspectives and different methods. Indeed, one of the key principles of PRA is the offsetting of biases not only among informants but also among research facilitators. Participatory practitioners are encouraged to consider and address the different understandings of interviewer and interviewee about the purpose of the enquiry and the ways in which their relationship shapes the kinds of data gathered. Furthermore, they need to be aware of the dynamics of group activities, to ensure that information generated by different social groups with potentially conflicting viewpoints has been adequately represented, and to be cognizant of any groups that have been left out of the process and of the power dynamics that shape different people's participation and interests. This has resulted in a version of participation geared toward knowledge production as a means of catalyzing social change and challenging dominant relations of power.
URL: https://www.sciencedirect.com/science/article/pii/B9780080449104001139
Source: https://www.sciencedirect.com/topics/earth-and-planetary-sciences/venn-diagrams